Upload or paste the source
Use the console to paste extracted text or upload a supported file. The current engine supports text, HTML, CSV, and JSON inputs; PDFs, with OCR fallback for scanned pages; and Excel workbooks.
This is our working compliance product build. The immediate goal is to solve the financial-reporting review problem with a system people can actually use: gate access, ingest files, parse the contents, decide whether the input is financial, structure the sections, validate them against rules, answer reviewer questions with AI, and export explainable outputs before we host it live.
The practical objective is simple: ingest documents, decide whether they are financial, structure them, validate them against rule packs, and return explainable outputs that a reviewer can use immediately.
Enter reviewer details to begin a controlled session for document analysis, AI-assisted questions, and report export. This keeps each review traceable and tied to the right operator context.
Once the session starts, the tool accepts pasted text, PDF uploads, and Excel uploads, then triages the document, checks its structure, runs the rule pack, optionally calls the AI layer, and exports an explainable reviewer report.
This page is now the operator guide for the current build. The workflow below explains exactly how to use the engine from input to output.
Before using the engine, the reviewer enters their details so each review can be tracked cleanly and tied to the correct context.
Before the rules pass runs, the system decides whether the content actually looks like a financial-reporting document. That prevents obviously non-financial inputs from being treated as valid reporting material.
The engine detects headings, segments the document, checks for the required reporting sections, and assigns pass, review, or fail signals with direct evidence.
The result is not just a score. It tells the reviewer what was found, what was missing, how the document was classified, and where to inspect next.
The AI layer is optional. When enabled, it can summarize the result, highlight likely risks, propose remediation steps, or answer a direct reviewer question using the extracted evidence and current analysis.
Once the result is ready, export the HTML reviewer report. It is designed for handoff and can be saved or printed to PDF if a PDF copy is required.
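The export step can be sketched as a small renderer that turns per-rule results into a standalone HTML page. This is an illustrative sketch only: the field names (`section`, `verdict`, `evidence`) and layout are assumptions, not the engine's actual report format.

```python
# Illustrative sketch: render rule results as a standalone HTML
# reviewer report. Field names and structure are assumptions.
import html

def render_report(doc_name: str, results: list) -> str:
    rows = "".join(
        "<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(
            html.escape(r["section"]),
            html.escape(r["verdict"]),
            html.escape(r["evidence"]))
        for r in results
    )
    return (
        "<html><body><h1>Reviewer report: {}</h1>"
        "<table><tr><th>Section</th><th>Verdict</th><th>Evidence</th></tr>"
        "{}</table></body></html>"
    ).format(html.escape(doc_name), rows)
```

Because the output is plain HTML, the reviewer can open it in any browser and print to PDF when a PDF copy is required.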
The release path is simple: validate the output quality first, then move the engine into a hosted environment with the right storage, audit, and access controls.
We are building the engine in sequence: ingestion first, structured validation second, then the review interface and deployment path.
Freeze the MVP boundary against the brief so we solve the evaluation problem first, rather than building a broader product too early.
Build the ingestion and extraction pipeline for file intake, PDF/XLSX parsing, text normalization, and structured output from mixed-format financial documents.
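The intake stage above can be sketched as a dispatch table keyed on file extension, feeding a shared normalization step. This is a sketch under stated assumptions: the parser hooks for PDF and XLSX are stubbed as comments, since real parsing and the OCR fallback would sit behind them.

```python
# Illustrative sketch: route an uploaded file to a parser by extension,
# then normalize the extracted text. PDF/XLSX parsing and the OCR
# fallback are assumed hooks, not implemented here.
import pathlib
import re

def normalize(text: str) -> str:
    """Collapse runs of whitespace and trim the result."""
    return re.sub(r"\s+", " ", text).strip()

PARSERS = {
    ".txt": lambda data: data.decode("utf-8", errors="replace"),
    ".csv": lambda data: data.decode("utf-8", errors="replace"),
    # ".pdf": pdf_extract_with_ocr_fallback,   # assumed hook
    # ".xlsx": workbook_to_text,               # assumed hook
}

def ingest(filename: str, data: bytes) -> str:
    ext = pathlib.Path(filename).suffix.lower()
    parser = PARSERS.get(ext)
    if parser is None:
        raise ValueError("unsupported input type: " + ext)
    return normalize(parser(data))
```

Keeping normalization in one place means every downstream stage (triage, segmentation, rules) sees the same cleaned text regardless of the input format.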
Segment documents into logical sections, attach metadata, then build the first rule-matching framework for completeness, integrity, and compliance checks.
Produce clear outputs that show which provisions passed, which failed, what evidence drove the result, and what can be handed off as a reviewer report.
Test on representative datasets, measure extraction and rule quality, and keep refining until the system is stable enough to host with confidence.
The hosted UI is secondary. The extraction and rule chain is the real product.
Once these are fixed, the work can move from planning into real module implementation.
The engine should never return a score without context. Every output block exists to support an operational decision.
The top result card tells you whether the document is largely compliant, needs review, or has material gaps. The coverage score is a structural signal, not a final legal opinion.
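One way to read this relationship is that the coverage score is derived from per-section verdicts and then banded into the three result-card states. The thresholds below are assumptions for illustration, not the engine's actual cutoffs.

```python
# Illustrative sketch: derive a coverage score from per-section
# verdicts and band it into a result-card state. Thresholds are
# assumptions, not the engine's real cutoffs.
def coverage(verdicts: dict) -> float:
    """Fraction of required sections with a passing verdict."""
    if not verdicts:
        return 0.0
    return sum(1 for v in verdicts.values() if v == "pass") / len(verdicts)

def result_band(score: float) -> str:
    if score >= 0.8:
        return "largely compliant"
    if score >= 0.5:
        return "needs review"
    return "material gaps"
```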
The triage layer classifies the input as financial-reporting, likely-financial, or unclear/non-financial. It also recommends whether to proceed with the current rule pack or stop for manual review.
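A minimal version of that classification can be sketched as keyword scoring over the extracted text. The term list and thresholds here are assumptions chosen for illustration; only the three output labels come from the product description.

```python
# Illustrative sketch: keyword-based triage. The term list and
# thresholds are assumptions; only the labels match the product.
FINANCIAL_TERMS = [
    "balance sheet", "income statement", "cash flow",
    "retained earnings", "auditor", "revenue", "liabilities",
]

def triage(text: str) -> str:
    """Classify text as financial-reporting, likely-financial, or unclear."""
    lowered = text.lower()
    hits = sum(1 for term in FINANCIAL_TERMS if term in lowered)
    if hits >= 4:
        return "financial-reporting"
    if hits >= 2:
        return "likely-financial"
    return "unclear/non-financial"
```

A real triage layer would weigh structure and numerics too, but even this shape shows why the stop-for-manual-review recommendation can be made before any rule pack runs.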
Each rule check shows the required section, the engine verdict, the evidence used, and what was missing or weak. This is the part that makes the result explainable to an operator.
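The shape of one such check can be sketched as a function that returns the verdict alongside the evidence it used, so nothing is opaque to the operator. The minimum-word threshold and field names are assumptions for illustration.

```python
# Illustrative sketch: check one required section and return an
# explainable verdict with the evidence used. The word threshold
# and field names are assumptions.
def check_section(name: str, sections: dict, min_words: int = 30) -> dict:
    body = sections.get(name)
    if body is None:
        return {"section": name, "verdict": "fail",
                "evidence": "", "note": "required section not found"}
    words = len(body.split())
    if words < min_words:
        return {"section": name, "verdict": "review",
                "evidence": body[:120],
                "note": "section present but thin ({} words)".format(words)}
    return {"section": name, "verdict": "pass",
            "evidence": body[:120], "note": ""}
```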
The section map shows the headings the engine recognized, how many words were found under each section, and a short excerpt so the reviewer can validate the segmentation quickly.
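The section map above can be sketched as a single pass over the text that splits on heading-like lines and records the heading, word count, and a short excerpt. The all-caps heading heuristic is an assumption; the real engine's heading detection is richer.

```python
# Illustrative sketch: split text on ALL-CAPS heading lines and build
# section-map entries (heading, word count, excerpt). The heading
# heuristic is an assumption for illustration.
def section_map(text: str) -> list:
    sections, current, buf = [], None, []

    def flush():
        if current is not None:
            body = " ".join(buf).strip()
            sections.append({"heading": current,
                             "words": len(body.split()),
                             "excerpt": body[:80]})

    for line in text.splitlines():
        stripped = line.strip()
        is_heading = (stripped and stripped == stripped.upper()
                      and len(stripped) < 60
                      and any(c.isalpha() for c in stripped))
        if is_heading:
            flush()
            current, buf = stripped, []
        else:
            buf.append(stripped)
    flush()
    return sections
```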
When AI review is enabled, the AI layer adds a second opinion on top of the rules output. It can frame the output as a summary, a risk review, a remediation plan, or a direct answer to a reviewer question.
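The four framings can be sketched as a prompt builder that wraps the deterministic evidence before any model call. No model is invoked here, and the instruction wording is an assumption; only the four modes come from the product description.

```python
# Illustrative sketch: frame the deterministic output for the AI layer
# in one of the four described modes. No model call is made; the
# instruction wording is an assumption.
MODES = {
    "summary": "Summarize the compliance result for a reviewer.",
    "risk": "Highlight the likely risks in this result.",
    "remediation": "Propose remediation steps for the failed checks.",
    "question": "Answer the reviewer's question using only the evidence.",
}

def build_prompt(mode: str, evidence: str, question: str = "") -> str:
    prompt = MODES[mode] + "\n\nEvidence:\n" + evidence
    if mode == "question":
        prompt += "\n\nQuestion: " + question
    return prompt
```

Keeping the evidence in the prompt, rather than the raw document, is what lets the AI layer stay a second opinion on top of the rules output instead of an independent judgment.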
The output should always answer the practical question: can the reviewer proceed, should the reviewer confirm the document type first, or should the input be rejected and corrected before review continues?
The product now has two layers: the deterministic compliance layer and the optional AI interpretation layer. The second layer should help classify, explain, and answer questions on top of the first.
This is the practical capability map for the engine right now: intake, document triage, structural validation, explainable results, and a path into stronger hosted infrastructure.
Hosting is not the first milestone. The engine should earn production use by producing stable, explainable, reviewer-friendly output first.
Lock the extraction, triage, and validation chain so the output is dependable
Add persistent storage, audit history, and reviewer-level access control
Expose the hosted workflow through the Future AI website under Labs
Package the hosted path and supporting documentation for submission and rollout