The future of financial reporting is not a chart.

Dan Meshkov · Mar 01, 2026

Answers matter more

It's 4:47 PM on a Thursday. The CFO wants to know, "Why is marketing 23% over budget this quarter?" The controller pulls up the dashboard. There it is: a bar chart, spend by department, with marketing glaringly over budget. But the chart doesn't say why.

So someone has to open a ticket for the data team. The data team writes a query. They analyze the results. Finance finally gets an answer two days later.

At Brex, we believe answers should take two seconds. So we built Spaces, an AI-powered workspace for finance teams that goes beyond charts to deliver powerful insights, instantly.

Spaces in action:

[Animated demo of Spaces reporting]

Traditional reports are a bottleneck

Every finance team runs on reports. Budget tracking, spend breakdowns, compliance flags, board prep: they're the operating system of financial decision-making. And yet the tools behind them haven't fundamentally changed in two decades. Pre-built dashboards are fast but rigid. Ad-hoc SQL is flexible but requires a data team and a ticket queue. Neither scales with how fast business actually moves.

The pain compounds at exactly the wrong moment. When someone asks a new question, the entire cycle restarts: file a request, wait for an analyst, validate the output, paste it into a slide. The people who need answers fastest are the ones waiting longest.

We saw an opportunity to break that loop entirely: give every Brex customer an AI financial analyst embedded in the product, one that understands their data, speaks their language, and delivers answers in seconds.

Why we started with ontology, not UI

Before we wrote a line of product code, we had to solve a problem that doesn't show up in any mockup: the AI needs to understand financial data the way a senior analyst does.

Raw database schemas are meaningless to an LLM. Column names like amt_usd or dept_id carry no business context. But a semantic layer (a structured model that knows "spend" is a measure, "department" is a dimension, and "budget variance" is spend minus allocation) gives the AI the same fluency a finance hire develops over months of onboarding.
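To make the idea concrete, here is a deliberately tiny sketch of what that mapping buys you. The names (amt_usd, dept_id, transactions) are invented for illustration, and a real semantic layer like Cube is far richer, but the principle is the same: the agent requests business concepts and the layer supplies correct SQL.

```python
# A toy semantic layer: business vocabulary mapped to physical SQL.
# All table and column names here are hypothetical.
MODEL = {
    "measures": {"spend": "SUM(amt_usd)"},
    "dimensions": {"department": "dept_id"},
    "table": "transactions",
}

def compile_query(measure: str, dimension: str) -> str:
    """Turn a (measure, dimension) request into executable SQL."""
    m = MODEL["measures"][measure]
    d = MODEL["dimensions"][dimension]
    return (
        f"SELECT {d} AS {dimension}, {m} AS {measure} "
        f"FROM {MODEL['table']} GROUP BY {d}"
    )
```

Asking for "spend by department" becomes compile_query("spend", "department"), and the agent never has to guess what amt_usd means.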

This is what lets Spaces understand Brex-specific concepts natively: card programs, expense policies, entity structures, approval workflows. For our customers, the quality of this model determines the quality of the answers they get. A well-modeled ontology is the difference between "I don't understand that question" and a correct, contextualized answer with a chart and a clear explanation.

Building this layer for an external product means it has to be multi-tenant by design: every customer's data is isolated, but the semantic model is shared. It has to be domain-specific too: we encode Brex financial concepts as first-class dimensions and measures, which is what makes Spaces a Brex product and not a generic BI tool. And it has to evolve as we ship new card types, policy controls, and integrations.
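One way to picture the multi-tenant requirement, as a rough sketch with an invented customer_id column: the model is shared, but every compiled query is scoped to exactly one tenant before it runs.

```python
# Hypothetical sketch: a shared model, per-tenant isolation injected at
# compile time. Real systems enforce this in the query planner rather
# than by string manipulation; this only illustrates the invariant.
def scope_to_tenant(sql: str, tenant_id: str) -> str:
    """Inject a tenant filter so no query can cross customer boundaries."""
    if " GROUP BY " in sql:
        head, tail = sql.split(" GROUP BY ", 1)
        return f"{head} WHERE customer_id = '{tenant_id}' GROUP BY {tail}"
    return f"{sql} WHERE customer_id = '{tenant_id}'"
```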

Few companies have built production semantic layers for external, AI-native products at this scale. Shipping one in an external product, where accuracy expectations are higher, UX needs polish, and customer trust is on the line, is a frontier we're actively navigating.

The product bet: insights over charts

The shift we're making comes down to one belief: the future of reporting is not a chart. It's an insight.

Charts are presentation. Insights are understanding. A bar chart showing spend by department is useful. But an AI that tells you why is a different category of product.

Spaces delivers both, but the insights block is the real breakthrough. The AI doesn't just query data and render a visualization. It analyzes what it finds, surfaces anomalies, and summarizes them in natural language: what happened, why it likely matters, and what to look at next. It contextualizes results against history, benchmarks, and expectations.
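The mechanical core of that behavior can be sketched in a few lines. The 10% threshold, the figures, and the wording below are all invented for illustration; the real insight block draws on much richer context.

```python
# Toy insight block: compare actuals to budget and narrate the outliers.
def budget_insights(actuals: dict, budgets: dict,
                    threshold: float = 0.10) -> list:
    """Return natural-language notes for departments over budget."""
    notes = []
    for dept, actual in actuals.items():
        allowed = budgets[dept]
        variance = (actual - allowed) / allowed
        if variance > threshold:  # flag anything more than 10% over
            notes.append(
                f"{dept} is {variance:.0%} over budget "
                f"(${actual:,.0f} vs ${allowed:,.0f})."
            )
    return notes

print(budget_insights({"marketing": 123_000, "engineering": 98_000},
                      {"marketing": 100_000, "engineering": 100_000}))
# ['marketing is 23% over budget ($123,000 vs $100,000).']
```

A production insight block also contextualizes against history and benchmarks, but the shape is the same: compute, filter, narrate.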

For a CFO reviewing quarterly numbers, the insight block acts as the analyst who would normally annotate a slide deck. For a controller monitoring daily spend, it eliminates the need for a manual scan across dozens of cost centers.

This is where AI-native reporting diverges from traditional BI. Dashboards show you data. Spaces tells you what the data means.

From vibe-coded prototype to production

To build, we started by learning. We evaluated semantic layer providers, including Cube, dbt Semantic Layer, LookML, and others, with AI readiness as the primary filter. Does the provider have an AI agent layer? Can we inject context like certified queries and prompt guidance? Does the semantic model support the expressiveness our product needs?

We stood up an open-source Cube deployment for hands-on learning. The goal was to validate a thesis: can an AI agent, backed by a semantic layer, reliably answer real financial questions?

Then we built what we'll charitably call a "vibe-coded" internal prototype to test the full experience. Internal teams at Brex were the first users: finance, ops, analytics. The feedback reshaped the product. Users didn't just want a single answer. They wanted to refine, filter by date, drill into a segment, compare periods. Conversational refinement became a core product pattern.

We then took the prototype to external customers. The interaction patterns held up. The infrastructure didn't. So we rebuilt for production scale, multi-tenancy, and reliability.

Context engineering: where the moat lives

Much like Claude Code for software development, AI data agents are becoming a capability companies don't need to build from scratch. The LLM can write SQL. The differentiator is context engineering: how you shape what the agent knows and how it reasons.

We invest in three layers of context:

The proprietary semantic layer is the foundation. Our Cube models encode Brex's financial domain: what "spend" means, how departments roll up, what compliance thresholds are, how entity structures map to accounting hierarchies. This is what turns a generic SQL agent into a Brex financial analyst.

Certified queries are pre-validated query patterns the AI can reference — gold-standard SQL, tested results, approved by domain experts. When a customer asks something similar, the agent adapts from a certified query rather than generating from scratch. This dramatically reduces hallucination and improves consistency for the financial questions customers ask most.
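A deliberately simplified sketch of the retrieval step. The questions, SQL, and token-overlap scoring here are invented; a production system would match with embeddings, but the fallback logic is the point.

```python
# Toy certified-query lookup: prefer an expert-approved pattern over
# free generation when the question is close enough to a known one.
CERTIFIED = [
    {"question": "spend by department this quarter",
     "sql": "SELECT dept, SUM(amount) FROM spend WHERE quarter = :q GROUP BY dept"},
    {"question": "budget variance by department",
     "sql": "SELECT dept, SUM(amount) - SUM(budget) FROM spend GROUP BY dept"},
]

def best_certified(question: str, min_overlap: int = 2):
    """Return the closest certified pattern, or None to fall back to generation."""
    q = set(question.lower().split())
    scored = [(len(q & set(c["question"].split())), c) for c in CERTIFIED]
    score, match = max(scored, key=lambda t: t[0])
    return match if score >= min_overlap else None
```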

Prompt engineering and guidance shape the agent's behavior at the system level: how to generate insights (not just data), when to flag anomalies, how to contextualize results against historical patterns, and what output format constraints to follow. The prompt layer evolves continuously alongside the semantic model.
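How the three layers come together can be sketched as straightforward context assembly; the section names and wording below are invented, not the actual prompt.

```python
# Toy context assembly: semantic model docs, certified examples, and
# behavioral guidance composed into one system prompt for the agent.
def build_system_prompt(model_docs: str, certified: list,
                        guidance: str) -> str:
    sections = [
        "You are a financial analyst answering questions over this "
        "customer's data.",
        "## Semantic model\n" + model_docs,
        "## Certified query examples\n" + "\n".join(certified),
        "## Guidance\n" + guidance,
    ]
    return "\n\n".join(sections)
```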

The way we think about it: the LLM is the engine, the semantic layer is the map, certified queries are the landmarks, and prompting is the driving style. Each layer compounds the others.

Content architecture:

[Diagram: Spaces reporting content architecture]

Shipping AI to customers with confidence

You can't ship AI-native financial reporting without rigorous quality assurance. The stakes are too high; customers are making real decisions based on these numbers. We built an evaluation system organized around three pillars.

Tool execution asks: does the agent produce valid, executable SQL? Does it query the right tables and apply correct filters? Are the results semantically correct for what the user asked? If the SQL is wrong, nothing downstream matters.

Data presentation evaluates the product's core value. Are the insights relevant and non-speculative? Does the AI correctly identify anomalies? Is the chart type appropriate? Are insights consistent with the underlying numbers? This is where we test not "did it run" but "did it help."

Style and structure ensures product polish at scale. Does the response follow format requirements? Are the cards structured correctly? Is the presentation clean and consistent?

The evaluation suite covers more than 20 real-world financial scenarios, such as revenue breakdowns, compliance violations, spend outliers, and budget tracking. It runs before every significant change to the semantic model or agent integration. It's eval-driven development applied to AI product quality.
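In spirit, the harness looks something like this. The scenario contents, check functions, and stub agent are invented stand-ins; the structure mirrors the three pillars above.

```python
# Toy eval harness organized around the three pillars. The "agent" is a
# stub here; in practice it would be the real agent under test.
def run_evals(agent, scenarios):
    results = []
    for s in scenarios:
        answer = agent(s["question"])
        results.append({
            "scenario": s["name"],
            "tool_execution": s["check_sql"](answer["sql"]),
            "data_presentation": s["check_insight"](answer["insight"]),
            "style": answer["format"] == s["expected_format"],
        })
    return results

# A stub agent and one scenario, purely to show the harness shape.
def stub_agent(question):
    return {"sql": "SELECT dept, SUM(amt) FROM spend GROUP BY dept",
            "insight": "marketing is 23% over budget",
            "format": "card"}

SCENARIOS = [{
    "name": "budget tracking",
    "question": "why is marketing over budget?",
    "check_sql": lambda sql: sql.strip().upper().startswith("SELECT"),
    "check_insight": lambda text: "budget" in text,
    "expected_format": "card",
}]
```

Each pillar yields a pass/fail per scenario; a regression in any pillar blocks the change from shipping.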

What we've learned so far

Spaces is a product in active development, and the frontier of AI-native financial reporting shifts every month. But a few convictions have solidified:

The future of reporting is insights, not charts. AI that finds anomalies and summarizes what matters is a step change from dashboards that display data. The chart is the artifact. The insight is the product.

The semantic layer is a product feature. For external, AI-powered products, ontology quality directly determines customer experience.

Context engineering is the moat. Certified queries, prompt engineering, and a proprietary semantic model turn a generic AI agent into a domain expert. The LLM is commodity. The context is defensible.

Prototype fast, rebuild for production. The UX insights survive. The internals get rewritten. Optimize for learning speed first, production reliability second.

Eval-driven development is non-negotiable. When customers make financial decisions based on your AI's output, you need systematic quality assurance across execution, presentation, and style — running continuously.

The bar chart isn't going away. But the two-day wait for an explanation of what it means? That's what Spaces is here to eliminate.

See what Brex can do for you.

Discover how Brex can help you eliminate finance busywork, do more with less, and accelerate your impact.

Get started