
BI Products: Dashboards That Know When They're Wrong

March 2026 · 10 min read

This is Part 4 of the Product Intelligence Series — a 10-part deep dive into treating every data, ML, AI, and BI asset as a living product with health, ownership, and lifecycle management.


Five Hundred Dashboards and Zero Trust

The data team at a Series C SaaS company maintains 512 dashboards. They know this number because someone counted during a quarterly planning exercise. They do not know which dashboards are accurate. They do not know which dashboards are viewed. They do not know which dashboards query stale data.

The CEO opens the quarterly board review dashboard. It shows ARR at $14.2M. The CFO opens a different dashboard — one built by the finance team using a different revenue calculation — and sees $13.8M. The VP of Sales opens a third, built on top of a Salesforce extract that has not synced in 18 days, and sees $15.1M. Three dashboards, three numbers, three definitions of revenue, zero confidence in any of them.

The board meeting derails into a 40-minute argument about which number is right. Nobody can answer because nobody knows which pipelines are fresh, which calculations are current, and which dashboards were built on the correct data.

This is not an analytics problem. It is a trust problem. And it cannot be solved by building better dashboards. It can only be solved by making dashboards aware of their own reliability.


The Static Dashboard Fallacy

A dashboard is a window into data. The problem is that the window has no idea whether the view it shows is current, complete, or correct.

When a pipeline fails at 2 AM, the dashboard does not display "STALE DATA." It displays the last successful values with full confidence. When a metric definition changes — "revenue" now includes professional services, not just subscriptions — existing dashboards keep showing the old calculation. When a source table is deprecated and replaced, dashboards that queried the old table keep working until someone drops it, at which point they break with a cryptic SQL error.

Dashboards are static artifacts in a dynamic data environment. They are photographs of a live scene — accurate at the moment they were created, progressively wrong as reality shifts.

The BI Product model changes this fundamental relationship.


The BI Product: A Dashboard That Knows Its Health

[Figure: BI Product — self-monitoring dashboards, before vs. after. Before: a static revenue dashboard with unknown refresh time, an undefined metric definition, stale data, no owner, no version, and no consumer notifications (500 dashboards, zero owners, zero trust). After: Revenue Dashboard v2.3 as a BI Product owned by Finance Analytics — health 0.91, data freshness 12 min, query performance 2.1 s, metric consistency 100%, usage 142 views/week, a finance-approved definition from a single-source-of-truth semantic model, auto-alerts to 23 consumers on staleness (Slack, email, PagerDuty), a 99.5% uptime SLA, and certification by Data Governance. Caption: a dashboard without health monitoring is a liability, not an asset.]

A BI Product wraps a dashboard with the product kernel: identity, ownership, lifecycle, and — critically — health that includes the health of its upstream dependencies.

A traditional dashboard has one health signal: does the query execute? A BI Product has five:

| Dimension | What It Measures | Impact When Degraded |
| --- | --- | --- |
| Source data health | Composite health of all upstream Data Products the dashboard queries | Dashboard displays a staleness banner: "Revenue data delayed 3 hours" |
| Metric consistency | Whether the dashboard's metric calculations match the semantic model definitions | Dashboard flags divergent metrics: "This revenue calculation differs from the standard definition" |
| Query performance | Dashboard load time at P50 and P95 | Dashboard displays a performance warning; heavy queries are optimized or pre-aggregated |
| Consumer engagement | View count, interaction patterns, active users over trailing 90 days | Dashboard flagged for deprecation review if zero viewers for 90 days |
| Visual accuracy | Whether chart types and aggregations match the underlying data types and cardinalities | Bar chart on a 10,000-category dimension is flagged for visual redesign |

The transformative dimension is source data health. A BI Product declares which Data Products it consumes — explicitly, through its query lineage. When the customer_transactions Data Product drops from 0.98 health to 0.45 because of a freshness SLA breach, every BI Product that queries it is automatically degraded. The dashboard does not show wrong numbers with confidence. It shows a warning: "Transaction data is 6 hours stale. Values shown reflect data as of 06:00 UTC."
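This propagation rule is simple enough to sketch in a few lines. The following is a minimal illustration, not the platform's actual API — names like `DataProduct`, `BIProduct`, and `staleness_banner` are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DataProduct:
    name: str
    health: float            # 0.0 (failing) .. 1.0 (fully healthy)
    staleness_hours: float = 0.0

@dataclass
class BIProduct:
    name: str
    upstream: list           # Data Products declared via query lineage

    def source_data_health(self) -> float:
        # A dashboard is only as healthy as its weakest upstream source.
        return min(dp.health for dp in self.upstream)

    def staleness_banner(self) -> str:
        # Instead of showing stale values with confidence, surface a warning.
        stale = [dp for dp in self.upstream if dp.staleness_hours > 0]
        if not stale:
            return ""
        worst = max(stale, key=lambda dp: dp.staleness_hours)
        return (f"{worst.name} data is {worst.staleness_hours:.0f} hours stale. "
                "Values shown reflect the last successful refresh.")

transactions = DataProduct("customer_transactions", health=0.45, staleness_hours=6)
accounts = DataProduct("accounts", health=0.98)
dashboard = BIProduct("Revenue Dashboard", upstream=[transactions, accounts])

print(dashboard.source_data_health())   # degraded by the weakest upstream: 0.45
print(dashboard.staleness_banner())
```

The key design choice is the `min()`: a composite that averaged upstream health would let one broken source hide behind several healthy ones.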

This is the connection between Part 1 (Data Products) and Part 4 (BI Products). The health propagation chain — from source data through pipelines through dashboards — is what makes the entire system trustworthy. Not because failures do not happen, but because failures are visible the moment they occur.


The Semantic Model: One Source of Truth for Metrics

The three-numbers-for-revenue problem is not a dashboard problem. It is a metric definition problem. Three teams defined "revenue" differently because there was no shared, governed, versioned definition.

A BI Product does not define its own metrics. It binds to a semantic model — a centralized registry of metric definitions that serves as the single source of truth.

A metric in the semantic model specifies:

  • Name and description: "Monthly Recurring Revenue (MRR)" with a business-readable explanation
  • Calculation: The exact SQL or expression that computes the metric, pinned to specific Data Products
  • Dimensions: The dimensions along which the metric can be sliced (region, product line, customer segment)
  • Filters: Default filters that are always applied (e.g., exclude internal test accounts)
  • Owner: The team responsible for maintaining the definition
  • Version: A semantic version number that increments when the calculation changes

When a BI Product binds to a metric, it uses the semantic model's calculation, not its own. If three dashboards display "MRR," all three display the same number because all three are computing from the same definition. If the definition changes — "MRR now includes professional services" — the change is versioned, and every consuming BI Product is notified.
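A minimal sketch of such a registry, assuming a `(name, version)`-keyed store — the `Metric` fields mirror the list above, but the identifiers and the example calculation are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str
    description: str
    calculation: str          # the exact SQL expression, pinned to Data Products
    dimensions: tuple         # dimensions the metric can be sliced by
    default_filters: tuple    # always applied, e.g. exclude internal test accounts
    owner: str
    version: str              # bumps whenever the calculation changes

semantic_model = {}           # keyed by (name, version)

def register(metric):
    semantic_model[(metric.name, metric.version)] = metric

register(Metric(
    name="monthly_recurring_revenue",
    description="Monthly Recurring Revenue (MRR), professional services included",
    calculation="SUM(amount)",
    dimensions=("region", "customer_segment"),
    default_filters=("is_internal_account = FALSE",),
    owner="finance-analytics",
    version="3.0",
))

# Two dashboards binding to the same (name, version) compute the same number,
# because both read the calculation from the registry instead of defining their own.
mrr = semantic_model[("monthly_recurring_revenue", "3.0")]
print(mrr.calculation)
```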

Metric Versioning: No More Silent Changes

Metric versioning is the guardrail against the scenario where a finance team changes the revenue calculation and nobody downstream knows.

When a metric version changes:

  1. All consuming BI Products receive a notification: "MRR definition updated from v2.1 to v3.0. Change: professional services revenue included. Previous values will differ."
  2. BI Products can pin to a specific version or float to latest. Pinned products continue using the old definition until the owner explicitly upgrades.
  3. A comparison view shows the difference: "MRR v2.1: $14.2M. MRR v3.0: $15.8M. Delta: +$1.6M from professional services inclusion."
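The pin-or-float resolution in step 2 is the mechanical core of this. A sketch, assuming the registry is keyed by `(name, version)` with dotted version strings (`resolve_metric` is a hypothetical name):

```python
def resolve_metric(registry, name, pin=None):
    """Return the pinned version of a metric if requested, else the latest."""
    if pin is not None:
        if (name, pin) not in registry:
            raise KeyError(f"{name} has no version {pin}")
        return registry[(name, pin)]
    versions = [v for (n, v) in registry if n == name]
    if not versions:
        raise KeyError(f"unknown metric: {name}")
    # Compare versions numerically, so "10.0" sorts after "9.0".
    latest = max(versions, key=lambda v: tuple(int(p) for p in v.split(".")))
    return registry[(name, latest)]

registry = {
    ("mrr", "2.1"): "subscriptions only",
    ("mrr", "3.0"): "subscriptions + professional services",
}

print(resolve_metric(registry, "mrr"))             # floats to the latest, v3.0
print(resolve_metric(registry, "mrr", pin="2.1"))  # stays on the old definition
```

A pinned dashboard keeps producing comparable numbers across a definition change; a floating one tracks the current business definition. Both are legitimate, which is why the choice belongs to the BI Product's owner.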

The CEO's board review dashboard never shows a number without provenance. The metric definition, version, last update time, and upstream data health are all available in a single click.


NLP Dashboard Creation: From Question to Chart

Building a dashboard should be as easy as asking a question. The BI Product model enables this through natural language composition, powered by the semantic model.

"Show me MRR by region and customer segment, trending monthly for the past 12 months."

The platform processes this request in four steps:

1. Semantic resolution. "MRR" is resolved to the semantic model's monthly_recurring_revenue metric, version 3.0. "Region" and "customer segment" are resolved to dimensions defined in the metric. "Monthly" and "past 12 months" are resolved to a time dimension with monthly granularity and a 12-month lookback.

2. Template matching. The combination of one metric, two dimensions, and a time series matches the "trend comparison" dashboard template. This template specifies a line chart for the time series with a grouped bar chart for the current-period breakdown.

3. Widget binding. Each chart widget is bound to the semantic model query. The SQL is generated from the metric definition, dimension joins, and time filters. The query runs against the upstream Data Products, inheriting their freshness SLAs and health scores.

4. Preview and publish. The user sees the rendered dashboard in preview mode. They can adjust chart types, add filters, modify the time range, or add additional metrics. When satisfied, the dashboard is published as a BI Product — with health monitoring, consumer tracking, and lifecycle management from the first moment.
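The four-step flow can be sketched end to end. This is a deliberately naive stub — the semantic resolution here is keyword matching, where a real system would use an LLM or a grammar over the semantic model, and every identifier below is hypothetical:

```python
semantic_model = {
    # alias -> governed definition (illustrative entry)
    "mrr": {
        "metric": "monthly_recurring_revenue",
        "version": "3.0",
        "sql": "SUM(amount)",
        "dimensions": ("region", "customer_segment"),
    },
}

templates = {
    # (metric count, dimension count, time series?) -> dashboard template
    (1, 2, True): "trend_comparison",
    (1, 1, False): "single_breakdown",
}

def build_dashboard(question):
    q = question.lower()
    # 1. Semantic resolution: match aliases and dimensions known to the model.
    alias = next(a for a in semantic_model if a in q)
    metric = semantic_model[alias]
    dims = [d for d in metric["dimensions"] if d.replace("_", " ") in q]
    has_time = any(w in q for w in ("monthly", "weekly", "trending", "over time"))
    # 2. Template matching on the shape of the request.
    template = templates[(1, len(dims), has_time)]
    # 3. Widget binding: SQL comes from the governed definition, not ad hoc.
    sql = (f"SELECT {', '.join(dims)}, {metric['sql']} AS {metric['metric']} "
           f"FROM <data products> GROUP BY {', '.join(dims)}")
    # 4. Preview: a draft BI Product spec, published only after user review.
    return {"template": template, "metric_version": metric["version"],
            "sql": sql, "status": "PREVIEW"}

spec = build_dashboard(
    "Show me MRR by region and customer segment, trending monthly for the past 12 months")
print(spec["template"])
```

The point of the sketch is the division of labor: natural language only selects from governed definitions; it never invents a calculation.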

A dashboard that would take a BI analyst 2-3 days to build — finding the right tables, writing the SQL, choosing the chart types, configuring the filters — takes 30 seconds to describe and 5 minutes to refine.


Dashboard Retirement: The End of the Graveyard

The 512-dashboard problem is not just an accuracy problem. It is a resource problem. Every dashboard consumes compute when it refreshes. Every dashboard consumes attention when it appears in search results. Every unmaintained dashboard is a potential source of wrong decisions.

BI Products solve this through lifecycle enforcement:

90-day engagement check. If a BI Product has zero viewers for 90 consecutive days, it is automatically flagged for deprecation review. The owner receives a notification: "Executive Revenue Summary has had no viewers in 90 days. Review for deprecation or provide justification."

Deprecation workflow. If the owner confirms deprecation, the BI Product moves to DEPRECATED status. Any remaining subscribers (teams who have it bookmarked or embedded) receive a notice with a 30-day migration window. During this window, the dashboard displays a deprecation banner.

Retirement. After the migration window closes, the BI Product moves to RETIRED status. The dashboard is removed from active catalogs. Its definition, query history, and metadata are archived for lineage purposes. Compute resources are reclaimed.
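The lifecycle above is a small state machine. A sketch with the thresholds from the text — the function name and statuses are illustrative, not the platform's schema:

```python
from datetime import date

ACTIVE, FLAGGED, DEPRECATED, RETIRED = "ACTIVE", "FLAGGED", "DEPRECATED", "RETIRED"

def lifecycle_step(status, last_viewed, deprecated_on, today,
                   engagement_days=90, migration_days=30):
    """Advance one BI Product through the retirement lifecycle."""
    if status == ACTIVE and (today - last_viewed).days >= engagement_days:
        return FLAGGED        # owner notified: deprecate or justify
    if status == DEPRECATED and (today - deprecated_on).days >= migration_days:
        return RETIRED        # removed from catalogs, metadata archived for lineage
    return status             # the FLAGGED -> DEPRECATED move is a human decision

today = date(2026, 3, 1)
# No viewers for 90+ days: flagged for deprecation review.
print(lifecycle_step(ACTIVE, date(2025, 11, 1), None, today))      # FLAGGED
# Migration window elapsed: retired, compute reclaimed.
print(lifecycle_step(DEPRECATED, None, date(2026, 1, 15), today))  # RETIRED
```

Note that the only automatic transitions are the time-based ones; moving from FLAGGED to DEPRECATED stays with the owner, which is what makes the 90-day check a review trigger rather than a deletion policy.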

In the first quarter after implementing this lifecycle, the typical enterprise retires 30-40% of its dashboards. Not because they were bad — because nobody was using them. The compute savings alone often cover the cost of the platform.


Template Gallery: Start with Industry Patterns

Not every dashboard needs to be built from scratch. The BI Product model includes a template gallery — industry-specific dashboard templates that encode domain expertise.

SaaS Unit Economics. MRR waterfall, net revenue retention, LTV:CAC ratio, expansion vs. contraction revenue, cohort analysis. Pre-configured with the semantic model metrics that SaaS companies typically need. A new SaaS customer can deploy a complete unit economics dashboard suite in under an hour.

Healthcare KPIs. Patient volume trends, average length of stay, readmission rates, department utilization, clinical outcome metrics. Pre-configured with HIPAA-compliant data access patterns and PHI redaction rules.

Financial P&L. Revenue breakdown by product line, cost of goods sold, operating expenses, EBITDA margin, cash flow waterfall. Pre-configured with standard accounting dimensions and period-over-period comparison.

Retail Operations. Same-store sales, inventory turnover, basket size trends, customer acquisition cost by channel, seasonal demand forecasting. Pre-configured with common retail calendar adjustments (fiscal year, holiday periods).

Templates are not static. They are living BI Products maintained by verified publishers in the marketplace (see Part 3). When a template publisher updates a metric definition or adds a new chart, installing tenants receive the update through the standard versioning mechanism.


The Honest Dashboard

Return to the board meeting. In a world where dashboards are BI Products:

The CEO opens the quarterly review dashboard. It displays MRR at $15.8M (semantic model v3.0, including professional services). Below the number, a health indicator shows green: all upstream Data Products are healthy, all pipelines are fresh, the metric definition is version 3.0 (latest). The CFO's dashboard shows the same number because it uses the same metric definition. The VP of Sales's dashboard shows a yellow banner: "Salesforce pipeline data is 18 days stale. Revenue shown reflects data as of February 26."

The board meeting discusses strategy, not data quality. The 40-minute argument never happens. Not because the data is perfect — the Salesforce sync is still broken — but because the dashboard is honest about what it knows and what it does not know.

That is the BI Product promise. Not perfect dashboards. Honest dashboards. Dashboards that know their own health, declare their sources, and tell you when something is wrong before you make a decision based on wrong numbers.


What Comes Next

We have now covered all four product types: Data Products (Part 1), ML Products (Part 2), AI Products (Part 3), and BI Products (Part 4). Each is valuable on its own. But the real power emerges when you connect them. In Part 5, we explore The Contextual Graph — the nervous system that connects every product to every other product, enables impact analysis in seconds instead of hours, and propagates health signals from source to consumer in real time. This is the missing layer that turns four independent product types into a single, self-aware platform.


This is Part 4 of the Product Intelligence Series. Previous: AI Products: Composing Agents Like Microservices. Next: The Contextual Graph: Your Data Platform's Nervous System.