MATIH Platform is in active MVP development. Documentation reflects current implementation status.

Semantic Feature Flags

The `SemanticFeatureFlagService` provides per-tenant feature flag resolution for the phased rollout of Context Graph capabilities. It supports canary deployments (ramping from 10% to 50% to general availability) via consistent hashing, as well as tenant-level overrides and environment variable overrides.


Overview

Feature flags enable safe, incremental rollout of new Context Graph features without code changes. Each feature can be independently enabled for specific tenants, rolled out to a percentage of traffic via canary mode, or globally enabled/disabled.

Source: `data-plane/ai-service/src/context_graph/services/semantic_feature_flags.py`


Features

| Feature | Description |
| --- | --- |
| `semantic_sql` | Semantic SQL generation |
| `shacl_validation` | SHACL schema validation |
| `ontology_watcher` | Automatic ontology change detection |
| `concept_extraction` | Entity concept extraction from queries |
| `context_graph_thinking` | Agent thinking trace capture |
| `context_graph_kafka` | Kafka streaming for context events |
| `context_graph_embeddings` | Embedding generation for traces |
| `context_graph_rbac` | Fine-grained RBAC on API endpoints |

Rollout Modes

| Mode | Description |
| --- | --- |
| `DISABLED` | Feature is off for all tenants |
| `CANARY` | Feature is enabled for a hash-based percentage of tenants |
| `PARTIAL` | Feature is enabled for specific tenants only |
| `FULL` | Feature is on for all tenants |
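The four modes above can be modeled as a simple enum. This is an illustrative sketch, not the service's actual definition; the real class name and values may differ.

```python
from enum import Enum


class RolloutMode(str, Enum):
    """Rollout modes from the table above (names assumed, not verified)."""
    DISABLED = "disabled"  # off for all tenants
    CANARY = "canary"      # hash-based percentage of tenants
    PARTIAL = "partial"    # explicit tenant allowlist
    FULL = "full"          # on for all tenants
```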

Resolution Order

Feature flags are resolved in priority order:

  1. Tenant-level override -- Explicit enable/disable in the tenant configuration
  2. Environment variable override -- e.g., SEMANTIC_SQL_ENABLED=true
  3. Canary rollout logic -- Consistent hash-based percentage check
  4. Default configuration -- Built-in default state

Canary Rollout

Canary mode uses consistent hashing on the tenant ID to determine inclusion:

```
hash = SHA-256(tenant_id + feature_name + salt)
bucket = hash mod 100
enabled = bucket < canary_percentage
```

This ensures that the same tenant consistently gets the same result for a given feature and percentage, avoiding flapping.
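A minimal Python rendering of this hashing scheme, assuming simple string concatenation of the inputs before hashing (the actual service may combine them differently):

```python
import hashlib


def canary_bucket(tenant_id: str, feature_name: str, salt: str) -> int:
    """Map a (tenant, feature, salt) triple to a stable bucket in [0, 100)."""
    digest = hashlib.sha256(
        f"{tenant_id}{feature_name}{salt}".encode("utf-8")
    ).hexdigest()
    return int(digest, 16) % 100


def in_canary(tenant_id: str, feature_name: str,
              salt: str, percentage: int) -> bool:
    """A tenant is in the canary when its bucket falls below the percentage."""
    return canary_bucket(tenant_id, feature_name, salt) < percentage
```

Because SHA-256 is deterministic, a tenant's bucket never changes for a given feature and salt, and raising the percentage (say 10 → 50) only adds tenants to the canary; it never removes any.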


Resolution Response

```python
resolution = flag_service.resolve(
    feature=SemanticFeature.CONTEXT_GRAPH_THINKING,
    tenant_id="acme",
)
# FeatureFlagResolution(
#     feature="context_graph_thinking",
#     enabled=True,
#     mode="canary",
#     source="canary",
#     tenant_id="acme",
# )
```

Configuration

```python
config = SemanticFeatureFlagConfig(
    default_semantic_sql_enabled=False,
    default_shacl_validation_enabled=True,
    default_ontology_watcher_enabled=True,
    canary_percentage=10,
    canary_salt="matih-canary-2025",
)
```

Caching

Flag resolutions are cached with a configurable TTL to avoid repeated computation:

| Parameter | Default | Description |
| --- | --- | --- |
| Cache TTL | 60 seconds | Time-to-live for cached resolutions |
| Cache size | 1000 entries | Maximum number of cached resolutions |
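As an illustration of the caching behavior described above, here is a minimal TTL-plus-size-bounded cache with the documented defaults (60 s TTL, 1000 entries). The class is a sketch, not the service's actual cache implementation.

```python
import time
from collections import OrderedDict


class ResolutionCache:
    """Bounded cache with per-entry TTL; oldest entries evicted first."""

    def __init__(self, ttl: float = 60.0, max_size: int = 1000):
        self.ttl = ttl
        self.max_size = max_size
        self._data = OrderedDict()  # key -> (value, expiry timestamp)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires = item
        if time.monotonic() > expires:
            del self._data[key]  # expired: force recomputation on next resolve
            return None
        return value

    def set(self, key, value):
        self._data[key] = (value, time.monotonic() + self.ttl)
        self._data.move_to_end(key)
        if len(self._data) > self.max_size:
            self._data.popitem(last=False)  # evict the oldest entry
```

A short TTL keeps resolutions cheap while still letting tenant overrides and canary percentage changes take effect within about a minute.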

Monitoring

Each flag resolution is logged for observability:

```json
{
  "event": "feature_flag_resolved",
  "feature": "context_graph_thinking",
  "tenant_id": "acme",
  "enabled": true,
  "mode": "canary",
  "source": "canary",
  "canary_bucket": 23,
  "canary_threshold": 50
}
```