Template Library Reference
The MATIH Template Library provides pre-built, parameterized starting points for common platform tasks. Templates accelerate onboarding, enforce best practices, and ensure consistency across tenant workspaces. This section catalogs every template category with detailed descriptions, parameters, and usage instructions.
Overview
Templates are stored in the templates/ directory of the platform repository, organized by discipline:
```
templates/
  bi/         # Business Intelligence templates
  data/       # Data Engineering templates
  ml/         # Machine Learning templates
  agentic/    # AI Agent templates
  ontology/   # Ontology and knowledge graph templates
  notebooks/  # Jupyter Notebook templates
  spark/      # Apache Spark job templates
```

Templates can be accessed through:
- Marketplace UI: Browse and install templates from the Config Service marketplace
- CLI: `matih template apply <template-name> --params key=value`
- API: `POST /api/v1/marketplace/templates/{id}/apply`
- Workbench: Template gallery in each workbench application
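For programmatic use, the API route can be driven from any HTTP client. A minimal Python sketch that assembles the apply request body (the helper is illustrative, not part of a platform SDK):

```python
# Assemble the JSON body for POST /api/v1/marketplace/templates/{id}/apply.
# The payload shape mirrors the usage example later in this section.
def build_apply_request(template_id: str, name: str, params: dict) -> dict:
    if not template_id or not name:
        raise ValueError("template_id and name are required")
    return {
        "path": f"/api/v1/marketplace/templates/{template_id}/apply",
        "body": {"name": name, "params": dict(params)},
    }

req = build_apply_request(
    "executive-summary",
    "Q4 2025 Executive Summary",
    {"dataSource": "sales_warehouse", "dateColumn": "order_date"},
)
```

The returned dict can then be passed to whatever HTTP client your tooling uses.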
BI Templates (templates/bi/)
Business Intelligence templates provide pre-configured dashboard layouts, widget configurations, and data visualization patterns.
Dashboard Templates
| Template | Description | Widgets | Data Requirements |
|---|---|---|---|
executive-summary | C-suite overview dashboard with KPI cards, trend lines, and regional breakdowns | 8 widgets | Revenue, cost, user counts, time series data |
sales-pipeline | Sales funnel visualization with conversion rates and deal stages | 6 widgets | CRM data with pipeline stages and amounts |
customer-analytics | Customer segmentation, cohort analysis, and retention metrics | 10 widgets | Customer data with acquisition dates and behavior |
financial-overview | P&L statement visualization, budget vs actual, cash flow | 7 widgets | Financial data with accounts and periods |
product-metrics | Product usage analytics, feature adoption, and engagement | 9 widgets | Product telemetry and user activity data |
operational-kpis | Operations dashboard with SLAs, throughput, and error rates | 8 widgets | Operational metrics and incident data |
marketing-performance | Campaign performance, channel attribution, and ROI | 7 widgets | Marketing campaign data with spend and conversions |
hr-analytics | Headcount, attrition, compensation, and diversity metrics | 8 widgets | HR data with employee demographics and compensation |
Template Parameters (BI)
Each BI template accepts the following standard parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
name | string | Yes | Dashboard name |
dataSource | string | Yes | Connector or data source to bind to |
dateColumn | string | Yes | Primary date/time column for time series |
dateRange | string | No | Default date range (last_7_days, last_30_days, last_quarter, ytd) |
currency | string | No | Currency code for financial formatting (default: USD) |
refreshInterval | integer | No | Auto-refresh interval in seconds (default: 300) |
theme | string | No | Visual theme (light, dark, corporate) |
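Client-side, these defaults can be resolved before submission. A hedged sketch of that merge, using only the defaults the table actually states (currency and refreshInterval):

```python
# Required parameters and documented defaults from the BI parameter table.
REQUIRED = {"name", "dataSource", "dateColumn"}
DEFAULTS = {"currency": "USD", "refreshInterval": 300}

def resolve_bi_params(user_params: dict) -> dict:
    """Reject missing required parameters, then fold in documented defaults."""
    missing = REQUIRED - user_params.keys()
    if missing:
        raise ValueError(f"missing required parameters: {sorted(missing)}")
    return {**DEFAULTS, **user_params}

resolved = resolve_bi_params(
    {"name": "Sales", "dataSource": "sales_warehouse", "dateColumn": "order_date"}
)
```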
Usage Example
```
POST /api/v1/marketplace/templates/executive-summary/apply
Content-Type: application/json

{
  "name": "Q4 2025 Executive Summary",
  "params": {
    "dataSource": "sales_warehouse",
    "dateColumn": "order_date",
    "dateRange": "last_quarter",
    "currency": "USD",
    "theme": "corporate",
    "revenueTable": "sales.orders",
    "revenueColumn": "total_amount",
    "regionColumn": "sales_region",
    "productColumn": "product_category"
  }
}
```

Widget Type Reference
| Widget Type | Description | Configuration Keys |
|---|---|---|
kpi-card | Single numeric KPI with trend indicator | metric, comparison, format, thresholds |
line-chart | Time series line chart | xAxis, yAxis, series, groupBy |
bar-chart | Vertical or horizontal bar chart | xAxis, yAxis, orientation, stacked |
pie-chart | Pie or donut chart | dimension, measure, donut, showLabels |
area-chart | Stacked or regular area chart | xAxis, yAxis, series, stacked |
scatter-plot | Scatter plot with optional regression line | xAxis, yAxis, colorBy, sizeBy |
data-table | Tabular data with sorting and pagination | columns, sortBy, pageSize, conditionalFormatting |
pivot-table | Pivot table with drill-down | rows, columns, values, aggregation |
funnel-chart | Funnel/conversion chart | stages, measure, showConversion |
map | Geographic map with data overlay | geoColumn, measure, mapType, colorScale |
heatmap | Two-dimensional heatmap | xAxis, yAxis, value, colorScale |
gauge | Gauge/speedometer chart | value, min, max, thresholds |
text-block | Rich text annotation | content, format |
filter-control | Interactive filter widget | column, filterType, defaultValue |
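When hand-editing a generated dashboard, it helps to check a widget's config against the keys documented above. A small sketch covering three widget types (the mapping is transcribed from the table, not an exhaustive schema):

```python
# Allowed configuration keys per widget type, per the reference table above.
WIDGET_KEYS = {
    "kpi-card":   {"metric", "comparison", "format", "thresholds"},
    "line-chart": {"xAxis", "yAxis", "series", "groupBy"},
    "gauge":      {"value", "min", "max", "thresholds"},
}

def unknown_keys(widget_type: str, config: dict) -> set:
    """Return any config keys not documented for this widget type."""
    return set(config) - WIDGET_KEYS[widget_type]
```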
Data Engineering Templates (templates/data/)
Data engineering templates provide pipeline definitions, transformation patterns, and data quality configurations.
Pipeline Templates
| Template | Description | Technology | Schedule |
|---|---|---|---|
elt-basic | Basic ELT pipeline: extract from source, load to staging, transform to target | Temporal + dbt | Hourly |
elt-incremental | Incremental ELT with change tracking and merge | Temporal + dbt | Every 15 minutes |
cdc-pipeline | CDC pipeline using Flink to stream database changes to Iceberg | Flink SQL | Continuous |
data-warehouse-load | Full data warehouse load with dimension/fact table patterns | Temporal + Spark | Daily |
data-lake-ingestion | Ingest files from S3/Azure Blob/GCS into Iceberg tables | Spark + Airflow | On arrival |
api-to-lake | Extract data from REST APIs and load into the lakehouse | Python + Temporal | Configurable |
streaming-aggregation | Real-time aggregation of event streams into materialized views | Flink SQL | Continuous |
data-compaction | Scheduled Iceberg table compaction and snapshot management | Spark | Daily |
Pipeline Template Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
name | string | Yes | Pipeline name |
sourceConnector | string | Yes | Source data connector name |
targetCatalog | string | Yes | Target Iceberg catalog |
targetSchema | string | Yes | Target schema/namespace |
schedule | string | No | Cron expression for scheduling |
notifyOnFailure | boolean | No | Send notification on pipeline failure |
retryCount | integer | No | Number of retry attempts on failure (default: 3) |
timeout | integer | No | Pipeline timeout in minutes (default: 60) |
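The retryCount parameter follows a conventional retry pattern. One plausible interpretation — retryCount extra attempts after the first, with exponential backoff — sketched in Python (the platform's exact semantics may differ):

```python
import time

def run_with_retries(step, retry_count: int = 3, backoff_s: float = 0.0):
    """Run a pipeline step, retrying up to retry_count additional times
    on failure. retry_count=3 mirrors the documented default."""
    last_exc = None
    for attempt in range(1 + retry_count):
        try:
            return step()
        except Exception as exc:
            last_exc = exc
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    raise last_exc
```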
dbt Project Templates
| Template | Description | Models |
|---|---|---|
dbt-starter | Basic dbt project with staging, intermediate, and mart layers | 5 example models |
dbt-ecommerce | E-commerce analytics dbt project | 15 models (customers, orders, products, sessions) |
dbt-saas-metrics | SaaS metrics dbt project (MRR, churn, LTV, cohorts) | 12 models |
dbt-financial | Financial reporting dbt project (P&L, balance sheet, cash flow) | 10 models |
Data Quality Templates
| Template | Description | Rules |
|---|---|---|
quality-basic | Basic data quality checks (nulls, uniqueness, range) | 5 rule types |
quality-comprehensive | Full quality suite with statistical profiling and anomaly detection | 15 rule types |
quality-freshness | Data freshness and SLA monitoring | Freshness checks + SLA definitions |
quality-schema-drift | Schema drift detection and alerting | Schema comparison rules |
ML Templates (templates/ml/)
Machine learning templates provide experiment configurations, training scripts, model serving definitions, and monitoring dashboards.
Experiment Templates
| Template | Description | Framework | Task |
|---|---|---|---|
classification-tabular | Binary/multiclass classification on tabular data | scikit-learn, XGBoost | Classification |
regression-tabular | Regression on tabular data | scikit-learn, LightGBM | Regression |
timeseries-forecasting | Time series forecasting with multiple models | Prophet, ARIMA, LSTM | Forecasting |
nlp-text-classification | Text classification with transformer models | Hugging Face, PyTorch | NLP |
image-classification | Image classification with CNN/ViT | PyTorch, torchvision | Computer Vision |
recommendation-engine | Collaborative filtering recommendation system | PyTorch, Surprise | RecSys |
anomaly-detection | Unsupervised anomaly detection | scikit-learn, PyOD | Anomaly Detection |
clustering-analysis | Customer/data segmentation via clustering | scikit-learn, HDBSCAN | Clustering |
Training Job Templates
| Template | Description | Resources |
|---|---|---|
training-single-gpu | Single GPU training job | 1 GPU, 4 CPU, 16Gi memory |
training-multi-gpu | Multi-GPU distributed training | 4 GPUs, 16 CPU, 64Gi memory |
training-ray-distributed | Ray-based distributed training | Ray cluster (1 head + 4 workers) |
training-cpu-only | CPU-only training for traditional ML | 8 CPU, 32Gi memory |
hyperparameter-sweep | Hyperparameter tuning with Ray Tune | Configurable (2-16 workers) |
Model Serving Templates
| Template | Description | Serving Technology |
|---|---|---|
serve-fastapi | Simple model serving with FastAPI | FastAPI + Uvicorn |
serve-triton | High-performance serving with NVIDIA Triton | Triton Inference Server |
serve-ray | Scalable serving with Ray Serve | Ray Serve |
serve-batch | Batch inference pipeline | Spark + Temporal |
serve-ab-test | A/B testing deployment with traffic splitting | Ray Serve + Istio |
Model Monitoring Templates
| Template | Description | Metrics |
|---|---|---|
monitor-basic | Basic model performance monitoring | Accuracy, latency, throughput |
monitor-drift | Data and concept drift detection | PSI, KS test, JS divergence |
monitor-fairness | Fairness and bias monitoring | Demographic parity, equalized odds |
monitor-comprehensive | Full monitoring suite | All of the above + custom metrics |
ML Template Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
experimentName | string | Yes | MLflow experiment name |
modelName | string | Yes | Model registry name |
datasetPath | string | Yes | Path to training dataset (S3/ADLS/GCS) |
targetColumn | string | Yes | Target variable column name |
featureColumns | list | No | Feature column names (default: all non-target) |
testSize | float | No | Test set proportion (default: 0.2) |
randomSeed | integer | No | Random seed for reproducibility (default: 42) |
gpuCount | integer | No | Number of GPUs (default: 0) |
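The testSize and randomSeed parameters imply a conventional deterministic split. A sketch of those semantics, not the platform's exact implementation:

```python
import random

def train_test_split_indices(n_rows: int, test_size: float = 0.2, seed: int = 42):
    """Deterministically shuffle row indices and carve off a test fraction,
    mirroring the documented defaults (testSize=0.2, randomSeed=42)."""
    idx = list(range(n_rows))
    random.Random(seed).shuffle(idx)
    n_test = int(n_rows * test_size)
    return idx[n_test:], idx[:n_test]

train, test = train_test_split_indices(100)
```

The same seed always yields the same partition, which is what makes experiment runs reproducible.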
Agentic Templates (templates/agentic/)
Agentic templates provide pre-configured AI agent workflows and conversation patterns.
Agent Templates
| Template | Description | Agents |
|---|---|---|
chat-basic | Basic conversational Q&A agent | Intent classifier, RAG retriever, response generator |
text-to-sql-standard | Standard text-to-SQL pipeline | Intent classifier, schema retriever, SQL generator, validator, executor |
text-to-sql-advanced | Advanced text-to-SQL with disambiguation and visualization | All standard + disambiguator, visualizer, explainer |
data-analyst | Autonomous data analyst agent | Full pipeline + automated follow-up questions |
report-generator | Automated report generation from natural language | Full pipeline + report formatter, PDF generator |
custom-domain | Template for building domain-specific agents | Base agent scaffold with custom tool integration |
Agent Template Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
name | string | Yes | Agent workflow name |
defaultDataSource | string | Yes | Default data source for queries |
defaultDialect | string | No | SQL dialect (default: trino) |
llmProvider | string | No | LLM provider override (default: tenant setting) |
llmModel | string | No | LLM model override |
enableGuardrails | boolean | No | Enable safety guardrails (default: true) |
enableStreaming | boolean | No | Enable response streaming (default: true) |
maxRetries | integer | No | Max SQL generation retries (default: 3) |
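The two LLM override parameters fall back to the tenant's configured provider and model. A sketch of that resolution, with hypothetical tenant defaults:

```python
# Hypothetical tenant-level defaults; actual values come from tenant settings.
TENANT_SETTINGS = {"llmProvider": "openai", "llmModel": "gpt-4o"}

def resolve_llm(params: dict) -> dict:
    """Per the table above, llmProvider/llmModel override the tenant setting
    only when supplied; otherwise the tenant value applies."""
    return {
        "llmProvider": params.get("llmProvider") or TENANT_SETTINGS["llmProvider"],
        "llmModel": params.get("llmModel") or TENANT_SETTINGS["llmModel"],
    }
```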
Ontology Templates (templates/ontology/)
Ontology templates provide domain-specific knowledge graph schemas.
| Template | Description | Entities | Relationships |
|---|---|---|---|
enterprise-data | General enterprise data ontology | Database, Schema, Table, Column, User, Application | contains, owns, uses, produces |
ecommerce | E-commerce domain ontology | Customer, Product, Order, Category, Review, Seller | purchases, contains, reviews, belongs_to |
financial-services | Financial services ontology | Account, Transaction, Customer, Portfolio, Risk | holds, transfers, owns, manages |
healthcare | Healthcare domain ontology | Patient, Provider, Encounter, Diagnosis, Medication | treats, prescribes, diagnoses, refers |
saas-product | SaaS product analytics ontology | User, Account, Feature, Session, Event, Subscription | uses, triggers, subscribes, belongs_to |
iot-sensor | IoT sensor data ontology | Device, Sensor, Reading, Location, Alert | measures, located_at, triggers, contains |
Ontology Template Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
name | string | Yes | Ontology name |
namespace | string | Yes | Ontology namespace URI |
format | string | No | Output format (owl, rdf, json-ld, default: owl) |
includeShacl | boolean | No | Include SHACL validation shapes (default: true) |
mapToSchema | string | No | Database schema to auto-map entity-to-table |
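The namespace parameter becomes the base IRI for generated entities. A sketch of a common joining convention — hash separator unless the namespace already ends in `#` or `/` — noting that the platform's actual serialization may differ:

```python
def entity_iri(namespace: str, entity: str) -> str:
    """Join an ontology namespace and an entity name into an IRI."""
    sep = "" if namespace.endswith(("#", "/")) else "#"
    return f"{namespace}{sep}{entity}"

iri = entity_iri("https://example.com/ontology/ecommerce", "Customer")
```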
Notebook Templates (templates/notebooks/)
Jupyter Notebook templates provide interactive analysis starting points.
| Template | Description | Language | Libraries |
|---|---|---|---|
eda-basic | Basic exploratory data analysis | Python | pandas, matplotlib, seaborn |
eda-advanced | Advanced EDA with profiling and correlation | Python | pandas-profiling, plotly, scipy |
sql-analysis | SQL-based analysis with Trino | Python + SQL | trino-python-client, pandas |
ml-experiment | ML experiment notebook with MLflow tracking | Python | scikit-learn, mlflow, matplotlib |
deep-learning | Deep learning training notebook | Python | PyTorch, torchvision, tensorboard |
nlp-analysis | NLP text analysis and visualization | Python | transformers, spacy, wordcloud |
geospatial | Geospatial data analysis | Python | geopandas, folium, shapely |
timeseries | Time series analysis and forecasting | Python | statsmodels, prophet, plotly |
spark-analysis | Distributed analysis with Spark | PySpark | pyspark, spark-connect |
data-quality-report | Data quality assessment report | Python | great-expectations, pandas |
Notebook Template Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
name | string | Yes | Notebook file name |
dataSource | string | Yes | Data source connection name |
tableName | string | No | Default table to analyze |
outputFormat | string | No | Output format for reports (html, pdf) |
kernelSpec | string | No | Jupyter kernel (python3, pyspark) |
Spark Job Templates (templates/spark/)
Apache Spark job templates provide production-ready Spark application scaffolds.
| Template | Description | Language | Mode |
|---|---|---|---|
batch-etl | Batch ETL job: extract, transform, load to Iceberg | Scala/Python | Batch |
streaming-etl | Structured Streaming ETL from Kafka to Iceberg | Scala/Python | Streaming |
data-compaction | Iceberg table maintenance (compaction, snapshot expiry) | Scala | Batch |
feature-engineering | Feature computation and materialization to Feast | Python | Batch |
data-validation | Large-scale data validation with custom rules | Python | Batch |
graph-analytics | GraphX-based graph analytics | Scala | Batch |
delta-migration | Migrate data from Delta Lake to Iceberg | Scala | Batch |
Spark Template Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
name | string | Yes | Spark application name |
mainClass | string | Yes (Scala) | Main class fully qualified name |
mainFile | string | Yes (Python) | Path to main Python file |
driverCores | integer | No | Driver CPU cores (default: 2) |
driverMemory | string | No | Driver memory (default: 4g) |
executorCores | integer | No | Executor CPU cores (default: 4) |
executorMemory | string | No | Executor memory (default: 8g) |
executorInstances | integer | No | Number of executors (default: 2) |
inputPath | string | Yes | Input data path |
outputPath | string | Yes | Output data path |
icebergCatalog | string | No | Iceberg catalog name (default: iceberg) |
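Folding the defaults above into a submitted spec can be sketched as follows; the output shape is illustrative, not the Spark Operator's actual CRD:

```python
# Documented defaults from the Spark template parameter table.
SPARK_DEFAULTS = {
    "driverCores": 2, "driverMemory": "4g",
    "executorCores": 4, "executorMemory": "8g",
    "executorInstances": 2, "icebergCatalog": "iceberg",
}

def spark_app_spec(params: dict) -> dict:
    """Merge user params over defaults and emit an illustrative app spec."""
    p = {**SPARK_DEFAULTS, **params}
    for key in ("name", "inputPath", "outputPath"):
        if key not in p:
            raise ValueError(f"{key} is required")
    return {
        "metadata": {"name": p["name"]},
        "driver": {"cores": p["driverCores"], "memory": p["driverMemory"]},
        "executor": {"cores": p["executorCores"], "memory": p["executorMemory"],
                     "instances": p["executorInstances"]},
    }

spec = spark_app_spec({"name": "batch-etl-orders",
                       "inputPath": "s3://lake/raw/orders",
                       "outputPath": "s3://lake/curated/orders"})
```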
Template Versioning
Templates follow semantic versioning. When a template is updated, existing instances are not automatically modified. Users can check for template updates via:
```
GET /api/v1/marketplace/templates/{id}/versions
```

Updating an existing instance to a new template version is done via:

```
POST /api/v1/marketplace/templates/{id}/upgrade

{
  "instanceId": "inst-abc123",
  "targetVersion": "2.0.0",
  "preserveCustomizations": true
}
```

Custom Templates
Teams can create and publish custom templates to the Marketplace:
- Create a template definition in the appropriate `templates/` subdirectory
- Include a `template-manifest.yaml` with metadata, a parameters schema, and documentation
- Submit via `POST /api/v1/marketplace/templates`
- Templates undergo review before publication
```yaml
# template-manifest.yaml
name: custom-sales-dashboard
displayName: Sales Dashboard (Custom)
version: 1.0.0
category: bi
author: Sales Analytics Team
description: Custom sales dashboard with regional breakdown and YoY comparison
parameters:
  - name: dataSource
    type: string
    required: true
    description: Sales data connector
  - name: revenueColumn
    type: string
    required: true
    default: total_amount
  - name: dateColumn
    type: string
    required: true
    default: order_date
tags: [sales, revenue, dashboard, bi]
```

Template Application Workflow
When a user applies a template, the platform follows this workflow:
Step-by-Step Process
| Step | Action | Component |
|---|---|---|
| 1 | User browses or searches for templates | Marketplace UI or API |
| 2 | User selects a template and reviews its description and parameters | Workbench gallery |
| 3 | User provides parameter values (data source, names, options) | Parameter form |
| 4 | Platform validates parameters against the template's JSON Schema | Config Service |
| 5 | Platform renders the template with the provided parameters | Template engine |
| 6 | Platform creates the resulting resources (dashboard, pipeline, experiment, etc.) | Target service |
| 7 | Platform confirms creation and provides a link to the new resource | Workbench UI |
Template Rendering Engine
Templates use a Mustache-compatible rendering engine with the following built-in helpers:
| Helper | Description | Example |
|---|---|---|
{{param}} | Simple parameter substitution | {{name}} renders to My Dashboard |
{{#if param}}...{{/if}} | Conditional rendering | {{#if enableStreaming}}streaming: true{{/if}} |
{{#each items}}...{{/each}} | Iteration over arrays | {{#each columns}}{{name}}: {{type}}{{/each}} |
{{uppercase param}} | Convert to uppercase | {{uppercase name}} renders to MY_DASHBOARD |
{{lowercase param}} | Convert to lowercase | {{lowercase name}} renders to my_dashboard |
{{slugify param}} | Convert to URL-safe slug | {{slugify name}} renders to my-dashboard |
{{timestamp}} | Current ISO 8601 timestamp | 2026-02-12T10:30:00.000Z |
{{uuid}} | Generate a new UUID | a1b2c3d4-e5f6-7890-abcd-ef1234567890 |
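A minimal sketch of the substitution and string helpers above (conditionals, iteration, timestamp, and uuid are omitted; the platform's engine is Mustache-compatible and richer than this):

```python
import re

def render(template: str, params: dict) -> str:
    """Substitute {{param}} and apply the uppercase/lowercase/slugify
    helpers, matching the examples in the helper table."""
    helpers = {
        "uppercase": lambda v: re.sub(r"[^A-Z0-9]+", "_", v.upper()).strip("_"),
        "lowercase": lambda v: re.sub(r"[^a-z0-9]+", "_", v.lower()).strip("_"),
        "slugify":   lambda v: re.sub(r"[^a-z0-9]+", "-", v.lower()).strip("-"),
    }

    def repl(match):
        parts = match.group(1).split()
        if len(parts) == 1:                       # {{param}}
            return str(params[parts[0]])
        helper, name = parts                      # {{helper param}}
        return helpers[helper](str(params[name]))

    return re.sub(r"\{\{\s*([^{}]+?)\s*\}\}", repl, template)

print(render("{{slugify name}}", {"name": "My Dashboard"}))  # my-dashboard
```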
Template Testing
Before publishing a template, run the template test suite:
```
cd templates/{category}/{template-name}
matih template test --params test-params.json
```

The test suite validates:
| Check | Description |
|---|---|
| Schema Validation | All required parameters are defined and typed correctly |
| Rendering | Template renders without errors with test parameter values |
| Resource Validation | Rendered resources pass the target service's validation (e.g., valid dashboard JSON, valid pipeline DAG) |
| Idempotency | Applying the same template twice with the same parameters does not create duplicate resources |
| Parameter Boundaries | Template handles edge cases (empty strings, maximum values, special characters) |
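The idempotency check can be emulated with an in-memory registry keyed on a deterministic identity — here, template name plus slugified instance name, an assumed keying scheme rather than the platform's actual one:

```python
import re

def apply_template(registry: dict, template: str, params: dict) -> str:
    """Apply a template into a fake registry; a repeat apply with the same
    identity is a no-op rather than a duplicate resource."""
    slug = re.sub(r"[^a-z0-9]+", "-", params["name"].lower()).strip("-")
    key = f"{template}:{slug}"
    registry.setdefault(key, dict(params))  # no-op if already present
    return key

reg = {}
apply_template(reg, "executive-summary", {"name": "Q4 Summary"})
apply_template(reg, "executive-summary", {"name": "Q4 Summary"})
```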
Template Categories Summary
| Category | Directory | Count | Target Service |
|---|---|---|---|
| Business Intelligence | templates/bi/ | 8 dashboard templates | BI Service |
| Data Engineering | templates/data/ | 8 pipeline + 4 dbt + 4 quality templates | Pipeline Service, Data Quality Service |
| Machine Learning | templates/ml/ | 8 experiment + 5 training + 5 serving + 4 monitoring templates | ML Service |
| Agentic AI | templates/agentic/ | 6 agent workflow templates | AI Service |
| Ontology | templates/ontology/ | 6 domain ontology templates | Ontology Service |
| Notebooks | templates/notebooks/ | 10 Jupyter notebook templates | JupyterHub |
| Spark Jobs | templates/spark/ | 7 Spark application templates | Spark Operator |
Total: 75 templates across 7 categories
Each template is maintained as part of the platform repository and follows the same CI/CD process as application code: automated testing on pull requests, versioned releases, and documentation updates.