MATIH Platform is in active MVP development. Documentation reflects current implementation status.
12. AI Service
ML Integration

ML Integration Overview

The ML Integration module bridges the AI Service with the ML Service, enabling conversational interfaces for machine learning workflows. Through this integration, users can train models, tune hyperparameters, engineer features, serve predictions, manage the model registry, track experiments, and perform exploratory data analysis, all through natural language or the ML API endpoints.


Integration Architecture

The ML Integration operates as a feature-flagged module within the AI Service, controlled by MODULE_ML_ENABLED:

BI Workbench / ML Workbench
         |
   AI Service (ML Module)
         |
    +----+----+----+----+----+----+
    |    |    |    |    |    |    |
 Train  Tune  Feat  Serve Reg  Track  EDA
    |    |    |    |    |    |    |
    +----+----+----+----+----+----+
         |
    ML Service (Ray AIR)
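The MODULE_ML_ENABLED gate above can be sketched as a simple environment check. This is illustrative only; the function name and exact parsing semantics (default "true", anything other than "false" enables the module) are assumptions, not the platform's actual code:

```python
import os

def ml_module_enabled() -> bool:
    """Return True when the ML Integration module should be mounted.

    Assumed semantics: MODULE_ML_ENABLED defaults to "true" and any
    value other than "false" keeps the module enabled.
    """
    return os.getenv("MODULE_ML_ENABLED", "true").strip().lower() != "false"
```

The AI Service would consult this check at startup before registering the ML routers and agents.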

Module Components

| Component | Description | Source |
| --- | --- | --- |
| Model Training | Submit and monitor training jobs | src/ml/training/ |
| Hyperparameter Tuning | Configure and launch HPO sweeps | src/ml/tuning/ |
| Feature Engineering | Feature set creation and management | src/ml/features/ |
| Model Serving | Deploy models for real-time inference | src/ml/serving/ |
| Model Registry | Version, stage, and catalog models | src/ml/registry/ |
| Experiment Tracking | Track runs, metrics, and artifacts | src/ml/tracking/ |
| Exploratory Data Analysis | Statistical profiling and visualization | src/ml/eda/ |

Communication Pattern

The AI Service communicates with the ML Service over HTTP REST:

```python
import httpx  # assumed transport; the actual client may use a different HTTP library

class MLServiceClient:
    def __init__(self, base_url: str):
        self.base_url = base_url  # e.g. http://ml-service:8000
        self._client = httpx.AsyncClient(base_url=base_url)

    async def _post(self, path: str, payload: dict) -> dict:
        # Shared POST helper: send JSON, raise on HTTP errors, decode JSON body.
        response = await self._client.post(path, json=payload)
        response.raise_for_status()
        return response.json()

    async def submit_training_job(self, config: TrainingConfig) -> TrainingJob:
        response = await self._post("/api/v1/training/submit", config.dict())
        return TrainingJob(**response)

    async def get_prediction(self, model_id: str, features: dict) -> Prediction:
        response = await self._post(f"/api/v1/serving/{model_id}/predict", features)
        return Prediction(**response)
```
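The TrainingConfig, TrainingJob, and Prediction types referenced by the client are defined elsewhere in the codebase. A minimal stand-in, with field names that are purely illustrative and not taken from the source, could look like:

```python
from dataclasses import dataclass, asdict, field

@dataclass
class TrainingConfig:
    dataset_id: str
    target_column: str
    algorithm: str = "xgboost"  # illustrative default, not from the source
    hyperparameters: dict = field(default_factory=dict)

    def dict(self) -> dict:
        # Mirrors the pydantic-style .dict() call made by MLServiceClient.
        return asdict(self)

@dataclass
class TrainingJob:
    job_id: str
    status: str

@dataclass
class Prediction:
    model_id: str
    prediction: float
```

Whatever the real field set is, the key contract is that TrainingConfig serializes to the JSON body of /api/v1/training/submit, and the response bodies deserialize into TrainingJob and Prediction.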

Conversational ML

Users can interact with ML workflows through natural language in the chat interface:

| User Query | ML Action | Agent Involved |
| --- | --- | --- |
| "Train a model to predict customer churn" | Submit training job | ML Training Agent |
| "Tune the learning rate for my churn model" | Launch HPO sweep | ML Tuning Agent |
| "What features are most important for churn?" | Feature importance analysis | ML Analysis Agent |
| "Deploy the best churn model to production" | Model deployment | ML Serving Agent |
| "How accurate is my deployed model?" | Performance metrics query | ML Monitoring Agent |
| "Profile the customer dataset" | EDA execution | ML EDA Agent |
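A rough illustration of how queries like those above might map to agents, here with naive keyword matching. The real service presumably uses LLM-based intent classification; the function name and rules are hypothetical:

```python
def route_ml_query(query: str) -> str:
    """Toy keyword router: map a user query to an ML agent name."""
    q = query.lower()
    if "train" in q:
        return "ML Training Agent"
    if "tune" in q or "hyperparameter" in q:
        return "ML Tuning Agent"
    if "accurate" in q or "metric" in q:
        # Checked before "deploy" so "my deployed model" queries about
        # accuracy are not misrouted to serving.
        return "ML Monitoring Agent"
    if "deploy" in q:
        return "ML Serving Agent"
    if "profile" in q or "distribution" in q:
        return "ML EDA Agent"
    return "ML Analysis Agent"
```

The ordering matters even in this sketch: "How accurate is my deployed model?" contains both "accurate" and "deployed", so the monitoring rule must win.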

Configuration

| Environment Variable | Default | Description |
| --- | --- | --- |
| MODULE_ML_ENABLED | true | Enable ML Integration module |
| ML_SERVICE_URL | http://ml-service:8000 | ML Service base URL |
| ML_SERVICE_TIMEOUT | 30 | Request timeout in seconds |
| ML_MAX_TRAINING_JOBS | 5 | Max concurrent training jobs per tenant |
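A sketch of loading these settings with the documented defaults. The class and function names are assumptions for illustration, not the platform's actual configuration code:

```python
import os
from dataclasses import dataclass
from typing import Mapping, Optional

@dataclass(frozen=True)
class MLSettings:
    enabled: bool
    service_url: str
    timeout_seconds: int
    max_training_jobs: int

def load_ml_settings(env: Optional[Mapping[str, str]] = None) -> MLSettings:
    # Fall back to the documented defaults when a variable is unset.
    e = env if env is not None else os.environ
    return MLSettings(
        enabled=e.get("MODULE_ML_ENABLED", "true").lower() == "true",
        service_url=e.get("ML_SERVICE_URL", "http://ml-service:8000"),
        timeout_seconds=int(e.get("ML_SERVICE_TIMEOUT", "30")),
        max_training_jobs=int(e.get("ML_MAX_TRAINING_JOBS", "5")),
    )
```

Passing an explicit mapping instead of reading os.environ directly keeps the loader easy to test.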

Detailed Sections

| Section | Content |
| --- | --- |
| Model Training | Training job submission, monitoring, and results |
| Hyperparameter Tuning | Search strategies, parameter spaces, scheduling |
| Feature Engineering | Feature sets, transformations, and feature store |
| Model Serving | Deployment, inference, and scaling |
| Model Registry | Versioning, staging, and lifecycle |
| Experiment Tracking | Runs, metrics, comparisons, and artifacts |
| Exploratory Data Analysis | Statistical profiling, distributions, and correlations |