MATIH Platform is in active MVP development. Documentation reflects current implementation status.
Hyperparameter Tuning

The Hyperparameter Tuning integration provides automated search over model hyperparameter spaces using Ray Tune. Users can define parameter ranges, select search strategies, and configure early stopping criteria through the AI Service conversational interface or REST API.


Tuning Workflow

  1. Define space: Specify parameter ranges and distributions
  2. Select strategy: Choose a search algorithm (grid, random, Bayesian, etc.)
  3. Submit sweep: Launch the tuning job via the ML Service
  4. Monitor: Track trial progress and intermediate metrics
  5. Select best: Retrieve the best configuration and retrain
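The workflow above can be sketched as a short client script. The base URL and the response shape are assumptions for illustration; only the request-body fields mirror the tuning configuration format documented on this page.

```python
import json
import urllib.request

# Hypothetical service address; the real host depends on your deployment.
BASE_URL = "http://localhost:8080/api/v1/ml"

def build_sweep_request(model_name, strategy, max_trials, metric, space):
    """Steps 1-2: assemble the parameter space and search strategy."""
    return {
        "model_name": model_name,
        "search_strategy": strategy,
        "max_trials": max_trials,
        "metric": metric,
        "mode": "max",
        "parameter_space": space,
    }

def submit_sweep(payload):
    """Step 3: launch the tuning job via POST /api/v1/ml/tune."""
    req = urllib.request.Request(
        f"{BASE_URL}/tune",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # assumed to echo back a sweep_id

payload = build_sweep_request(
    "churn-predictor",
    "bayesian",
    50,
    "f1_score",
    {"max_depth": {"type": "randint", "lower": 3, "upper": 12}},
)
```

Steps 4 and 5 (monitoring and retrieving the best trial) use the sweep status endpoints described below.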

Search Strategies

| Strategy | Description | Best For |
|---|---|---|
| Grid Search | Exhaustive search over all combinations | Small parameter spaces |
| Random Search | Random sampling from distributions | Large parameter spaces |
| Bayesian (Optuna) | Sequential model-based optimization | Moderate spaces with expensive trials |
| HyperBand | Adaptive resource allocation with early stopping | Many trials with limited budget |
| ASHA | Asynchronous successive halving | Distributed tuning at scale |
| PBT | Population-based training | Neural network training |

Parameter Space Definition

```json
{
  "model_name": "churn-predictor",
  "search_strategy": "bayesian",
  "max_trials": 50,
  "metric": "f1_score",
  "mode": "max",
  "parameter_space": {
    "n_estimators": {"type": "choice", "values": [50, 100, 200, 500]},
    "max_depth": {"type": "randint", "lower": 3, "upper": 12},
    "learning_rate": {"type": "loguniform", "lower": 0.001, "upper": 0.3},
    "subsample": {"type": "uniform", "lower": 0.6, "upper": 1.0},
    "colsample_bytree": {"type": "uniform", "lower": 0.6, "upper": 1.0}
  },
  "early_stopping": {
    "patience": 10,
    "min_delta": 0.001
  }
}
```
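Each trial draws one concrete configuration from this space. The sketch below shows how the four parameter types could be sampled in plain Python; it illustrates the semantics of the spec rather than the service's actual sampler. Note that bound conventions vary between libraries (Ray Tune's `tune.randint`, for example, treats the upper bound as exclusive), so the inclusive bounds used here are an assumption.

```python
import math
import random

def sample_parameter(spec, rng=random):
    """Draw one value from a parameter spec of the form shown above."""
    t = spec["type"]
    if t == "choice":
        return rng.choice(spec["values"])
    if t == "randint":
        # Inclusive bounds assumed here; check the service's convention.
        return rng.randint(spec["lower"], spec["upper"])
    if t == "uniform":
        return rng.uniform(spec["lower"], spec["upper"])
    if t == "loguniform":
        # Sample uniformly in log space, then exponentiate, so that each
        # order of magnitude is equally likely.
        lo, hi = math.log(spec["lower"]), math.log(spec["upper"])
        return math.exp(rng.uniform(lo, hi))
    raise ValueError(f"unknown parameter type: {t}")

space = {
    "n_estimators": {"type": "choice", "values": [50, 100, 200, 500]},
    "max_depth": {"type": "randint", "lower": 3, "upper": 12},
    "learning_rate": {"type": "loguniform", "lower": 0.001, "upper": 0.3},
}
config = {name: sample_parameter(spec) for name, spec in space.items()}
```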

Tuning API

Start Tuning Sweep

POST /api/v1/ml/tune

Get Sweep Status

GET /api/v1/ml/tune/:sweep_id

Get Best Trial

GET /api/v1/ml/tune/:sweep_id/best

List All Trials

GET /api/v1/ml/tune/:sweep_id/trials
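Combining the status and best-trial endpoints, a client can poll until the sweep finishes and then fetch the winning configuration. The host, polling interval, and terminal status values below are assumptions.

```python
import json
import time
import urllib.request

# Hypothetical service address; the real host depends on your deployment.
BASE_URL = "http://localhost:8080/api/v1/ml"

def sweep_url(sweep_id, suffix=""):
    """Build the URL for a sweep resource, e.g. sweep_url(sid, '/best')."""
    return f"{BASE_URL}/tune/{sweep_id}{suffix}"

def wait_for_best(sweep_id, interval=30.0):
    """Poll GET /tune/:sweep_id until a terminal status, then GET /best."""
    while True:
        with urllib.request.urlopen(sweep_url(sweep_id)) as resp:
            status = json.load(resp)
        # Terminal states are assumed; adjust to the service's actual values.
        if status["status"] in ("completed", "failed", "stopped"):
            break
        time.sleep(interval)
    with urllib.request.urlopen(sweep_url(sweep_id, "/best")) as resp:
        return json.load(resp)
```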

Tuning Results

```json
{
  "sweep_id": "sweep-abc123",
  "status": "completed",
  "total_trials": 50,
  "completed_trials": 48,
  "stopped_early": 2,
  "best_trial": {
    "trial_id": "trial-017",
    "parameters": {
      "n_estimators": 200,
      "max_depth": 8,
      "learning_rate": 0.05,
      "subsample": 0.85,
      "colsample_bytree": 0.78
    },
    "metrics": {
      "f1_score": 0.912,
      "accuracy": 0.95,
      "auc_roc": 0.97
    }
  },
  "duration_seconds": 3600
}
```
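Step 5 of the workflow (select best and retrain) only needs the `best_trial.parameters` object from this payload. A minimal extraction helper, assuming the response shape shown above:

```python
# Sample response, abbreviated from the payload above.
results = {
    "sweep_id": "sweep-abc123",
    "status": "completed",
    "completed_trials": 48,
    "best_trial": {
        "trial_id": "trial-017",
        "parameters": {"n_estimators": 200, "max_depth": 8, "learning_rate": 0.05},
        "metrics": {"f1_score": 0.912},
    },
}

def best_config(results, min_trials=1):
    """Return the winning hyperparameters, guarding against unfinished sweeps."""
    if results["status"] != "completed":
        raise RuntimeError(f"sweep {results['sweep_id']} is {results['status']}")
    if results["completed_trials"] < min_trials:
        raise RuntimeError("too few completed trials to trust the result")
    return results["best_trial"]["parameters"]

retrain_params = best_config(results)
```

The returned dictionary can be passed directly as the model's training parameters when retraining.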

Scheduler Configuration

| Scheduler | Description |
|---|---|
| FIFO | Runs trials in order (no early stopping) |
| MedianStopping | Stops trials below median performance |
| HyperBand | Brackets of successive halving |
| ASHA | Asynchronous version of HyperBand |
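To make the MedianStopping row concrete, the rule can be illustrated in a few lines: a trial is stopped when its metric at the current step is worse than the median of its peers' metrics at the same step. This is the idea only, not Ray Tune's implementation.

```python
import statistics

def should_stop(trial_metric, peer_metrics, mode="max"):
    """Median stopping rule (illustrative): stop a trial whose metric falls
    below (or above, for mode='min') the median of its peers at this step."""
    if len(peer_metrics) < 2:
        return False  # too few peers to form a meaningful median
    med = statistics.median(peer_metrics)
    return trial_metric < med if mode == "max" else trial_metric > med
```

HyperBand and ASHA go further by allocating a budget of training steps to brackets of trials and repeatedly halving, keeping only the best-performing fraction.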

Resource Allocation

Tuning jobs run trials in parallel on Ray, with resources allocated per trial:

| Setting | Default | Description |
|---|---|---|
| `max_concurrent_trials` | 4 | Parallel trials per sweep |
| `cpu_per_trial` | 2 | CPU cores per trial |
| `memory_per_trial` | 4 GB | Memory per trial |
| `gpu_per_trial` | 0 | GPUs per trial |
| `max_duration_hours` | 2 | Maximum sweep duration |
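With these settings, a sweep's peak footprint on the Ray cluster is the per-trial allocation multiplied by the concurrency. A quick way to check whether a sweep fits on available cluster capacity, using the defaults above:

```python
def sweep_resources(max_concurrent=4, cpu_per_trial=2,
                    memory_gb_per_trial=4, gpu_per_trial=0):
    """Peak cluster resources a sweep can consume at full concurrency."""
    return {
        "cpus": max_concurrent * cpu_per_trial,
        "memory_gb": max_concurrent * memory_gb_per_trial,
        "gpus": max_concurrent * gpu_per_trial,
    }
```

With the defaults, a sweep can occupy up to 8 CPU cores and 16 GB of memory at once; raising `max_concurrent_trials` scales that footprint linearly.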