
Fairness and Bias

The Fairness module provides tools for detecting and mitigating bias in machine learning models. It computes fairness metrics across protected attributes (such as gender, age group, and ethnicity), identifies disparate impact, and recommends mitigation strategies. The implementation follows established fairness criteria, including demographic parity, equalized odds, and calibration.


Fairness Metrics

| Metric | Definition | Threshold |
| --- | --- | --- |
| Demographic Parity | Equal positive prediction rate across groups | Ratio above 0.8 |
| Equalized Odds | Equal TPR and FPR across groups | Difference below 0.1 |
| Equal Opportunity | Equal TPR across groups | Difference below 0.1 |
| Calibration | Equal accuracy of predicted probabilities across groups | Difference below 0.05 |
| Disparate Impact | Ratio of positive outcomes between groups | Ratio above 0.8 |
| Predictive Parity | Equal precision across groups | Difference below 0.1 |
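
For intuition, the ratio- and difference-based checks in the table can be computed directly from model outputs grouped by a protected attribute. The snippet below is a minimal sketch of the demographic parity and equalized odds checks using pandas, not the platform's implementation; column names and the input file are illustrative.

```python
import pandas as pd

def demographic_parity_ratio(df: pd.DataFrame, pred_col: str, group_col: str) -> float:
    """Ratio of the lowest to the highest positive-prediction rate across groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return rates.min() / rates.max()

def equalized_odds_diff(df: pd.DataFrame, pred_col: str, label_col: str, group_col: str):
    """Largest TPR gap and largest FPR gap between any two groups."""
    tpr = df[df[label_col] == 1].groupby(group_col)[pred_col].mean()
    fpr = df[df[label_col] == 0].groupby(group_col)[pred_col].mean()
    return tpr.max() - tpr.min(), fpr.max() - fpr.min()

# Illustrative usage against the thresholds in the table above.
df = pd.read_csv("loan_applications.csv")          # hypothetical scored dataset
dp = demographic_parity_ratio(df, "predicted", "gender")
tpr_gap, fpr_gap = equalized_odds_diff(df, "predicted", "approved", "gender")
fair = dp >= 0.8 and tpr_gap <= 0.1 and fpr_gap <= 0.1
```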

Run Fairness Assessment

POST /api/v1/governance/fairness
{
  "model_id": "model-xyz789",
  "dataset": {
    "source": "sql",
    "query": "SELECT * FROM ml_features.loan_applications"
  },
  "target_column": "approved",
  "protected_attributes": ["gender", "age_group", "ethnicity"],
  "metrics": ["demographic_parity", "equalized_odds", "disparate_impact"],
  "threshold": 0.8
}
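
The endpoint can be called from any HTTP client. A minimal sketch with Python's requests library is shown below; the host and bearer token are placeholders, not values defined by the platform.

```python
import requests

payload = {
    "model_id": "model-xyz789",
    "dataset": {"source": "sql", "query": "SELECT * FROM ml_features.loan_applications"},
    "target_column": "approved",
    "protected_attributes": ["gender", "age_group", "ethnicity"],
    "metrics": ["demographic_parity", "equalized_odds", "disparate_impact"],
    "threshold": 0.8,
}

resp = requests.post(
    "https://ml-service.example.com/api/v1/governance/fairness",  # placeholder host
    json=payload,
    headers={"Authorization": "Bearer <token>"},                  # placeholder credentials
    timeout=60,
)
resp.raise_for_status()
report = resp.json()
```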

Response

{
  "model_id": "model-xyz789",
  "overall_fair": false,
  "groups_analyzed": 3,
  "results": [
    {
      "attribute": "gender",
      "groups": ["male", "female"],
      "metrics": {
        "demographic_parity": {"ratio": 0.92, "pass": true},
        "equalized_odds": {"tpr_diff": 0.05, "fpr_diff": 0.03, "pass": true},
        "disparate_impact": {"ratio": 0.91, "pass": true}
      },
      "fair": true
    },
    {
      "attribute": "age_group",
      "groups": ["18-30", "31-50", "51+"],
      "metrics": {
        "demographic_parity": {"ratio": 0.72, "pass": false},
        "equalized_odds": {"tpr_diff": 0.15, "fpr_diff": 0.08, "pass": false},
        "disparate_impact": {"ratio": 0.74, "pass": false}
      },
      "fair": false,
      "flagged_groups": ["51+"]
    }
  ],
  "recommendations": [
    "Age group '51+' shows significant bias in approval rate",
    "Consider reweighting training data or applying post-processing calibration"
  ]
}
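
When overall_fair is false, the per-attribute results identify which checks failed. A short sketch of how a client might surface the flagged attributes and recommendations, continuing from the request example above:

```python
flagged = [
    (r["attribute"], r.get("flagged_groups", []))
    for r in report["results"]
    if not r["fair"]
]
if not report["overall_fair"]:
    print("Fairness assessment failed for:", flagged)
    for rec in report["recommendations"]:
        print(" -", rec)
```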

Bias Mitigation Strategies

| Strategy | Phase | Method |
| --- | --- | --- |
| Reweighting | Pre-processing | Adjust sample weights to equalize group representation |
| Resampling | Pre-processing | Oversample underrepresented groups |
| Disparate Impact Remover | Pre-processing | Transform features to remove disparate impact |
| Adversarial Debiasing | In-processing | Add an adversarial fairness constraint during training |
| Calibrated Equalized Odds | Post-processing | Adjust the decision threshold per group |
| Reject Option Classification | Post-processing | Assign the favorable outcome to disadvantaged groups for predictions near the decision boundary |
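
Of the pre-processing strategies, reweighting is the simplest to illustrate: each training example receives a weight so that the protected attribute and the label appear statistically independent in the weighted data. The sketch below follows the standard reweighing scheme with illustrative column names; it is not the platform's implementation.

```python
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Weight each row by P(group) * P(label) / P(group, label), which makes
    group membership and the label independent under the weighted distribution."""
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)
    return df.apply(
        lambda row: p_group[row[group_col]] * p_label[row[label_col]]
        / p_joint[(row[group_col], row[label_col])],
        axis=1,
    )

# The resulting weights can be passed to most trainers, e.g.
# model.fit(X, y, sample_weight=reweighing_weights(train_df, "age_group", "approved"))
```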

Apply Mitigation

POST /api/v1/governance/fairness/mitigate
{
  "model_id": "model-xyz789",
  "strategy": "calibrated_equalized_odds",
  "protected_attribute": "age_group",
  "target_metric": "equalized_odds",
  "target_threshold": 0.1
}
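
The calibrated_equalized_odds strategy adjusts the decision threshold separately for each group so that error rates line up. The toy sketch below conveys the idea only, not the platform's algorithm: for each group, pick the score threshold whose TPR is closest to a reference group's TPR. Function and column names are illustrative.

```python
import numpy as np
import pandas as pd

def per_group_thresholds(df: pd.DataFrame, reference_group: str,
                         base_threshold: float = 0.5) -> dict:
    """Pick a score threshold per group so each group's TPR tracks the reference group's TPR.
    Expects columns: score (model probability), label (ground truth), group (protected attribute)."""
    ref_pos = df[(df["group"] == reference_group) & (df["label"] == 1)]
    target_tpr = (ref_pos["score"] >= base_threshold).mean()

    thresholds = {}
    candidates = np.linspace(0.05, 0.95, 19)
    for group, part in df.groupby("group"):
        pos = part[part["label"] == 1]
        if pos.empty:
            thresholds[group] = base_threshold
            continue
        tprs = np.array([(pos["score"] >= t).mean() for t in candidates])
        thresholds[group] = float(candidates[np.argmin(np.abs(tprs - target_tpr))])
    return thresholds
```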

Fairness Monitoring

In production, fairness metrics are tracked over time to detect drift in model fairness:

| Metric | Frequency | Alert Threshold |
| --- | --- | --- |
| Demographic parity ratio | Hourly | Below 0.8 |
| Equalized odds difference | Hourly | Above 0.15 |
| Disparate impact ratio | Daily | Below 0.75 |
| Group-level accuracy | Daily | Divergence above 10% |
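
A minimal sketch of what one such periodic check could look like, using the alert thresholds from the table above. It assumes the latest window of scored predictions (with ground-truth labels where available) has already been loaded into a DataFrame; column names are illustrative and this is not the platform's monitoring code.

```python
import pandas as pd

def fairness_alerts(window: pd.DataFrame, group_col: str,
                    pred_col: str, label_col: str) -> list[str]:
    """Return the names of monitored metrics that breach their alert thresholds."""
    alerts = []

    pos_rate = window.groupby(group_col)[pred_col].mean()
    if pos_rate.min() / pos_rate.max() < 0.8:
        alerts.append("demographic_parity_ratio")

    tpr = window[window[label_col] == 1].groupby(group_col)[pred_col].mean()
    fpr = window[window[label_col] == 0].groupby(group_col)[pred_col].mean()
    if max(tpr.max() - tpr.min(), fpr.max() - fpr.min()) > 0.15:
        alerts.append("equalized_odds_difference")

    correct = (window[pred_col] == window[label_col]).astype(float)
    acc = correct.groupby(window[group_col]).mean()
    if acc.max() - acc.min() > 0.10:
        alerts.append("group_level_accuracy")

    # The disparate impact ratio (alert below 0.75) would be checked the same way as pos_rate.
    return alerts
```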

Configuration

| Environment Variable | Default | Description |
| --- | --- | --- |
| FAIRNESS_THRESHOLD | 0.8 | Default fairness ratio threshold |
| FAIRNESS_PROTECTED_ATTRIBUTES | gender,age,ethnicity | Default protected attributes |
| FAIRNESS_MONITORING_INTERVAL | 3600 | Monitoring interval in seconds |
| FAIRNESS_ALERT_ENABLED | true | Enable fairness degradation alerts |
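
These values can be overridden per deployment through the environment. A small sketch of how a service process might parse them into typed settings; the settings class itself is illustrative and not part of the platform.

```python
import os
from dataclasses import dataclass, field

def _env_list(name: str, default: str) -> list[str]:
    # Comma-separated environment variable -> list of attribute names.
    return os.environ.get(name, default).split(",")

@dataclass
class FairnessSettings:
    threshold: float = float(os.environ.get("FAIRNESS_THRESHOLD", "0.8"))
    protected_attributes: list[str] = field(
        default_factory=lambda: _env_list("FAIRNESS_PROTECTED_ATTRIBUTES", "gender,age,ethnicity")
    )
    monitoring_interval: int = int(os.environ.get("FAIRNESS_MONITORING_INTERVAL", "3600"))
    alert_enabled: bool = os.environ.get("FAIRNESS_ALERT_ENABLED", "true").lower() == "true"

settings = FairnessSettings()
```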