Lab 14: Responsible AI Audit
Overview

This lab audits a machine-learning model end to end for responsible use: quantifying fairness, explaining predictions with SHAP and LIME, documenting the model in a model card, assessing its impact on stakeholders, and recording decisions in an audit trail. The capstone ties the fairness metrics together in a reusable calculator.
Architecture
┌──────────────────────────────────────────────────────────────┐
│ Responsible AI Audit Framework │
├──────────────────────────────────────────────────────────────┤
│ FAIRNESS ASSESSMENT │ EXPLAINABILITY │
│ ──────────────────── │ ────────────────── │
│ Demographic parity │ SHAP (global + local) │
│ Equalized odds │ LIME (local) │
│ Calibration │ Feature importance │
│ Disparate impact (80% rule) │ Counterfactuals │
├──────────────────────────────┴──────────────────────────────┤
│ MODEL CARD │ IMPACT ASSESSMENT │ AUDIT TRAIL │
│ Capabilities│ Stakeholder harm │ Immutable logs │
│ Limitations │ Risk register │ Decision records │
│ Metrics │ Mitigation plan │ Version history │
└──────────────────────────────────────────────────────────────┘

Step 1: Fairness Metrics Taxonomy
| Metric | Definition | Formula | Use Case |
|--------|------------|---------|----------|
| Demographic parity | Positive-prediction rate is equal across groups | P(Ŷ=1 \| A=0) = P(Ŷ=1 \| A=1) | Screening where selection rates should match across groups |
| Equalized odds | True-positive and false-positive rates are equal across groups | P(Ŷ=1 \| A=a, Y=y) equal for all a, for y ∈ {0, 1} | High-stakes decisions where error rates matter, e.g. lending |
| Calibration | Among instances given score s, the outcome rate is s in every group | P(Y=1 \| S=s, A=a) = s | Risk scores consumed by human decision-makers |
| Disparate impact (80% rule) | Selection-rate ratio between groups is at least 0.8 | P(Ŷ=1 \| A=0) / P(Ŷ=1 \| A=1) ≥ 0.8 | US employment-law heuristic (EEOC four-fifths rule) |
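As a concrete illustration, demographic parity and disparate impact (both named in the architecture diagram) can be computed directly from predictions and group labels. A minimal stdlib sketch — function names are illustrative, not from any fairness library:

```python
def selection_rate(y_pred, group, g):
    """Fraction of group g that received a positive prediction: P(Y_hat=1 | A=g)."""
    preds = [p for p, a in zip(y_pred, group) if a == g]
    return sum(preds) / len(preds)

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between group 1 and group 0."""
    return selection_rate(y_pred, group, 1) - selection_rate(y_pred, group, 0)

def disparate_impact_ratio(y_pred, group):
    """Ratio of selection rates; the 80% rule flags values below 0.8.
    Group 1 is assumed to be the reference (advantaged) group."""
    return selection_rate(y_pred, group, 0) / selection_rate(y_pred, group, 1)
```

For example, if group 1 is selected at a 75% rate and group 0 at 25%, the ratio is 1/3, which fails the 80% rule.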
Step 2: Bias Detection
| Source | Description | Example |
|--------|-------------|---------|
| Historical bias | Training data reflects past discrimination, which the model reproduces | Hiring records from years when a group was rarely hired |
| Representation bias | Some groups are under-sampled, so the model generalizes poorly for them | Face datasets dominated by one demographic |
| Measurement bias | The measured proxy means different things for different groups | Arrest records used as a proxy for crime |
| Label bias | Annotator judgment or proxy labels encode prejudice into the targets | Labels derived from past loan officers' decisions |
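One simple check for representation bias is to compare each group's share of the training sample against its share of a reference population. A minimal sketch — the function and field names are illustrative assumptions:

```python
def representation_gap(sample_groups, reference_shares):
    """Compare each group's share of the sample to its reference share.

    sample_groups:     list of group labels, one per training example
    reference_shares:  {group: expected population share}
    Returns {group: sample_share - reference_share}; large negative
    values indicate the group is under-represented in the sample.
    """
    n = len(sample_groups)
    gaps = {}
    for g, ref in reference_shares.items():
        share = sample_groups.count(g) / n
        gaps[g] = share - ref
    return gaps
```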
Step 3: SHAP Explainability
| Plot | Shows | Use |
|------|-------|-----|
| Summary (beeswarm) | Every sample's SHAP value per feature, colored by feature value | Global importance plus direction of effect |
| Bar | Mean absolute SHAP value per feature | Quick global importance ranking |
| Dependence | SHAP value vs. feature value, optionally colored by an interacting feature | Shape of a feature's effect and interactions |
| Force / waterfall | How each feature pushes one prediction away from the base value | Explaining a single decision (local) |
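SHAP's attributions are Shapley values from cooperative game theory; for a handful of features they can be computed exactly by averaging each feature's marginal contribution over all coalitions. This library-free sketch shows the underlying math (the toy model and the baseline-substitution value function are assumptions, not the `shap` package):

```python
from itertools import combinations
from math import factorial

def exact_shapley(predict, x, baseline):
    """Exact Shapley values for one instance x.

    Features absent from a coalition are replaced by their baseline value.
    Attributions sum to predict(x) - predict(baseline) (additivity).
    """
    n = len(x)
    phi = [0.0] * n
    features = range(n)
    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in features]
                without_i = [x[j] if j in S else baseline[j] for j in features]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi
```

For a linear model f(x) = Σ wⱼxⱼ with a zero baseline, this reduces to φᵢ = wᵢxᵢ, which makes the result easy to verify by hand.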
Step 4: LIME Explainability
| Dimension | LIME | SHAP |
|-----------|------|------|
| Scope | Local only | Local and global |
| Theory | Local surrogate model fit to perturbed samples | Shapley values from cooperative game theory |
| Consistency | Can vary between runs (random sampling) | Attributions are additive and sum to the prediction |
| Speed | Fast per instance | Slower in general; TreeSHAP is fast for tree models |
Step 5: Model Cards

A model card documents what a model can do (capabilities), where it fails (limitations), and how it performed in evaluation (metrics), so downstream users can judge whether it fits their use case.
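A model card can be kept as structured data that ships alongside the model. This dataclass mirrors the three boxes in the architecture diagram; the class and field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Structured model documentation: capabilities, limitations, metrics."""
    name: str
    version: str
    capabilities: list = field(default_factory=list)   # intended uses
    limitations: list = field(default_factory=list)    # known failure modes
    metrics: dict = field(default_factory=dict)        # evaluation results

    def to_dict(self) -> dict:
        """Serialize for storage next to the model artifact."""
        return asdict(self)
```

Usage: populate the card at training time and version it with the model, so the documented metrics always match the deployed artifact.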
Step 6: AI Impact Assessment

An impact assessment identifies which stakeholders could be harmed by the system, records each risk in a risk register, and assigns a mitigation plan to every entry.
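The risk register can be a simple list of entries scored by likelihood × severity, each with a mitigation. A minimal sketch — the 1–5 scales and the high-risk threshold are assumptions for illustration, not a standard:

```python
def risk_score(likelihood: int, severity: int) -> int:
    """Score a risk on an assumed 1-5 likelihood x 1-5 severity scale."""
    return likelihood * severity

def high_risks(register, threshold=15):
    """Return register entries whose score meets the threshold, highest first."""
    scored = [(risk_score(r["likelihood"], r["severity"]), r) for r in register]
    return [r for s, r in sorted(scored, key=lambda t: -t[0]) if s >= threshold]
```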
Step 7: Audit Trail Architecture

An audit trail preserves immutable logs, decision records, and version history so that any individual decision can be reconstructed and reviewed after the fact.
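The "immutable logs" box can be approximated with a hash chain: each entry stores the hash of the previous entry, so altering any record breaks verification of everything after it. A minimal stdlib sketch (class and field names are illustrative):

```python
import hashlib
import json

class AuditTrail:
    """Append-only log where each entry is chained to the previous entry's hash."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> None:
        """Add a record, chaining it to the hash of the previous entry."""
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev_hash, "hash": entry_hash})

    def verify(self) -> bool:
        """Recompute the chain; any tampered or reordered entry returns False."""
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            if e["prev"] != prev:
                return False
            if e["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True
```

In production the same idea is usually delegated to append-only storage with restricted write access; the chain only detects tampering, it does not prevent it.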
Step 8: Capstone — Fairness Metrics Calculator

The capstone combines the metrics from Step 1 into a single calculator that reports demographic parity, equalized odds, and disparate impact for a batch of predictions.
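One way to structure the capstone is a class that computes all group metrics from one set of labels, predictions, and group memberships. A sketch under the assumption of a binary sensitive attribute with group 1 as the reference group; the class and method names are illustrative:

```python
class FairnessReport:
    """Group fairness metrics for binary predictions and a binary group attribute."""

    def __init__(self, y_true, y_pred, group):
        self.y_true, self.y_pred, self.group = y_true, y_pred, group

    def _rate(self, g, condition=None):
        """P(Y_hat=1 | A=g), optionally restricted to rows with Y=condition."""
        rows = [p for p, t, a in zip(self.y_pred, self.y_true, self.group)
                if a == g and (condition is None or t == condition)]
        return sum(rows) / len(rows)

    def demographic_parity_diff(self):
        return self._rate(1) - self._rate(0)

    def disparate_impact(self):
        # Group 1 assumed to be the reference (advantaged) group.
        return self._rate(0) / self._rate(1)

    def equalized_odds_diff(self):
        """Larger of the TPR gap and FPR gap between groups."""
        tpr_gap = abs(self._rate(1, condition=1) - self._rate(0, condition=1))
        fpr_gap = abs(self._rate(1, condition=0) - self._rate(0, condition=0))
        return max(tpr_gap, fpr_gap)

    def report(self):
        return {
            "demographic_parity_diff": self.demographic_parity_diff(),
            "disparate_impact": self.disparate_impact(),
            "equalized_odds_diff": self.equalized_odds_diff(),
            "passes_80_percent_rule": self.disparate_impact() >= 0.8,
        }
```

The report dictionary is a natural candidate for the `metrics` field of the Step 5 model card and for the Step 7 audit log.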
Summary

| Concept | Key Points |
|---------|------------|
| Fairness metrics | Demographic parity, equalized odds, calibration, disparate impact (80% rule) |
| Bias detection | Identify where bias enters the pipeline: the data collected, how it is measured, and how it is labeled |
| Explainability | SHAP for global and local attributions; LIME for local surrogate explanations |
| Model cards | Document capabilities, limitations, and evaluation metrics |
| Impact assessment | Identify stakeholder harms; maintain a risk register and mitigation plan |
| Audit trail | Immutable logs, decision records, and version history |
Last updated
