Lab 10: EU AI Act Compliance

Time: 50 minutes | Level: Architect | Docker: docker run -it --rm zchencow/innozverse-ai:latest bash

Overview

The EU AI Act is the world's first comprehensive AI regulation, in force since August 2024 with obligations phasing in through 2027. This lab covers risk classification, high-risk AI requirements, GPAI obligations, conformity assessment, and comparison with NIST AI RMF — with a practical compliance checker.

Architecture

┌──────────────────────────────────────────────────────────────┐
│                    EU AI Act Risk Pyramid                    │
├──────────────────────────────────────────────────────────────┤
│  🔴 UNACCEPTABLE RISK - PROHIBITED                          │
│  Social scoring, real-time biometric surveillance (public), │
│  subliminal manipulation, emotion recognition (work/school)  │
├──────────────────────────────────────────────────────────────┤
│  🟠 HIGH RISK - REGULATED (Chapter III)                     │
│  Critical infrastructure, education, employment, credit,    │
│  law enforcement, migration, justice administration          │
├──────────────────────────────────────────────────────────────┤
│  🟡 LIMITED RISK - TRANSPARENCY OBLIGATIONS                 │
│  Chatbots (disclose AI), deepfakes (label), emotion AI      │
├──────────────────────────────────────────────────────────────┤
│  🟢 MINIMAL RISK - FREE USE                                 │
│  Spam filters, AI games, recommendation systems             │
└──────────────────────────────────────────────────────────────┘

Step 1: Risk Tier Classification

Unacceptable Risk (Article 5 - PROHIBITED):

| System | Prohibition |
|---|---|
| Social scoring by governments | Mass surveillance, scoring citizens by social behavior |
| Real-time biometric ID in public | Facial recognition in public spaces (exceptions: terrorism, missing persons) |
| Subliminal manipulation | Bypassing conscious decision-making in ways that cause harm |
| Exploiting vulnerabilities | Targeting vulnerable groups (age, disability) |
| Predictive policing | Assessing individual crime risk based solely on profiling |
| Emotion recognition (work/schools) | Analyzing emotions of employees or students |

High-Risk (Annex III Categories):

| Category | Examples |
|---|---|
| Critical infrastructure | AI in electricity grids, water systems, transport |
| Education | Student admission, assessment systems |
| Employment | CV screening, promotion decisions, work monitoring |
| Essential services | Credit scoring, insurance risk, social benefits |
| Law enforcement | Polygraphs, deepfake detection, crime prediction |
| Migration | Asylum decisions, visa applications |
| Justice | Court decisions, evidence evaluation |
| Biometric categorization | Categorizing people by race, religion, political views |

💡 If your AI system directly influences a high-risk decision, you must comply with all Chapter III requirements. "Influencing" includes systems whose output is not legally binding but is practically determinative.
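The tiers above can be sketched as a simple lookup-based classifier. The use-case strings and tier assignments below are illustrative assumptions for the lab, not a legal determination:

```python
# Sketch of an EU AI Act risk-tier classifier.
# Use-case names and tier mappings are illustrative, not legal advice.

PROHIBITED = {"social_scoring", "realtime_public_biometric_id",
              "subliminal_manipulation", "workplace_emotion_recognition"}
HIGH_RISK = {"cv_screening", "credit_scoring", "exam_grading",
             "asylum_triage", "grid_control"}
LIMITED_RISK = {"customer_chatbot", "deepfake_generator"}

def classify(use_case: str) -> str:
    """Map a use case to its EU AI Act risk tier (illustrative)."""
    if use_case in PROHIBITED:
        return "UNACCEPTABLE"
    if use_case in HIGH_RISK:
        return "HIGH"
    if use_case in LIMITED_RISK:
        return "LIMITED"
    return "MINIMAL"

print(classify("cv_screening"))   # HIGH
print(classify("spam_filter"))    # MINIMAL
```

A real classifier would need legal review per use case; the point here is that tier assignment drives everything downstream.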


Step 2: High-Risk AI Requirements (Chapter III)

Organizations providing high-risk AI must implement:

1. Risk Management System (Article 9): a continuous, iterative process across the entire lifecycle — identify, estimate, and mitigate risks, and test the mitigations.

2. Data Governance (Article 10): training, validation, and test datasets must be relevant, representative, and examined for errors and bias.

3. Technical Documentation (Article 11): documentation prepared before market placement and kept up to date.

4. Transparency (Article 13): instructions for use that state the system's capabilities, limitations, and intended purpose.

5. Human Oversight (Article 14): measures that let a human understand, monitor, and override or stop the system.

6. Accuracy, Robustness, Cybersecurity (Article 15): appropriate and consistent performance throughout the lifecycle, with resilience against errors and attacks.
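The six requirement areas can be tracked per system as a simple checklist. The article numbers come from the Act; the data model and field names are an assumption for this lab:

```python
from dataclasses import dataclass, field

# Checklist for the six high-risk requirement areas (Articles 9-15).
# The completeness logic is an illustrative sketch, not a conformity tool.

REQUIREMENTS = {
    "risk_management": "Article 9",
    "data_governance": "Article 10",
    "technical_documentation": "Article 11",
    "transparency": "Article 13",
    "human_oversight": "Article 14",
    "accuracy_robustness_security": "Article 15",
}

@dataclass
class HighRiskCompliance:
    system_name: str
    completed: set = field(default_factory=set)

    def mark_done(self, requirement: str) -> None:
        if requirement not in REQUIREMENTS:
            raise ValueError(f"Unknown requirement: {requirement}")
        self.completed.add(requirement)

    def gaps(self) -> list:
        """Requirement areas still open, with their article numbers."""
        return [f"{r} ({REQUIREMENTS[r]})"
                for r in REQUIREMENTS if r not in self.completed]

record = HighRiskCompliance("cv-screening-v2")
record.mark_done("risk_management")
print(record.gaps())
```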


Step 3: GPAI (General-Purpose AI) Obligations

GPAI models (e.g. GPT-4, Claude, Gemini, Llama 2/3) have their own obligations, split into two tiers.

Tier 1 — All GPAI models: maintain technical documentation, provide information to downstream providers, put a copyright-compliance policy in place, and publish a summary of training content.

Tier 2 — GPAI with Systemic Risk (>10^25 FLOPs training compute): additionally perform model evaluations and adversarial testing, assess and mitigate systemic risks, report serious incidents, and ensure adequate cybersecurity.

GPAI Threshold Context: 10^25 FLOPs corresponds roughly to the largest frontier training runs; public estimates place GPT-4-class models around or above this mark.

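The threshold can be checked against the common training-compute approximation FLOPs ≈ 6 × parameters × training tokens. The model sizes below are hypothetical, not published figures for any real model:

```python
# Check a model against the Act's systemic-risk compute threshold.
# Uses the common 6*N*D rule of thumb for training FLOPs (approximate).

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs presumption threshold

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute: 6 * parameters * training tokens."""
    return 6 * params * tokens

def is_systemic_risk(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD

# Hypothetical model sizes, for illustration only:
print(is_systemic_risk(params=70e9, tokens=15e12))    # ~6.3e24 -> False
print(is_systemic_risk(params=1.0e12, tokens=20e12))  # ~1.2e26 -> True
```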

Step 4: Conformity Assessment & CE Marking

High-risk AI must undergo conformity assessment before market placement.

Assessment Paths:

| System Type | Assessment Route |
|---|---|
| High-risk (most) | Internal assessment with documentation |
| Biometric ID systems | Third-party Notified Body assessment |
| Remote biometric (law enforcement) | Notified Body + national authority |
| GPAI systemic risk | EU AI Office oversight |
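The routing table above can be expressed as a lookup. Category keys and route strings are simplified assumptions for this lab:

```python
# Simplified mapping from system type to conformity assessment route.

ASSESSMENT_ROUTES = {
    "high_risk_general": "internal assessment with documentation",
    "biometric_id": "third-party Notified Body assessment",
    "remote_biometric_law_enforcement": "Notified Body + national authority",
    "gpai_systemic_risk": "EU AI Office oversight",
}

def assessment_route(system_type: str) -> str:
    """Return the assessment route, failing loudly on unknown types."""
    try:
        return ASSESSMENT_ROUTES[system_type]
    except KeyError:
        raise ValueError(f"No route defined for: {system_type}") from None

print(assessment_route("biometric_id"))
```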

CE Marking Process: complete the conformity assessment, draw up the EU declaration of conformity, affix the CE marking, and keep documentation available for national authorities.

Registration Database: high-risk AI systems must be registered in the EU-wide database before being placed on the market.


Step 5: NIST AI RMF Comparison

NIST AI RMF (US, Voluntary) vs EU AI Act (EU, Mandatory):

| Dimension | NIST AI RMF | EU AI Act |
|---|---|---|
| Nature | Voluntary framework | Legal regulation |
| Jurisdiction | US (global influence) | EU (extraterritorial) |
| Approach | Risk-based, principles | Prescriptive requirements |
| Core functions | GOVERN, MAP, MEASURE, MANAGE | Risk classification → requirements |
| Enforcement | Self-certification | Market surveillance, fines |
| Fines | None | Up to €35M or 7% of global revenue |
| Timeline | Published 2023 | Effective Aug 2024 (phased) |

NIST AI RMF Core Functions: GOVERN (culture and accountability), MAP (context and risk identification), MEASURE (risk analysis and tracking), MANAGE (risk prioritization and response).

Key Alignment: both are risk-based, so a mature NIST AI RMF program covers much of the ground needed for EU AI Act conformity — but the Act adds mandatory documentation, registration, CE marking, and penalties.


Step 6: Implementation Timeline

EU AI Act Phased Rollout:

| Date | Milestone |
|---|---|
| Aug 2024 | Act enters into force |
| Feb 2025 | Prohibited practices take effect (Article 5) |
| Aug 2025 | GPAI obligations take effect |
| Aug 2026 | High-risk AI obligations (Annex III systems) |
| Aug 2027 | High-risk AI obligations (Annex I regulated products) |

Compliance Roadmap for Enterprise: inventory all AI systems, classify each by risk tier, run a gap analysis against the applicable requirements, implement controls, complete conformity assessment, then register and monitor.
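Given a date, the phased rollout above determines which obligations already apply. A minimal sketch (milestone labels are ours; dates follow the table above, using the month-level precision given there):

```python
from datetime import date

# Phased EU AI Act milestones and the date each begins to apply.
MILESTONES = [
    (date(2024, 8, 1), "Act in force"),
    (date(2025, 2, 1), "Prohibited practices (Article 5)"),
    (date(2025, 8, 1), "GPAI obligations"),
    (date(2026, 8, 1), "High-risk obligations (Annex III)"),
    (date(2027, 8, 1), "High-risk obligations (Annex I products)"),
]

def obligations_in_force(on: date) -> list:
    """Milestones whose application date has passed on the given date."""
    return [label for d, label in MILESTONES if d <= on]

print(obligations_in_force(date(2026, 1, 1)))
```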


Step 7: AI Impact Assessment

Modelled on the GDPR's DPIA (Data Protection Impact Assessment), but focused on AI-specific risks; the Act also requires a fundamental rights impact assessment for certain high-risk deployers (Article 27).

AI Impact Assessment Template:
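A minimal template sketch, loosely modelled on a DPIA. The field names are our assumptions for the lab, not the Act's official assessment structure:

```python
# Minimal AI impact assessment template, loosely modelled on a DPIA.
# Field names are illustrative, not the Act's official wording.

IMPACT_ASSESSMENT_TEMPLATE = {
    "system_description": "",       # purpose, scope, intended users
    "risk_tier": "",                # unacceptable / high / limited / minimal
    "affected_groups": [],          # e.g. job applicants, students
    "fundamental_rights_risks": [], # discrimination, privacy, due process
    "data_sources": [],             # provenance and bias review of datasets
    "human_oversight_measures": [], # who can intervene, and how
    "mitigations": [],              # controls for each identified risk
    "residual_risk": "",            # accepted risk after mitigations
    "review_date": None,            # periodic reassessment
}

def missing_fields(assessment: dict) -> list:
    """Fields left empty, i.e. still to be completed."""
    return [k for k, v in assessment.items() if v in ("", [], None)]

print(missing_fields(IMPACT_ASSESSMENT_TEMPLATE))
```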


Step 8: Capstone — Compliance Checker
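A minimal sketch of such a checker, combining tier classification with requirement-gap checks. All rules, domain names, and control labels here are illustrative simplifications, not legal advice:

```python
# Capstone sketch: risk-tier classification plus control-gap checking.
# Every rule below is an illustrative simplification of the Act.

HIGH_RISK_DOMAINS = {"employment", "credit", "education", "law_enforcement",
                     "migration", "justice", "critical_infrastructure"}
PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation",
                        "workplace_emotion_recognition"}
HIGH_RISK_CONTROLS = {"risk_management", "data_governance",
                      "technical_documentation", "transparency",
                      "human_oversight", "robustness"}

def check_compliance(practice: str, domain: str, controls: set) -> dict:
    """Return a small compliance report for one AI system."""
    if practice in PROHIBITED_PRACTICES:
        return {"tier": "UNACCEPTABLE", "compliant": False,
                "finding": "Prohibited under Article 5"}
    if domain in HIGH_RISK_DOMAINS:
        gaps = HIGH_RISK_CONTROLS - controls
        return {"tier": "HIGH", "compliant": not gaps,
                "finding": (f"Missing controls: {sorted(gaps)}" if gaps
                            else "All control areas present")}
    return {"tier": "MINIMAL/LIMITED", "compliant": True,
            "finding": "Transparency obligations may still apply"}

report = check_compliance("cv_screening", "employment",
                          {"risk_management", "human_oversight"})
print(report["tier"], report["compliant"])
print(report["finding"])
```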

📸 Verified Output:


Summary

| Concept | Key Points |
|---|---|
| Risk Tiers | Unacceptable (banned) → High → Limited → Minimal |
| High-Risk Requirements | Risk management, data governance, documentation, transparency, human oversight |
| GPAI | All models need documentation; >10^25 FLOPs → systemic-risk tier |
| CE Marking | Conformity assessment required before market placement for high-risk AI |
| Timeline | Prohibited: Feb 2025; GPAI: Aug 2025; High-risk: Aug 2026 |
| NIST vs EU | Voluntary framework vs mandatory law; aligned principles, different enforcement |
| Fines | Up to €35M or 7% of global annual turnover |

Next Lab: Lab 11: AI Cost Optimization →
