Lab 11: Generative Models — VAE for Anomaly Detection

Objective

Build Variational Autoencoders (VAEs) and survey the generative-model landscape: VAE theory, the ELBO loss, the reparameterisation trick, latent-space interpolation, and applying VAEs to network-traffic anomaly detection.

Time: 50 minutes | Level: Advanced | Docker Image: zchencow/innozverse-ai:latest


Background

Autoencoder:   encode → bottleneck → decode  (deterministic)
VAE:           encode → μ, σ → sample z ~ N(μ, σ²) → decode  (probabilistic)

Key insight: VAE learns a smooth, structured latent space.
- Normal traffic clusters tightly → low reconstruction error
- Anomalies fall outside learned manifold → high reconstruction error + KL divergence

ELBO = E_{q(z|x)}[log p(x|z)] - KL(q(z|x) || p(z))
     = Reconstruction term    - Regularisation term
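With a standard Gaussian prior p(z) = N(0, I) and a diagonal-Gaussian encoder, the KL term has a closed form, so the negative ELBO can be computed directly as a training loss. A minimal sketch in PyTorch — the function name `vae_loss` and the MSE reconstruction likelihood are illustrative assumptions, not the lab's fixed API:

```python
import torch
import torch.nn.functional as F

def vae_loss(x_hat, x, mu, logvar):
    # Reconstruction term: -E[log p(x|z)], here a Gaussian likelihood via summed MSE
    recon = F.mse_loss(x_hat, x, reduction="sum")
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian with params (mu, logvar)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Minimising (recon + kl) maximises the ELBO
    return recon + kl
```

Note that when the reconstruction is perfect and the posterior matches the prior (μ = 0, log σ² = 0), both terms vanish.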

Step 1: VAE Implementation
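A minimal sketch of the kind of model this step builds, assuming flat per-flow feature vectors; the layer sizes, class name `VAE`, and head names `fc_mu` / `fc_logvar` are illustrative assumptions:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal VAE for flat feature vectors (e.g. per-flow traffic features)."""
    def __init__(self, in_dim=20, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.fc_mu = nn.Linear(64, latent_dim)      # μ head
        self.fc_logvar = nn.Linear(64, latent_dim)  # log σ² head
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def reparameterise(self, mu, logvar):
        # Reparameterisation trick: z = μ + σ·ε with ε ~ N(0, I),
        # which keeps sampling differentiable w.r.t. μ and σ
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterise(mu, logvar)
        return self.decoder(z), mu, logvar
```

The two heads share one encoder body; the reparameterisation trick is what lets gradients flow through the sampling step during training.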

📸 Verified Output:


Step 2: Latent Space Visualisation
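A sketch of one way to visualise the latent space, assuming a trained model that exposes an `encoder` body and a mean head `fc_mu` (both names are assumptions) and a 2-D (or at least 2-D) latent; the encoder means μ are plotted rather than sampled z to suppress sampling noise:

```python
import torch
import matplotlib
matplotlib.use("Agg")  # headless backend for the container
import matplotlib.pyplot as plt

def plot_latent(model, x, labels, path="latent.png"):
    # Embed each sample at its posterior mean μ = fc_mu(encoder(x))
    with torch.no_grad():
        mu = model.fc_mu(model.encoder(x))
    plt.figure(figsize=(5, 4))
    plt.scatter(mu[:, 0], mu[:, 1], c=labels, cmap="coolwarm", s=8)
    plt.xlabel("z[0]")
    plt.ylabel("z[1]")
    plt.title("VAE latent space (encoder means)")
    plt.savefig(path)
    plt.close()
```

If the KL term has done its job, normal traffic should form tight clusters near the origin, with anomalies scattered away from them.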

📸 Verified Output:


Step 3–8: Capstone — Real-Time VAE Anomaly Detector
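The detector's core is a per-sample anomaly score — reconstruction error plus the KL term, i.e. the negative ELBO up to constants — thresholded at a quantile calibrated on normal traffic only. A hedged sketch, assuming a model whose forward pass returns `(x_hat, mu, logvar)`; the function names and the 0.99 default quantile are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def anomaly_scores(model, x):
    # Per-sample score: reconstruction error + KL term (negative ELBO up to constants)
    x_hat, mu, logvar = model(x)
    recon = F.mse_loss(x_hat, x, reduction="none").sum(dim=1)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    return recon + kl

def fit_threshold(scores, quantile=0.99):
    # Calibrate on normal traffic only: flag anything above the q-th quantile
    return torch.quantile(scores, quantile).item()
```

In a streaming setting the same scoring function runs per batch of incoming flows; anything scoring above the calibrated threshold is flagged for inspection.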

📸 Verified Output:


Summary

| Model       | Latent Space  | Anomaly Score         | Best For                             |
|-------------|---------------|-----------------------|--------------------------------------|
| Autoencoder | Deterministic | Reconstruction MSE    | Simple anomaly detection             |
| VAE         | Probabilistic | ELBO + reconstruction | Structured latent space, generation  |
| β-VAE       | Disentangled  | Weighted ELBO         | Interpretable latent factors         |

Further Reading
