Lab 5: Neural Networks Demystified
Objective
The Neuron: One Unit of Computation
w₁
x₁ ───────────────▶ × ⎤
w₂ ⎥
x₂ ───────────────▶ × ⎥──▶ Σ ──▶ f(Σ) ──▶ output
w₃ ⎥
x₃ ───────────────▶ × ⎦
                      + bias (b)

output = f(w₁x₁ + w₂x₂ + w₃x₃ + b)

Activation Functions: The Non-Linearity Source
Function | Formula | Shape | Used In
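The table's rows did not survive extraction, but the idea can be sketched in code. Below are three activation functions such a table commonly lists (the exact set here is an assumption), wired into the neuron equation from the diagram above:

```python
import math

# Common activation choices (illustrative; the original table's rows are assumed):
def sigmoid(z): return 1.0 / (1.0 + math.exp(-z))  # S-shaped, squashes to (0, 1)
def tanh(z):    return math.tanh(z)                # S-shaped, squashes to (-1, 1)
def relu(z):    return max(0.0, z)                 # hinge: zero for negative inputs

def neuron(x, w, b, f=relu):
    """output = f(w1*x1 + w2*x2 + w3*x3 + b), as in the diagram above."""
    return f(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

For example, `neuron([1.0, 2.0, 3.0], [0.5, -0.2, 0.1], b=0.3)` computes a weighted sum of ≈ 0.7 and, since that is positive, ReLU passes it through unchanged. Without a non-linear `f`, stacked neurons would collapse into one big linear function.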
Layers: Building Abstraction
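A layer is just many neurons reading the same inputs, and stacking layers lets later ones build on the features earlier ones extract. A minimal sketch (weights, sizes, and the ReLU default are illustrative):

```python
def dense_layer(x, W, b, f=lambda z: max(0.0, z)):
    """Fully connected layer: row i of W holds the weights of neuron i,
    so every output unit reads the same input vector x."""
    return [f(sum(wij * xj for wij, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

# Two stacked layers: 3 inputs -> 2 hidden units -> 1 output.
x = [1.0, 0.5, -1.0]
hidden = dense_layer(x, W=[[0.2, 0.4, 0.1], [0.7, -0.3, 0.5]], b=[0.0, 0.1])
output = dense_layer(hidden, W=[[1.0, -1.0]], b=[0.0])
```

The hidden layer turns raw inputs into intermediate features; the output layer combines those features, which is the abstraction-building this section describes.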
The Loss Function: Measuring Wrongness
Loss Function | Use Case
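The rows of the table above are missing, but the two losses such a table most often pairs with regression and classification can be sketched directly (the pairing here is an assumption):

```python
import math

def mse(y_true, y_pred):
    """Mean squared error - the usual regression loss: average of
    squared differences between targets and predictions."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, p_pred):
    """Cross-entropy - the usual classification loss: confident wrong
    predictions (p near 0 when the label is 1) are punished sharply."""
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y_true, p_pred)) / len(y_true)
```

A perfect prediction gives zero loss in both cases; everything the network learns comes from pushing this single number down.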
Backpropagation: Learning by Attribution
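Backpropagation is the chain rule applied backwards through the computation: each operation passes the gradient it receives on to its inputs, scaled by its own local derivative. A worked sketch on a toy function (the function itself is illustrative, chosen so the numbers are exact):

```python
def forward(w, x, b):
    s = w * x + b        # linear step
    y = s ** 2           # "activation" (a square, for simplicity)
    return s, y

def backward(w, x, b):
    """Chain rule backwards: dy/dw = (dy/ds) * (ds/dw), dy/db = (dy/ds) * (ds/db)."""
    s, _ = forward(w, x, b)
    dy_ds = 2 * s        # local derivative of the square
    return dy_ds * x, dy_ds * 1.0

dw, db = backward(w=2.0, x=3.0, b=1.0)   # s = 7, so dw = 2*7*3 = 42, db = 2*7 = 14
```

This is "learning by attribution": the gradients say exactly how much blame for the output each of `w` and `b` carries, and a deep network just chains many such local derivatives together.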
Optimisers: How Weights Update
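Every optimiser answers the same question: given the gradients backpropagation produced, how far and in which direction should each weight move? Two common update rules, sketched minimally (learning rate and momentum values are illustrative):

```python
def sgd_step(weights, grads, lr=0.1):
    """Plain stochastic gradient descent: step each weight against its gradient."""
    return [w - lr * g for w, g in zip(weights, grads)]

def momentum_step(weights, grads, velocity, lr=0.1, beta=0.9):
    """Momentum: keep a running velocity, so gradients that consistently
    point the same way accelerate the update instead of restarting it."""
    velocity = [beta * v - lr * g for v, g in zip(velocity, grads)]
    return [w + v for w, v in zip(weights, velocity)], velocity
```

Adaptive methods like Adam extend this pattern with per-weight running statistics, but the core loop is always: compute gradients, apply an update rule, repeat.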
Architectural Families
Convolutional Neural Networks (CNNs) — for Images
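The defining trick of a CNN is weight sharing: one small kernel slides across the whole input, so the same pattern detector is applied at every position. A 1-D sketch makes this visible (real image CNNs use 2-D kernels, but the mechanics are identical):

```python
def conv1d(signal, kernel):
    """Slide one small kernel across the input, reusing the same weights
    at every position - the weight sharing at the heart of convolutions."""
    k = len(kernel)
    return [sum(kernel[i] * signal[j + i] for i in range(k))
            for j in range(len(signal) - k + 1)]

# A [-1, 1] kernel acts as an edge detector: it fires where the signal changes.
edges = conv1d([0.0, 0.0, 1.0, 1.0, 0.0], [-1.0, 1.0])  # -> [0.0, 1.0, 0.0, -1.0]
```

Note the output is zero on flat regions and non-zero exactly at the rising and falling edges; a trained CNN learns kernels like this for textures, corners, and eventually whole object parts.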
Recurrent Neural Networks (RNNs/LSTMs) — for Sequences
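A recurrent network processes a sequence one step at a time, carrying a hidden state that summarises everything seen so far. A minimal sketch with scalar state (the weights are illustrative; LSTMs add gates to this same loop to preserve long-range information):

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=1.0, b=0.0):
    """One recurrent step: the new hidden state mixes the previous hidden
    state with the current input, through the same weights at every timestep."""
    return math.tanh(w_h * h + w_x * x + b)

h = 0.0                       # initial hidden state
for x in [1.0, 0.0, 1.0]:     # a short input sequence
    h = rnn_step(h, x)        # h now carries a summary of everything seen so far
```

Because gradients must flow backwards through every one of these steps, plain RNNs struggle with long sequences, which is the problem LSTMs (and later, attention) were built to ease.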
Transformers — for Everything
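The core of a transformer is scaled dot-product attention: every position scores every other position, the scores become weights via a softmax, and the output is a weighted average of the values. A self-contained sketch (vector sizes are illustrative):

```python
import math

def softmax(xs):
    m = max(xs)                              # subtract the max for stability
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores every key, the
    scores become weights via softmax, and the output is the weighted
    average of the values."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out
```

Unlike an RNN, nothing here is sequential: every position attends to every other in one shot, which is why transformers parallelise so well and now dominate far beyond language.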
Overfitting vs Underfitting
Technique | How It Works
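One regularisation technique such a table usually includes is dropout. A minimal sketch of the "inverted dropout" variant (the drop rate and scaling scheme are the standard ones, assumed here):

```python
import random

def dropout(activations, p=0.5, training=True, rng=random):
    """Randomly zero a fraction p of units during training so the network
    cannot over-rely on any single feature. Survivors are scaled by 1/(1-p)
    so the expected activation is unchanged (inverted dropout); at inference
    time the layer is a no-op."""
    if not training:
        return list(activations)
    return [0.0 if rng.random() < p else a / (1.0 - p) for a in activations]
```

Because each forward pass sees a different random sub-network, the model is pushed towards redundant, robust features rather than memorising the training set - the overfitting failure mode this section describes.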
Summary
Concept | One-line Description
Further Reading