The previous lesson established why we need normalization — activation distributions explode, vanish, and drift during training. Now let's build the BatchNorm algorithm from scratch, step by step, so every piece makes sense before the next one appears.
Batch normalization was the breakthrough that made training very deep networks practical. Before it, networks with 20+ layers were almost impossible to train reliably. After it, researchers started training 100+ layer networks. It remains a standard component in most CNN architectures today.
Setup: What We're Normalizing
Consider a single layer producing output activations. We have a mini-batch of m examples. For each example, the layer produces a vector of features. Let's focus on one feature across all examples, giving us values x₁, x₂, …, xₘ.
BatchNorm normalizes this feature using statistics computed from the current batch. The full algorithm has four steps.
The Four Steps
Step 1 — Compute batch mean:

$$\mu = \frac{1}{m} \sum_{i=1}^{m} x_i$$

- μ: batch mean for this feature
- m: number of examples in the mini-batch
- xᵢ: activation value for example i
Step 2 — Compute batch variance:

$$\sigma^2 = \frac{1}{m} \sum_{i=1}^{m} (x_i - \mu)^2$$

- σ²: batch variance for this feature
Step 3 — Normalize:

$$\hat{x}_i = \frac{x_i - \mu}{\sqrt{\sigma^2 + \epsilon}}$$

- x̂ᵢ: normalized activation for example i
- ε: small constant for numerical stability, typically 1e-5

After step 3, the normalized values x̂ᵢ have mean 0 and variance 1 across the batch. The ε prevents division by zero when all values in the batch are identical.
Step 4 — Scale and shift:

$$y_i = \gamma \hat{x}_i + \beta$$

- yᵢ: final BatchNorm output for example i
- γ: learned scale parameter, initialized to 1
- β: learned shift parameter, initialized to 0

The parameters γ and β are initialized to 1 and 0 respectively (which leaves the normalized values unchanged at the start of training).
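To make the four steps concrete, here is a minimal NumPy sketch for a single feature. The function name batchnorm_feature and the sample values are illustrative, not taken from any particular library.

```python
import numpy as np

def batchnorm_feature(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Apply the four BatchNorm steps to one feature across a mini-batch.

    x:           1-D array, one activation per example for a single feature
    gamma, beta: learned scale and shift (scalars, since this is one feature)
    """
    mu = x.mean()                          # step 1: batch mean
    var = x.var()                          # step 2: batch variance (divides by m)
    x_hat = (x - mu) / np.sqrt(var + eps)  # step 3: normalize
    return gamma * x_hat + beta            # step 4: scale and shift

# With the default gamma = 1, beta = 0, the output is just the normalized values.
x = np.array([1.0, 1.0, 5.0, 5.0])
print(batchnorm_feature(x))                # approximately [-1, -1, 1, 1]
```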
Why γ and β Are Essential
You might think: "Doesn't adding γ and β undo the normalization?" They can, and that's exactly the point.
Without γ and β, every layer is forced to produce outputs with mean 0 and variance 1. This severely limits what functions the network can represent. Some tasks genuinely benefit from a layer outputting values with mean 5 and variance 0.1. Forcing zero mean and unit variance everywhere removes that flexibility.
With γ and β, the network can choose how much normalization to apply. If the optimal behavior for some layer is to have no normalization at all, gradient descent will push γ toward √(σ² + ε) and β toward μ, effectively recovering the original distribution. If some amount of normalization helps, γ and β settle at intermediate values.
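A quick numeric check of that recovery claim, using illustrative values (a sketch, not library code):

```python
import numpy as np

x = np.array([1.0, 1.0, 5.0, 5.0])   # activations for one feature
eps = 1e-5
x_hat = (x - x.mean()) / np.sqrt(x.var() + eps)

gamma = np.sqrt(x.var() + eps)       # scale set to the batch standard deviation
beta = x.mean()                      # shift set to the batch mean
print(gamma * x_hat + beta)          # recovers the original [1. 1. 5. 5.]
```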
Worked Example: Batch of 4
Say a single feature produces the values 1, 1, 5, 5 for 4 training examples.
Mean: μ = (1 + 1 + 5 + 5) / 4 = 3
Variance: σ² = ((1 − 3)² + (1 − 3)² + (5 − 3)² + (5 − 3)²) / 4 = (4 + 4 + 4 + 4) / 4 = 4
Normalize (using ε = 0 for clarity): x̂ = (x − 3) / √4, giving x̂ = [−1, −1, 1, 1]
Verify: mean of x̂ = 0 ✓, variance = 1 ✓.
Scale and shift (say γ = 2, β = 1): y = 2 · x̂ + 1 = [−1, −1, 3, 3]
Now the output has mean 1 and variance 4 — controlled by γ and β.
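The same arithmetic checked in NumPy, with the values and parameter settings from the example above:

```python
import numpy as np

x = np.array([1.0, 1.0, 5.0, 5.0])
mu, var = x.mean(), x.var()        # 3.0 and 4.0
x_hat = (x - mu) / np.sqrt(var)    # epsilon omitted for clarity
y = 2.0 * x_hat + 1.0              # gamma = 2, beta = 1
print(x_hat)                       # [-1. -1.  1.  1.]
print(y, y.mean(), y.var())        # [-1. -1.  3.  3.] 1.0 4.0
```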
Applying BatchNorm to Multiple Features
In practice, a layer produces an activation vector, not a single scalar. Say each example has C features. BatchNorm applies the four-step algorithm to each of the C features independently. Feature 1 gets its own μ and σ² from the batch; feature 2 gets its own; and so on.
This means BatchNorm adds exactly 2C parameters (one γ and one β per feature). For a layer with 512 features, that's 1,024 extra parameters — tiny compared to the weight matrix.
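A sketch of the same algorithm applied to a full activation matrix, normalizing each feature (column) independently; the function name and shapes here are illustrative:

```python
import numpy as np

def batchnorm_forward(X, gamma, beta, eps=1e-5):
    """BatchNorm over a mini-batch of activation vectors.

    X:     (m, C) array -- m examples, C features per example
    gamma: (C,) learned scale, one per feature
    beta:  (C,) learned shift, one per feature
    """
    mu = X.mean(axis=0)                    # per-feature batch mean, shape (C,)
    var = X.var(axis=0)                    # per-feature batch variance, shape (C,)
    X_hat = (X - mu) / np.sqrt(var + eps)  # broadcasts over the batch dimension
    return gamma * X_hat + beta

C = 512
gamma, beta = np.ones(C), np.zeros(C)      # identity transform at initialization
X = 3.0 * np.random.randn(32, C) + 5.0     # a batch of 32 drifted activations
Y = batchnorm_forward(X, gamma, beta)
print(Y.mean(axis=0)[:3], Y.var(axis=0)[:3])  # each feature: ~0 mean, ~1 variance
print(gamma.size + beta.size)                 # 1,024 extra parameters when C = 512
```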
Summary of the Algorithm
The full BatchNorm operation for one feature across mini-batch examples x₁, …, xₘ:
- Compute batch mean: $\mu = \frac{1}{m} \sum_{i=1}^{m} x_i$
- Compute batch variance: $\sigma^2 = \frac{1}{m} \sum_{i=1}^{m} (x_i - \mu)^2$
- Normalize: $\hat{x}_i = \frac{x_i - \mu}{\sqrt{\sigma^2 + \epsilon}}$
- Scale and shift: $y_i = \gamma \hat{x}_i + \beta$
The learned parameters γ and β are updated by backprop just like any other weight. The normalization in step 3 is a deterministic function of the batch, so gradients flow through it cleanly.
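If PyTorch is available, you can see this directly (a small illustrative check): nn.BatchNorm1d stores γ as weight and β as bias, and an ordinary backward pass produces gradients for both.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(num_features=512)      # gamma -> bn.weight, beta -> bn.bias
print(bn.weight[:3], bn.bias[:3])          # start at 1s and 0s

x = torch.randn(32, 512)                   # mini-batch of 32 examples
loss = bn(x).pow(2).mean()                 # any scalar loss works for this check
loss.backward()
print(bn.weight.grad.shape, bn.bias.grad.shape)  # torch.Size([512]) for both
```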
Next lesson: what happens to this algorithm at inference time — because there's no "batch" anymore, and this requires a careful fix.