BatchNorm has a critical dependency: it needs a batch of examples to compute its statistics. At inference time we patched this with running averages, but the awkwardness reveals a deeper limitation. What if we want to normalize with no batch at all?
LayerNorm solves this by normalizing along a different axis entirely.
Layer normalization stabilizes transformer training — it is a required component in every modern transformer implementation, from BERT to GPT to LLaMA. Without it, deep transformers diverge during training. This is the normalization technique you will use most if you work with language models.
Flipping the Axis
In BatchNorm, for each feature $i$, we compute statistics by looking across all examples in the batch. In LayerNorm, for each example $x$, we compute statistics by looking across all of its features.
For a single example $x$ with $d$ features, $x = (x_1, x_2, \ldots, x_d)$:
Step 1 — Mean across features:
$$\mu = \frac{1}{d} \sum_{i=1}^{d} x_i$$
- $\mu$: mean of all features for this single example
- $d$: number of features
Step 2 — Variance across features:
$$\sigma^2 = \frac{1}{d} \sum_{i=1}^{d} (x_i - \mu)^2$$
- $\sigma^2$: variance of all features for this single example
Step 3 — Normalize:
$$\hat{x}_i = \frac{x_i - \mu}{\sqrt{\sigma^2 + \epsilon}}$$
- $\hat{x}_i$: normalized $i$-th feature
- $\epsilon$: small constant for numerical stability
Step 4 — Scale and shift:
$$y_i = \gamma_i \hat{x}_i + \beta_i$$
- $y_i$: LayerNorm output for feature $i$
- $\gamma_i$: learned scale for feature $i$
- $\beta_i$: learned shift for feature $i$
Notice: everything is computed using one example's own features. No other examples involved. This means LayerNorm works identically whether your batch size is 1 or 1000.
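To make the four steps concrete, here is a minimal NumPy sketch of the forward pass (the function name `layer_norm` and the `eps` default are illustrative choices, not a reference implementation):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """LayerNorm over the feature axis for a batch of shape (N, d).

    Each row is normalized using only its own mean and variance,
    so the result does not depend on what else is in the batch.
    """
    mu = x.mean(axis=-1, keepdims=True)       # Step 1: per-example mean, shape (N, 1)
    var = x.var(axis=-1, keepdims=True)       # Step 2: per-example variance, shape (N, 1)
    x_hat = (x - mu) / np.sqrt(var + eps)     # Step 3: normalize this example's own features
    return gamma * x_hat + beta               # Step 4: learned per-feature scale and shift

# Batch size 1 works exactly like batch size 1000:
d = 4
gamma, beta = np.ones(d), np.zeros(d)
out = layer_norm(np.array([[2.0, 4.0, 6.0, 8.0]]), gamma, beta)
```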
Worked Example
Single example: $x = (2, 4, 6, 8)$ (4 features).
Mean: $\mu = \frac{2 + 4 + 6 + 8}{4} = 5$
Variance: $\sigma^2 = \frac{(2-5)^2 + (4-5)^2 + (6-5)^2 + (8-5)^2}{4} = \frac{9 + 1 + 1 + 9}{4} = 5$
Normalize (ε = 0 for clarity): $\hat{x} = \frac{(2, 4, 6, 8) - 5}{\sqrt{5}} \approx (-1.342,\ -0.447,\ 0.447,\ 1.342)$
Verify: mean of $\hat{x}$ = 0 ✓, variance = 1 ✓.
No other examples were needed. This is the core advantage.
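A quick NumPy check of this arithmetic (printed values rounded in the comments):

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])
x_hat = (x - x.mean()) / np.sqrt(x.var())   # ε omitted, as in the worked example
print(x_hat)          # [-1.342 -0.447  0.447  1.342]
print(x_hat.mean())   # ~0.0
print(x_hat.var())    # ~1.0
```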
Why Transformers Use LayerNorm
A transformer processes sequences: each input is a sequence of tokens, and each token is represented by an embedding vector. The shape of the activations at each layer is $(B, T, D)$, where $B$ is batch size, $T$ is sequence length, and $D$ is the embedding dimension.
Why not BatchNorm? If you tried BatchNorm, you'd compute statistics for a given position $t$ by looking at that position across all sequences in the batch. But position 3 of sentence A ("the dog sat") and position 3 of sentence B ("photosynthesis involves") are semantically unrelated — mixing their statistics is meaningless. Furthermore, $T$ varies between sequences, making it unclear how to define a "batch" across positions at all.
LayerNorm fits naturally: for each token (each [example, position] pair), normalize over the $D$-dimensional embedding. Each token is self-contained. Variable sequence lengths are no problem.
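A short PyTorch sketch of this, assuming an activation tensor of shape $(B, T, D)$ with illustrative dimensions; `nn.LayerNorm(D)` normalizes over the last axis, i.e. over each token's embedding:

```python
import torch
import torch.nn as nn

B, T, D = 8, 128, 512          # batch size, sequence length, embedding dim (illustrative)
x = torch.randn(B, T, D)

ln = nn.LayerNorm(D)           # statistics computed over the last (embedding) dimension
y = ln(x)                      # shape (B, T, D): each of the B*T tokens normalized independently

# Every token vector now has (approximately) zero mean and unit variance:
print(y[0, 0].mean().item(), y[0, 0].var(unbiased=False).item())   # ~0.0, ~1.0
```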
Pre-LN vs Post-LN
The original transformer placed LayerNorm after adding the residual:
Post-LN: x → [Multi-Head Attention] → +residual → LayerNorm → output
Modern transformers use Pre-LN, which places LayerNorm before the sub-layer:
Pre-LN: x → LayerNorm → [Multi-Head Attention] → +residual → output
Why does the placement matter? In Post-LN, the LayerNorm sits on the main path after the residual addition, so gradients flowing back into a block must pass through LayerNorm before reaching either the attention weights or the residual shortcut. For very deep networks (100+ layers), this can cause gradient instability early in training, requiring careful learning-rate warm-up schedules.
In Pre-LN, each sub-layer sees normalized inputs, while the residual path itself is never normalized. Gradients can flow directly through the residual connections without passing through normalization. Training is more stable from the start — no warm-up required.
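The difference is easy to see in code. Here is a minimal sketch of the two orderings using only the attention sub-layer (class names are illustrative; the feed-forward sub-layer and dropout are omitted):

```python
import torch.nn as nn

class PostLNBlock(nn.Module):
    """Original ordering: sub-layer, then residual add, then LayerNorm."""
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)
        return self.norm(x + attn_out)     # normalize *after* the residual add

class PreLNBlock(nn.Module):
    """Modern ordering: LayerNorm first; the residual path stays unnormalized."""
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        h = self.norm(x)                   # normalize *before* the sub-layer
        attn_out, _ = self.attn(h, h, h)
        return x + attn_out                # gradients flow straight through this add
```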
BatchNorm vs LayerNorm: When to Use Which
| Criterion | BatchNorm | LayerNorm |
|---|---|---|
| Works with batch size 1 | ✗ | ✓ |
| Works for variable-length sequences | ✗ | ✓ |
| Separate train/inference behavior | ✓ (complex) | ✗ (same) |
| Best for CNNs with large batches | ✓ | ✗ |
| Best for transformers/RNNs | ✗ | ✓ |
| Regularization effect | Strong | Mild |
The fundamental rule: BatchNorm when you have large fixed-size batches and no sequence structure (image CNNs). LayerNorm when you have sequences, variable lengths, or small batches (transformers, language models, on-device inference).