The Idea: Learn to Forget Wisely
An autoencoder learns by solving an unusual task: compress data, then reconstruct it as faithfully as possible. If that sounds circular, it is — by design. The magic is the bottleneck in the middle that forces compression.
Autoencoders are used in practice for denoising images, detecting anomalies in industrial sensors, and compressing data for storage. They are also the direct precursor to VAEs — the architecture that made controllable latent space manipulation possible. Understanding autoencoders is the first step toward understanding every modern generative model.
The architecture has two parts, composed as x̂ = g_φ(f_θ(x)):
- f_θ: encoder with parameters θ, maps input x to latent code z
- g_φ: decoder with parameters φ, maps latent code z back to the input space
- x: original input (e.g., a 784-dimensional image)
- z: latent code, much lower-dimensional than x
- x̂: reconstruction, the decoder's output
The latent code z has far fewer dimensions than x. If x has 784 dimensions (a 28×28 MNIST image) and z has 32, the bottleneck compresses by a factor of roughly 24×.
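The shapes involved can be sketched in a few lines. This is a minimal illustration, not a trained model: single linear layers with random weights stand in for the encoder and decoder, and the dimensions (784 and 32) follow the MNIST example above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions from the text: 784-pixel input, 32-dimensional bottleneck.
input_dim, latent_dim = 784, 32

# Encoder f_theta and decoder g_phi as single linear layers (a sketch;
# real autoencoders stack several nonlinear layers).
W_enc = rng.normal(0, 0.01, (latent_dim, input_dim))   # theta
W_dec = rng.normal(0, 0.01, (input_dim, latent_dim))   # phi

def encode(x):
    return np.tanh(W_enc @ x)   # z = f_theta(x)

def decode(z):
    return W_dec @ z            # x_hat = g_phi(z)

x = rng.random(input_dim)       # a fake flattened 28x28 image
z = encode(x)
x_hat = decode(z)

print(z.shape, x_hat.shape)        # (32,) (784,)
print(input_dim / latent_dim)      # 24.5 -- the ~24x compression factor
```

Note that the network never sees the compression factor directly; it is fixed by the architecture, which is exactly why the bottleneck cannot be bypassed.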
The Reconstruction Loss
Training minimizes how different the reconstruction is from the original:

L(θ, φ) = (1/n) Σᵢ ‖xᵢ − x̂ᵢ‖²

- L(θ, φ): total reconstruction loss over n training examples
- xᵢ: the i-th training example
- x̂ᵢ = g_φ(f_θ(xᵢ)): its reconstruction
- n: number of training examples
This is pixel-wise MSE. For binary data (black-and-white images, 0/1 pixels), binary cross-entropy is often preferred because each output pixel can be treated as a Bernoulli probability rather than an unbounded continuous value.
Concrete example. Suppose x has five pixels with values (0.0, 0.5, 1.0, 0.25, 0.75) and the reconstruction x̂ is (0.1, 0.4, 0.9, 0.35, 0.65). The squared errors are:

(0.1)² = 0.01 for each of the five pixels.

Loss = 0.05 for this example. Gradient descent nudges θ and φ to shrink this number.
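This arithmetic can be reproduced in a few lines of numpy. The pixel values here are illustrative, chosen only so that each per-pixel error is the same:

```python
import numpy as np

# Illustrative five-pixel example and its reconstruction.
x     = np.array([0.0, 0.5, 1.0, 0.25, 0.75])
x_hat = np.array([0.1, 0.4, 0.9, 0.35, 0.65])

sq_err = (x - x_hat) ** 2   # per-pixel squared errors
loss = sq_err.sum()         # squared-error loss for this one example

print(sq_err)   # each error is 0.1^2 = 0.01
print(loss)     # ~0.05
```

Averaging this per-example loss over all n training examples gives the L(θ, φ) defined above.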
What the Bottleneck Forces
Without the bottleneck, the network could copy straight through — a trivial identity mapping. The narrow middle layer prevents this. The encoder must find a lower-dimensional summary that retains enough information to reconstruct the input from the decoder side.
The compression is necessarily lossy: you keep what reconstruction needs; you discard what it doesn't. For natural images this means edges, colors, shapes — not pixel-level noise.
Practical Applications
Denoising Autoencoders
Corrupt the input: add Gaussian noise or randomly zero out pixels to get x̃. Feed x̃ to the encoder but train to reconstruct the clean x:

L = (1/n) Σᵢ ‖xᵢ − g_φ(f_θ(x̃ᵢ))‖²

- x̃: corrupted input
- x: clean target
The model learns to undo noise, which forces the latent code to represent only genuine signal. At test time, feed in a noisy image and get a clean reconstruction.
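A sketch of the two corruption schemes mentioned above, on a hypothetical batch of flattened images (random data standing in for real images):

```python
import numpy as np

rng = np.random.default_rng(1)

# A stand-in clean batch: 8 flattened 28x28 images with pixels in [0, 1].
x_clean = rng.random((8, 784))

# Corruption 1: additive Gaussian noise, clipped back to the valid range.
x_noisy = np.clip(x_clean + rng.normal(0, 0.3, x_clean.shape), 0.0, 1.0)

# Corruption 2: masking noise -- randomly zero out ~25% of the pixels.
mask = rng.random(x_clean.shape) < 0.25
x_masked = np.where(mask, 0.0, x_clean)

# Training pairs: the corrupted version is the input, the clean one the
# target, i.e. the loss compares decode(encode(x_noisy)) against x_clean.
print(x_noisy.shape, x_masked.shape)   # (8, 784) (8, 784)
```

The key design point is the asymmetry: corruption is applied only on the input side, so the network cannot succeed by memorizing noise.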
Anomaly Detection
Train on normal examples only (e.g., healthy medical images). The encoder-decoder pair learns a representation of normality. At test time, compute reconstruction error for a new sample. An anomaly lies off the learned manifold — the decoder cannot reconstruct it well, so error spikes. Threshold the error to flag anomalies.
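The thresholding step can be sketched as follows. Since no trained model is available here, small random perturbations stand in for the near-perfect reconstructions of normal samples, an unrelated random vector stands in for a failed reconstruction of an anomaly, and the threshold value is hypothetical (in practice it is tuned on validation data):

```python
import numpy as np

def reconstruction_error(x, x_hat):
    """Per-sample squared reconstruction error."""
    return ((x - x_hat) ** 2).sum(axis=1)

rng = np.random.default_rng(2)

# Stand-ins for a trained autoencoder's behavior: normal samples
# reconstruct almost perfectly, an off-manifold sample does not.
x_normal  = rng.random((5, 784))
x_hat_ok  = x_normal + rng.normal(0, 0.01, x_normal.shape)  # tiny error
x_anomaly = rng.random((1, 784))
x_hat_bad = rng.random((1, 784))                            # large error

errors = reconstruction_error(
    np.vstack([x_normal, x_anomaly]),
    np.vstack([x_hat_ok, x_hat_bad]),
)

threshold = 1.0          # hypothetical, tuned on held-out normal data
flags = errors > threshold
print(flags)             # only the last (anomalous) sample is flagged
```

Because only normal data is needed for training, this approach works even when anomalies are too rare or too varied to label.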
Feature Learning
The latent code is a compressed feature vector. These features can be used as inputs to a downstream classifier, often outperforming raw pixels — especially when labeled data is scarce but unlabeled data is plentiful.
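A sketch of that pipeline, with a fixed random projection standing in for a trained encoder and a toy nearest-centroid rule standing in for the downstream classifier (all data and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a trained encoder: a fixed linear map 784 -> 32.
W_enc = rng.normal(0, 0.05, (32, 784))

def encode(X):
    return np.tanh(X @ W_enc.T)

# A tiny labeled set; in practice the encoder would first be trained
# on a much larger pool of unlabeled data.
X_train = rng.random((20, 784))
y_train = np.array([0] * 10 + [1] * 10)

# Latent codes become the feature vectors for any downstream classifier,
# here a nearest-centroid rule over the two classes:
Z = encode(X_train)
centroids = np.stack([Z[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = ((encode(X)[:, None, :] - centroids[None]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

preds = predict(X_train)
print(preds.shape)   # (20,) -- class predictions from 32-dim features
```

The classifier never touches raw pixels; it operates entirely on the 32-dimensional codes, which is what makes the approach attractive when labels are scarce.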
The Fatal Flaw for Generation
Here is the problem: the latent space has no structure. The encoder maps inputs to latent codes through a deterministic function trained only to minimize reconstruction. Two nearly identical images might land at very different points in latent space. Large regions of latent space may correspond to nothing — the decoder was never trained on those values.

If you try to generate by sampling a random z and decoding it, you will mostly get incoherent noise, because most random points in latent space are far from any encoded training example.
The fix: constrain the encoder to produce latent codes that follow a known distribution — specifically a standard Gaussian N(0, I). If every point in latent space corresponds to plausible data, sampling becomes meaningful. This is the Variational Autoencoder, and building it carefully is the work of the next three lessons.
Interactive example
Encode MNIST digits and visualize the 2D latent space — note the unstructured scatter of class clusters
Coming soon