The Setup: Binary Classification
Imagine building a spam filter. Every email either is spam (label 1) or isn't (label 0). You have features — word counts, sender reputation, message length — and want a model that produces the correct label.
This is binary classification: the output is $y \in \{0, 1\}$ — exactly two choices.
Your first instinct: I already know linear regression. Why not just fit a hyperplane through the data and predict 1 if the output exceeds 0.5, predict 0 otherwise?
That instinct fails for three distinct reasons. Each failure is concrete and instructive.
Problem 1: Predictions Escape [0, 1]
Linear regression outputs $\hat{y} = w^\top x + b$. That expression is unbounded — it can produce values like $-3.7$, $0.2$, or $42$.
- $\hat{y}$ — model prediction; an unbounded real number for linear regression
- $w$ — weight vector
- $b$ — bias term
These values make sense as house prices. They make no sense as probabilities. A "probability" of $-3.7$ or $42$ is mathematically meaningless. You'd have to clamp the output to $[0, 1]$ after the fact, and that clamp is completely arbitrary — the model was never trained to stay inside that range.
We want $\hat{y} \in [0, 1]$ — a genuine probability that represents $P(y = 1 \mid x)$. Linear regression cannot guarantee this.
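To see the arbitrariness of clamping concretely, here is a minimal sketch (the data values are invented for illustration): a fitted line produces out-of-range "probabilities", and forcing them into range with `np.clip` just manufactures 0.0 and 1.0 values the model never learned.

```python
import numpy as np

# Illustrative 1-D data: feature values with binary labels
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0, 0, 0, 1, 1, 1])

slope, intercept = np.polyfit(x, y, 1)

# Predictions far from the training range escape [0, 1]
x_new = np.array([-5.0, 10.0])
raw = slope * x_new + intercept
print("Raw predictions:", raw)  # one below 0, one above 1

# Clipping forces them into range, but the resulting 0.0 and 1.0 are
# artifacts of the clamp, not probabilities the model learned
clipped = np.clip(raw, 0.0, 1.0)
print("Clipped:", clipped)
```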
Problem 2: MSE Gives Wrong Gradients for Class Labels
The MSE loss behaves badly when labels are 0/1. MSE — the mean squared error we used for regression — penalizes predictions proportionally to how far they are from the true label, but when labels are strictly 0 or 1, this creates specific problems.
Concrete outlier problem: Suppose you have spam emails clustered near $x = 3$ (with label 1) and legitimate emails near $x = 1$ (label 0). Your linear model fits this well. Now you add one extreme spam email at $x = 20$.
The regression line tilts dramatically to pull its prediction for $x = 20$ closer to 1. In doing so, it pushes the predictions for the cluster near $x = 3$ away from 1 — hurting the clearly correct region. The decision boundary slides, even though the data near $x = 3$ was already correctly classified.
MSE also incurs nonzero loss on correct, confident predictions: if $y = 1$ and $\hat{y} = 0.9$, the loss is $(0.9 - 1)^2 = 0.01$ — small but nonzero. The optimizer still nudges the model even for correct answers, which can destabilize the boundary.
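The nonzero-loss point can be checked directly. A sketch (numbers chosen for illustration): the per-example squared error, and its gradient with respect to the prediction, both stay nonzero even when the prediction is confidently correct.

```python
# Per-example MSE and its gradient w.r.t. the prediction
y_true = 1.0
y_hat = 0.9  # confident and correct: thresholding at 0.5 classifies this right

loss = (y_hat - y_true) ** 2   # (0.9 - 1)^2 = 0.01, small but nonzero
grad = 2 * (y_hat - y_true)    # -0.2: gradient descent still moves the weights

print(f"loss = {loss:.4f}, gradient = {grad:.4f}")
```

Because the gradient is nonzero, every update step keeps tugging on weights that were already producing the right classification.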
Problem 3: No Natural Decision Threshold
With linear regression, you'd add "predict 1 if $\hat{y} > 0.5$" as an external rule tacked on after training. But the model was never trained with this threshold in mind. Nothing in the MSE objective cares about where 0.5 falls relative to the outputs.
A model trained to minimize MSE on labels 0 and 1 might produce outputs primarily in a narrow band like $[0.2, 0.8]$ — the 0.5 boundary might split the data perfectly, or it might not. The model has no idea it's supposed to be producing probabilities that straddle 0.5.
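Here is a hedged illustration of that failure (the dataset is invented for the example): with imbalanced labels, the MSE-fit line can sit entirely below 0.5, so the "predict 1 if above 0.5" rule never fires — not even for the actual positive example.

```python
import numpy as np

# Imbalanced toy data: eight negatives, one positive
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1])

slope, intercept = np.polyfit(x, y, 1)
preds = slope * x + intercept

print("Max prediction:", preds.max())                # stays below 0.5
print("Predicted positives:", np.sum(preds > 0.5))   # 0 — threshold never fires
```

Every example, including the true positive at $x = 9$, is predicted as class 0; the 0.5 threshold is meaningless to this model.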
What We Actually Want
We want a model that directly outputs $\hat{y} = P(y = 1 \mid x)$ — the probability that example $x$ belongs to class 1. This is a number in $[0, 1]$ by definition. When it exceeds a threshold (usually 0.5, but adjustable), we predict class 1.
The model needs to:
- Output a bounded value in $[0, 1]$
- Be trained with a loss designed for probabilities, not for real-valued targets
- Have a principled threshold emerge naturally from the training objective
Code: Seeing the Failures in Python
```python
import numpy as np

# Small classification dataset: feature = hours of practice, label = pass/fail
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1])  # binary labels

# Fit a linear regression line (slope and intercept)
slope, intercept = np.polyfit(x, y, 1)

# Problem 1: predictions can escape [0, 1]
x_test = np.array([-2.0, 0.0, 6.0, 10.0])
y_pred = slope * x_test + intercept
print("Linear predictions:", np.round(y_pred, 2))
# [-0.94 -0.28  1.72  3.06] — values outside [0, 1] are meaningless as probabilities

# Problem 2: an outlier shifts the decision boundary
x_outlier = np.append(x, 20.0)  # add one extreme example far to the right
y_outlier = np.append(y, 1.0)
slope2, intercept2 = np.polyfit(x_outlier, y_outlier, 1)

# Where each line crosses 0.5 (the "decision boundary")
boundary_original = (0.5 - intercept) / slope
boundary_shifted = (0.5 - intercept2) / slope2
print(f"Boundary without outlier: x ≈ {boundary_original:.2f}")  # ≈ 2.33
print(f"Boundary with outlier:    x ≈ {boundary_shifted:.2f}")   # ≈ 1.83
# The outlier at x=20 flattens the line and drags the boundary leftward,
# so x=2.0 (label 0) now lands above 0.5 and is misclassified
```
The Bridge: Sigmoid
We still want to use the linear combination $w^\top x + b$ — it's fast and powerful. We just need to squash its unbounded output into $[0, 1]$ in a smooth, differentiable way. That's exactly what the sigmoid function does — the subject of the next lesson.
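As a preview, here is a minimal sketch of the standard sigmoid $\sigma(z) = 1/(1 + e^{-z})$ (the details are next lesson's topic): it maps any real input into the open interval (0, 1), with $\sigma(0) = 0.5$.

```python
import numpy as np

def sigmoid(z):
    """Squash any real number into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
p = sigmoid(z)
print(p)  # every value strictly between 0 and 1; sigmoid(0) is exactly 0.5
```

Note how the extreme inputs $-10$ and $10$ land very close to 0 and 1 without ever reaching them — no arbitrary clamp required.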