How far can values stray?
You run a web server. The average response time is 200ms. Your boss asks: "What fraction of requests take more than 1 second?" You don't know the full distribution — just the mean.
Can you still say something useful?
Markov's inequality
If $X \geq 0$ and $a > 0$:

$$P(X \geq a) \leq \frac{E[X]}{a}$$

That's it. One line. The only requirement: $X$ is non-negative.
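A quick numeric check on the web-server example from the opening: mean response time 0.2s, threshold 1s, so Markov bounds the tail by 0.2/1 = 20%. The exponential distribution below is purely an illustrative assumption; Markov itself needs only the mean and non-negativity.

```python
import random

random.seed(0)

mean_response = 0.2   # average response time in seconds (from the example)
threshold = 1.0       # "more than 1 second"

# Markov: P(T >= a) <= E[T] / a, valid because response times are non-negative.
markov_bound = mean_response / threshold   # = 0.2, i.e. at most 20%

# Sanity-check against one plausible distribution (exponential is an
# assumption; any non-negative distribution with this mean must obey the bound).
samples = [random.expovariate(1 / mean_response) for _ in range(100_000)]
empirical = sum(t >= threshold for t in samples) / len(samples)

print(f"Markov bound:   {markov_bound:.3f}")
print(f"Empirical tail: {empirical:.4f}")   # theory: exp(-5) ~ 0.0067 for an exponential
assert empirical <= markov_bound
```

The gap between 20% and the true exponential tail (about 0.7%) previews the main theme: distribution-free bounds are valid but conservative.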
The intuition
If the average is , then values can't mostly be far above — there wouldn't be enough "mass" left for the average to work out. Markov makes this precise.
Proof (three lines)
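The proof steps themselves did not survive extraction here; what follows is the standard textbook argument, which does fit in three lines. Split the expectation over the event $\{X \ge a\}$ and drop the non-negative remainder:

```latex
\begin{align*}
E[X] &= E\big[X \,\mathbf{1}\{X \ge a\}\big] + E\big[X \,\mathbf{1}\{X < a\}\big] \\
     &\ge E\big[X \,\mathbf{1}\{X \ge a\}\big] \\
     &\ge a\, E\big[\mathbf{1}\{X \ge a\}\big] = a\, P(X \ge a).
\end{align*}
```

Divide both sides by $a$ to get $P(X \ge a) \le E[X]/a$. Non-negativity of $X$ is what lets us drop the second term in the first line.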
Chebyshev's inequality
Markov only uses the mean. If you also know the variance, you can do much better.
For any random variable $X$ with mean $\mu$ and variance $\sigma^2$, and for any $k > 0$:

$$P(|X - \mu| \geq k\sigma) \leq \frac{1}{k^2}$$

Equivalently: $P(|X - \mu| \geq t) \leq \sigma^2 / t^2$ for any $t > 0$.
Proof from Markov
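The derivation body is missing here; the standard route is to apply Markov to the non-negative random variable $(X - \mu)^2$, whose mean is exactly $\sigma^2$:

```latex
\begin{align*}
P(|X - \mu| \ge k\sigma)
  &= P\big((X - \mu)^2 \ge k^2\sigma^2\big) \\
  &\le \frac{E[(X - \mu)^2]}{k^2\sigma^2}
   = \frac{\sigma^2}{k^2\sigma^2} = \frac{1}{k^2}.
\end{align*}
```

The first equality holds because squaring both sides of $|X - \mu| \ge k\sigma$ preserves the event; the inequality is Markov applied with $a = k^2\sigma^2$.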
Comparing bounds to reality
Compare each bound to the exact tail probability of a standard Normal. Chebyshev is usually tighter than Markov because it uses more information (the variance), but both are conservative — that's the price of distribution-free guarantees.
| $k$ (σ's from mean) | Chebyshev bound | Normal exact | Bound / Exact |
|---|---|---|---|
| 1 | 100% | 31.7% | 3.2× |
| 2 | 25% | 4.6% | 5.5× |
| 3 | 11.1% | 0.27% | 41× |
| 4 | 6.25% | 0.006% | 988× |
The bounds get looser as $k$ grows. That's not a bug — Chebyshev must hold for every distribution. The Normal is well-behaved; the bound accounts for the worst-case shape.
When to use which
| Situation | Use |
|---|---|
| Only know E[X], X ≥ 0 | Markov |
| Know E[X] and Var(X) | Chebyshev |
| Know the distribution | Compute exactly! |
Markov requires $X \geq 0$. Chebyshev works for any distribution with finite variance. Neither requires knowing the distribution's shape.
Applications
Quality control
A machine produces bolts with mean diameter 10mm and SD 0.1mm. Chebyshev guarantees that at most a $1/k^2$ fraction deviate from 10mm by more than $0.1k$ mm.
Financial risk
A portfolio has expected return 8% and SD 15%. Markov doesn't apply directly to P(loss > 20%) because returns can be negative (shift by the mean first). Chebyshev applies as-is: 30% is 2σ, so P(|return − 8%| > 30%) ≤ 1/4.
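Both applications reduce to one-line calculations. The 0.5mm tolerance in the bolt example is a hypothetical choice for illustration; the portfolio numbers are from the text:

```python
# Quality control: bolts with mean 10 mm, SD 0.1 mm.
# Chebyshev: at most 1/k^2 of bolts deviate by more than k * 0.1 mm.
sd = 0.1
deviation = 0.5          # mm; a hypothetical tolerance for illustration
k = deviation / sd       # 5 standard deviations
print(f"P(|diameter - 10| > {deviation} mm) <= {1 / k**2:.0%}")   # <= 4%

# Financial risk: expected return 8%, SD 15%.
# A loss worse than 20% means return < -20%, which implies |return - 8%| > 28%,
# so Chebyshev gives the (conservative, two-sided) bound (15/28)^2.
mu, sigma = 0.08, 0.15
loss_gap = 0.28
print(f"P(loss > 20%) <= {(sigma / loss_gap)**2:.1%}")            # about 28.7%

# The 30%-deviation bound quoted above: 30% = 2 sigma, so <= 1/4.
print(f"P(|return - 8%| > 30%) <= {(sigma / 0.30)**2:.2f}")
```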
Summary
| Inequality | Statement | Requires |
|---|---|---|
| Markov | $P(X \geq a) \leq E[X]/a$ | $X \geq 0$ |
| Chebyshev | $P(\lvert X - \mu\rvert \geq k\sigma) \leq 1/k^2$ | Finite variance |
| Key idea | Bound tail probabilities without knowing the distribution | |
Markov and Chebyshev are blunt instruments — and that's the point. They work for any distribution meeting their minimal assumptions. When you know more about the distribution, you can do better. When you don't, these are your safety net.
What's next
Chebyshev says the sample average probably stays near the true mean. The Law of Large Numbers makes this precise: the average converges to the mean as the sample grows.