Bayes' Rule

What do you think?
You test positive for a rare disease. The test is 99% accurate. How worried should you be?

You know that P(A|B) and P(B|A) are different. But here's the problem: you usually know one and need the other.

A doctor knows P(positive test | disease) from clinical trials. What you need is P(disease | positive test).

Can we flip the conditional around?

Deriving the formula

Building Bayes' Rule
P(H \cap E) = P(H) \cdot P(E|H)
Start with the multiplication rule. The probability of both H and E is the prior times the likelihood.
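Only the first step of the derivation survives above; the rest can be reconstructed from the multiplication rule (my reconstruction, not the original widget text):

```latex
% The intersection is symmetric, so condition the other way:
P(H \cap E) = P(E) \cdot P(H|E)
% Set the two expressions for P(H \cap E) equal:
P(E) \cdot P(H|E) = P(H) \cdot P(E|H)
% Divide both sides by P(E):
P(H|E) = \frac{P(E|H) \cdot P(H)}{P(E)}
```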
Bayes' Rule

P(H|E) = \Large\frac{P(E|H) \cdot P(H)}{P(E)}

Where:

  • P(H) is the prior: your belief before seeing evidence
  • P(E|H) is the likelihood: how probable the evidence is if H is true
  • P(E) is the marginal likelihood: how probable the evidence is overall
  • P(H|E) is the posterior: your belief after seeing evidence
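The formula translates directly into code. A minimal sketch (the function name and example numbers are illustrative, chosen to match the explorer's 10% → 33.3% example below):

```python
def posterior(prior: float, likelihood: float, marginal: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / marginal

# Prior 10%, likelihood 90%, marginal evidence probability 27%
print(round(posterior(0.10, 0.90, 0.27), 3))  # 0.333 -> about 33.3%
```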

Explore the formula

Adjust the three key values and watch how they interact.

Bayes' Explorer (interactive): adjust the prior, likelihood, and overall evidence probability; for example, a 10% prior rises to a 33.3% posterior P(H|E) when the likelihood ratio is 4.50×.
If you double the prior P(H), what happens to the posterior (approximately)? (one word)

Visualizing base rates

Out of 100,000 people, only 100 have the disease. The test is 99% accurate. If you test positive, how many others also tested positive but are healthy?

Frequency Grid: 99 true positives (True+) and 999 false positives (False+), so P(Disease | Positive) = 99 / 1098 ≈ 9.0%.

What do you think?
Why does a positive test give only ~9% chance of disease?

The medical test calculation

Here's the calculation step by step.

Calculating P(Disease | Positive)
P(E) = P(E|H) \cdot P(H) + P(E|\neg H) \cdot P(\neg H)
First, find the total probability of testing positive using the law of total probability.
If sensitivity increased to 99.9%, what would P(Disease|Positive) be? (nearest whole %)
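The same calculation as a sketch in code (helper name is mine), using the lesson's numbers of 0.1% prevalence, 99% sensitivity, and a 1% false-positive rate:

```python
def p_disease_given_positive(prior: float, sensitivity: float, fpr: float) -> float:
    """Posterior via the law of total probability for P(E)."""
    # P(E) = P(E|H)P(H) + P(E|not H)P(not H)
    p_e = sensitivity * prior + fpr * (1 - prior)
    return sensitivity * prior / p_e

print(round(p_disease_given_positive(0.001, 0.99, 0.01), 3))  # 0.09

# Try sensitivity = 0.999 to check your answer to the question above.
```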

The odds form

There's a more intuitive way to think about Bayes:

Bayes' Rule (Odds Form)

\Large\frac{P(H|E)}{P(\neg H|E)} = \frac{P(H)}{P(\neg H)} \times \frac{P(E|H)}{P(E|\neg H)}

In words: Posterior Odds = Prior Odds × Likelihood Ratio

What do you think?
You start with 1:99 odds against a disease (1% prior). A test with likelihood ratio of 99 comes back positive. What are your new odds?
The Evidence Bar (interactive): slide the prior and the strength of the evidence to see how far the bar shifts toward H.

Bayesian Updater (interactive): starting from a 1.0% prior, apply evidence repeatedly and watch the posterior update after each step.

The likelihood ratio tells you how much the evidence should shift your belief. If evidence is 10× more likely when H is true, multiply your odds by 10.
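The odds update is a single multiplication. A minimal sketch (helper names are mine), using the 10× example from the paragraph above:

```python
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds: float) -> float:
    """Convert odds for H to a probability of H."""
    return odds / (1 + odds)

# Evidence 10x more likely under H: odds go from 1:9 to 10:9
odds = update_odds(1 / 9, 10)
print(round(odds_to_prob(odds), 3))  # 0.526, i.e. 10/19
```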

When intuition fails

Bayes' Rule fixes common mistakes. Try each scenario:

What do you think?
Airport security flags passengers as 'suspicious' with 95% accuracy. Terrorists are 1 in 10 million. At a major airport, 1000 people are flagged daily. How many are actually terrorists?
What do you think?
A mammogram has 90% detection rate and 7% false positive rate. For women in their 40s (1.4% prevalence), a positive test means what probability of cancer?
What do you think?
DNA evidence matches the defendant with '1 in a million' accuracy. In a city of 10 million, what does this prove?
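You can check each intuition numerically. A sketch under stated assumptions: where a scenario only says "accuracy", I assume the false-positive rate is the complement (5% for the airport case):

```python
def posterior(prior: float, sensitivity: float, fpr: float) -> float:
    """P(H|E) via the law of total probability."""
    return sensitivity * prior / (sensitivity * prior + fpr * (1 - prior))

# Airport: 1-in-10-million prior; assumed 95% sensitivity, 5% false-positive rate
print(f"{posterior(1e-7, 0.95, 0.05):.1e}")  # 1.9e-06: flagged passengers are almost never terrorists

# Mammogram: 1.4% prevalence, 90% detection, 7% false positives
print(round(posterior(0.014, 0.90, 0.07), 2))  # 0.15: about a 15% chance of cancer

# DNA: a 1-in-a-million match rate across a city of 10 million
print(round(10_000_000 * 1e-6))  # 10 people expected to match by chance alone
```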

The prior matters

| Prior P(H) | Likelihood Ratio | Posterior P(H|E) |
| --- | --- | --- |
| 0.1% (very rare) | 99 | 9% |
| 1% (rare) | 99 | 50% |
| 10% (somewhat common) | 99 | 92% |

The same evidence, the same test accuracy, but wildly different conclusions. The prior is often more important than the test quality!
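The table rows can be regenerated with the odds form; a sketch (function name is mine) multiplying each prior's odds by the likelihood ratio of 99:

```python
def posterior_from_prior(prior: float, lr: float) -> float:
    """Posterior probability via posterior odds = prior odds x LR."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

for prior in (0.001, 0.01, 0.10):
    p = posterior_from_prior(prior, 99)
    print(f"prior {prior:.1%} -> posterior {p:.0%}")
```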

Prior → Posterior (interactive): with the likelihood ratio slider (30× to 1000×) near 100×, a 1.0% prior gives a 50.3% posterior; the posterior crosses 50% once the prior exceeds 0.99%.

Why it's hard

Your brain isn't wired for Bayes' Rule. We see "99% accurate test" and think "99% chance I have it."

Evolution optimized us for pattern recognition, not statistics. When you see smoke, you think fire because P(smoke|fire) is high. But in a building with a smoke machine, P(fire|smoke) drops to near zero.

The fix: Always ask "how common is the hypothesis before I saw this evidence?"

Calibration Challenge (1 of 5): a disease has 0.1% prevalence and the test is 99% accurate; you test positive. Guess P(disease | positive) as a whole-number percentage.

Test your understanding

A disease affects 2% of the population. Test: 98% sensitivity, 5% FPR. You test positive. P(disease|+)? (nearest whole %)
Prior odds 1:9 against, likelihood ratio = 3. What are posterior odds? (ratio, e.g. 2:5)
True or False: A 99% accurate test means a positive result has 99% chance of being correct. (true or false)

What's next

Bayes' Rule tells you how new information changes your beliefs. But sometimes new information doesn't change anything at all.

If I tell you it's raining in Seattle, does that change the probability your coin flip lands heads? No. The events are independent.

But here's where people get confused: independence is not the same as being mutually exclusive (disjoint). In fact, if two events are mutually exclusive, they're highly dependent. Learning one happened tells you the other definitely didn't.

That's the next lesson.