Adaptive scoring
A basketball player takes 100 free throws. She scores on her first throw and misses her second. From throw 3 onward, her probability of scoring equals the fraction of throws she's made so far.
For example, after 40 throws with 23 made, the probability of scoring on throw 41 is 23/40. Step through the first 20 throws to see how the probability adapts — notice how a hot streak raises your next-throw probability, creating a self-reinforcing loop:
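The step-through above can be sketched in a few lines of Python. This is a minimal illustration, not the page's own simulation; the seed is arbitrary, chosen only to make the printout reproducible:

```python
import random

random.seed(1)  # arbitrary seed, for a reproducible printout

# Throws 1 and 2 are fixed: score, then miss.
made, taken = 1, 2
for throw in range(3, 21):
    p = made / taken              # scoring probability = fraction made so far
    scored = random.random() < p  # adaptive throw
    made += scored
    taken += 1
    print(f"throw {throw:2d}: p = {p:.3f} -> {'score' if scored else 'miss'}")
```

Each score nudges `p` up and each miss nudges it down, which is exactly the self-reinforcing loop described above.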
Run the simulation and check the distribution. It looks flat — every score from 1 to 99 is equally likely.
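A Monte Carlo check of that flatness, as a sketch (the function name `play` and the trial count are illustrative choices):

```python
import random
from collections import Counter

def play(n=100, rng=random.random):
    """Simulate one game of n adaptive throws; return baskets made."""
    made, taken = 1, 2            # throw 1: score, throw 2: miss
    while taken < n:
        if rng() < made / taken:  # adaptive scoring probability
            made += 1
        taken += 1
    return made

random.seed(0)
counts = Counter(play() for _ in range(100_000))
# Every score 1..99 should appear with frequency near 1/99, about 1.01%
for k in (1, 25, 50, 75, 99):
    print(f"score {k:2d}: {counts[k] / 100_000:.4f}")
```

With 100,000 trials, every frequency lands close to 1/99, whether the score is extreme (1 or 99) or middling (50).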
Starting small
Don't let the number 100 intimidate you. Start with the smallest case and look for a pattern.
After n = 3 throws (first throw: score, second throw: miss):
The third throw has probability 1/2 of scoring (1 make in 2 throws).
| Outcome of throw 3 | Baskets after 3 throws | Probability |
|---|---|---|
| Score | 2 | 1/2 |
| Miss | 1 | 1/2 |
So P(3, 1) = 1/2 and P(3, 2) = 1/2, where P(n, k) denotes the probability of exactly k baskets after n throws. Both equally likely!
After n = 4 throws, conditioning on the two equally likely states after throw 3:

P(4, k) = 1/3 for all k = 1, 2, 3

Each possible basket count is equally likely. The pattern holds!
Slide the slider to see: for every n, the distribution is perfectly flat. This is not coincidence — it's provable by induction.
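Before proving it, we can confirm the pattern exactly with dynamic programming over rational probabilities. A sketch, assuming nothing beyond the rules stated above (the helper name `exact_dist` is ours):

```python
from fractions import Fraction

def exact_dist(n):
    """Exact distribution of baskets after n throws (throw 1: score, 2: miss)."""
    dist = {1: Fraction(1)}  # after 2 throws: exactly 1 basket, with certainty
    for taken in range(2, n):
        nxt = {}
        for made, p in dist.items():
            q = Fraction(made, taken)  # adaptive scoring probability
            nxt[made + 1] = nxt.get(made + 1, 0) + p * q        # score
            nxt[made] = nxt.get(made, 0) + p * (1 - q)          # miss
        dist = nxt
    return dist

for n in (3, 4, 5, 10):
    d = exact_dist(n)
    assert all(p == Fraction(1, n - 1) for p in d.values())
    print(f"n = {n:2d}: every count in {sorted(d)} has probability 1/{n - 1}")
```

Using `Fraction` instead of floats means the uniformity check is exact, not approximate.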
The induction proof
After n throws (with the first a score and the second a miss, and adaptive probability from throw 3 onward), the number of baskets satisfies P(n, k) = 1/(n - 1) for each k = 1, 2, ..., n - 1.
Base case: n = 2 gives P(2, 1) = 1, and indeed after two throws exactly one basket has been made. ✓
Inductive step: Assume P(n, k) = 1/(n - 1) for all k = 1, ..., n - 1. We need to show P(n + 1, k) = 1/n for all k = 1, ..., n.
By the law of total probability, to have k baskets after throw n + 1 the player either had k baskets and misses (probability (n - k)/n) or had k - 1 baskets and scores (probability (k - 1)/n):

P(n + 1, k) = P(n, k) · (n - k)/n + P(n, k - 1) · (k - 1)/n

The cancellation makes it work. The "miss" term contributes (1/(n - 1)) · (n - k)/n and the "score" term contributes (1/(n - 1)) · (k - 1)/n. Together they sum to (1/(n - 1)) · (n - 1)/n = 1/n, which is independent of k. This uniformity is maintained at every step.
So P(n + 1, k) = 1/n for all valid k, completing the induction. For n = 100: P(100, k) = 1/99 for each k = 1, 2, ..., 99.
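The cancellation can also be checked mechanically with exact arithmetic. A small sketch (note the boundary cases k = 1 and k = n work because the out-of-range term is multiplied by a zero weight):

```python
from fractions import Fraction

# Inductive step: miss term + score term = 1/n, independent of k.
for n in range(2, 50):
    for k in range(1, n + 1):
        miss = Fraction(1, n - 1) * Fraction(n - k, n)   # had k baskets, missed
        score = Fraction(1, n - 1) * Fraction(k - 1, n)  # had k - 1, scored
        assert miss + score == Fraction(1, n)
print("cancellation verified for all n < 50")
```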
Why is this surprising?
The adaptive probability creates a self-reinforcing process. When the player is doing well (high scoring rate), she's more likely to keep scoring. When she's doing poorly, she's more likely to keep missing. This is a Pólya urn process — it's like adding a red ball to an urn every time you draw red, and a blue ball every time you draw blue. Watch multiple games unfold simultaneously:
Intuitively, you'd expect this reinforcement to create extreme outcomes — lots of games with very high or very low scores. And indeed, each individual score is just 1/99 ≈ 1%. But the remarkable fact is that no score is more likely than any other. The reinforcement doesn't create peaks; it creates perfect uniformity.
Start with one red ball and one blue ball. Draw a ball, note its color, return it along with one new ball of the same color. This is equivalent to the basketball scoring process (with one "score" ball and one "miss" ball initially). The limiting fraction of red balls is uniformly distributed on [0, 1].
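A quick empirical sketch of that limiting uniformity (the function name `polya_fraction` and the step/trial counts are illustrative; 1,000 draws only approximates the limit):

```python
import random

def polya_fraction(steps=1_000):
    """Run a Polya urn from 1 red + 1 blue; return the final fraction of red."""
    red, total = 1, 2
    for _ in range(steps):
        if random.random() < red / total:  # drew red -> add a red ball
            red += 1
        total += 1
    return red / total

random.seed(2)
samples = sorted(polya_fraction() for _ in range(10_000))
# If the limit is uniform on [0, 1], the sample quartiles
# should land near 0.25, 0.50, and 0.75.
print([round(samples[i * len(samples) // 4], 2) for i in (1, 2, 3)])
```

The quartiles come out close to 0.25, 0.50, and 0.75, exactly what a uniform limit predicts — despite the reinforcement dynamics.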
Competition appearances
This problem appears in quant interviews (Two Sigma, DE Shaw), in Putnam competition problems about sequences with adaptive probabilities, in Pólya urn theory as a foundational example of stochastic processes, and in machine learning where the Chinese Restaurant Process and Dirichlet-Multinomial models are generalizations.
Don't be intimidated by large numbers. Start with the smallest case, compute, look for a pattern, and prove it by induction. The number 100 was a red herring.
The takeaway
Two techniques drive the solution: start small and find patterns before tackling the full problem, then use induction to confirm the pattern. The uniformity comes from a perfect cancellation in the law of total probability. In competitions, when the problem gives you a large number (100, 1000, etc.), it's almost always a hint to solve the small case first.