If TWC is really calibrated, then your conditions 5 and 6 are false, no?
On Feb 13, 2009, at 4:28 PM, Lehner, Paul E. wrote:
I was working on a set of instructions to teach simple two-hypothesis/one-evidence Bayesian updating. I came across a problem that perplexed me. This can’t be a new problem, so I’m hoping someone will clear things up for me.
The problem
1. Question: What is the chance that it will snow next Monday?
2. My prior: 5% (because it typically snows about 5% of the
days during the winter)
3. Evidence: The Weather Channel (TWC) says there is a “70% chance of snow” on Monday.
4. TWC forecasts of snow are calibrated.
My initial answer is to claim that this problem is underspecified.
So I add
5. On winter days that it snows, TWC forecasts “70% chance of snow” about 10% of the time.
6. On winter days that it does not snow, TWC forecasts “70% chance of snow” about 1% of the time.
So now from P(S)=.05, P(“70%”|S)=.10, and P(“70%”|¬S)=.01, I apply Bayes’ rule and deduce my posterior probability to be P(S|“70%”) ≈ .3448.
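
A quick numerical check of that update, as a minimal Python sketch using only the numbers stated above (the variable names are mine):

# Bayes' rule for the two-hypothesis / one-evidence snow problem,
# using exactly the numbers given in the post.
p_snow = 0.05           # P(S): prior, the winter base rate of snow
p_f_given_snow = 0.10   # P("70%" | S): forecast says "70%" on days it snows
p_f_given_dry = 0.01    # P("70%" | not S): forecast says "70%" on days it does not snow

numerator = p_f_given_snow * p_snow
evidence = numerator + p_f_given_dry * (1 - p_snow)
posterior = numerator / evidence            # P(S | "70%")

print(f"P(S | '70%') = {posterior:.4f}")    # prints 0.3448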
Now it seems particularly odd that I would conclude there is only a 34% chance of snow when TWC says there is a 70% chance. TWC knows so much more about weather forecasting than I do.
What am I doing wrong?
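
One way to see the tension that the reply above points at, sketched here as a hedged aside rather than as part of either message: calibration would mean P(S|“70%”) = 0.70, and with a 5% prior that pins down the likelihood ratio that conditions 5 and 6 would have to supply.

# If TWC is calibrated, the posterior after a "70%" forecast must be 0.70.
# Posterior odds = prior odds * likelihood ratio, so a 5% prior determines
# the likelihood ratio that conditions 5 and 6 would have to provide.
p_snow = 0.05
posterior_if_calibrated = 0.70

prior_odds = p_snow / (1 - p_snow)                                        # ~0.053
posterior_odds = posterior_if_calibrated / (1 - posterior_if_calibrated)  # ~2.33
required_lr = posterior_odds / prior_odds                                 # ~44.3

stated_lr = 0.10 / 0.01  # likelihood ratio implied by conditions 5 and 6

print(f"likelihood ratio required by calibration: {required_lr:.1f}")  # 44.3
print(f"likelihood ratio stated in 5 and 6:       {stated_lr:.1f}")    # 10.0

So, given the 5% base rate, calibration and conditions 5 and 6 cannot all hold at once, which is the point of the reply above.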
Paul E. Lehner, Ph.D.
Consulting Scientist
The MITRE Corporation
(703) 983-7968
pleh...@mitre.org
_______________________________________________
uai mailing list
uai@ENGR.ORST.EDU
https://secure.engr.oregonstate.edu/mailman/listinfo/uai