I found this piece, which walks through the following example with some good discussion:
10 out of 1000 women at age forty who participate in routine screening have breast cancer. 800 out of 1000 women with breast cancer will get positive mammographies. 96 out of 1000 women without breast cancer will also get positive mammographies. If 1000 women in this age group undergo a routine screening, about what fraction of women with positive mammographies will actually have breast cancer?
The answer is about 7.8%: of 1000 women screened, 8 of the 10 with cancer test positive, but so do about 95 of the 990 without it, so only 8 of roughly 103 positives are real. That's why needle biopsies are done, though they can be read wrong too. Some error is always unavoidable.
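The arithmetic is easy to check. Here is a quick Python sketch of the Bayes calculation, using only the numbers given in the problem above:

```python
# Numbers from the mammography problem above.
prior = 10 / 1000           # P(cancer): 10 in 1000 women have breast cancer
sensitivity = 800 / 1000    # P(positive | cancer)
false_positive = 96 / 1000  # P(positive | no cancer)

# Bayes' theorem: P(cancer | positive)
p_positive = prior * sensitivity + (1 - prior) * false_positive
posterior = prior * sensitivity / p_positive

print(f"P(cancer | positive mammography) = {posterior:.1%}")  # -> 7.8%
```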
She says doctors themselves generally get such stats wrong. Here's another example:
Here is a simple example, based on Mike Shor's Java applet. Suppose you have tested positive for a disease; what is the probability that you actually have it? That depends on the accuracy and sensitivity of the test, and on the background (prior) probability of the disease...
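I don't know exactly which knobs Shor's applet exposes, but the calculation behind any such applet is the same Bayes' rule, parameterized by the test's sensitivity, its false positive rate, and the prior. A hypothetical sketch (the function name and example numbers are mine, not the applet's):

```python
def posterior(prior: float, sensitivity: float, false_positive_rate: float) -> float:
    """P(disease | positive test) via Bayes' theorem.

    prior               -- background probability of the disease, P(disease)
    sensitivity         -- P(positive | disease)
    false_positive_rate -- P(positive | no disease)
    """
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# A rare disease (0.1% prior) with a fairly accurate test (99% sensitive,
# 5% false positives) still yields a low posterior:
print(f"{posterior(0.001, 0.99, 0.05):.1%}")  # about 1.9%
```

The pattern is the same as in the mammography problem: when the prior is low, false positives from the healthy majority swamp the true positives.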
Both brief presentations are worth reading. The current thinking seems to be that Bayesian analysis is the only reliable way to interpret data these days, and that data which hasn't been subjected to it may not be worth much.
In my field of mental illness, the data is so squishy to start with that I am a skeptic about everything I read anyway. I have seen very few reports in psychiatry that have been subjected to Bayesian analysis, so most of them are probably not worth much.
My own experience is a better teacher, which is, I suppose, sort of Bayesian.