We are a commune of inquiring, skeptical, politically centrist, capitalist, anglophile, traditionalist New England Yankee humans, humanoids, and animals with many interests beyond and above politics. Each of us has had a high-school education (or GED), but all had ADD so didn't pay attention very well, especially the dogs. Each one of us does "try my best to be just like I am," and none of us enjoys working for others, including for Maggie, from whom we receive neither a nickel nor a dime. Freedom from nags, cranks, government, do-gooders, control-freaks and idiots is all that we ask for.
I found this piece, which works through this example with some good discussion:
10 out of 1000 women at age forty who participate in routine screening have breast cancer. 800 out of 1000 women with breast cancer will get positive mammographies. 96 out of 1000 women without breast cancer will also get positive mammographies. If 1000 women in this age group undergo a routine screening, about what fraction of women with positive mammographies will actually have breast cancer?
The answer is 7.8%. That's why needle biopsies are done, but they can be read wrong too. Some error is always unavoidable.
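That 7.8% can be checked directly from the quoted counts. A quick sketch in Python (the variable names are mine, just for illustration):

```python
# Mammography example from the text: 10 in 1,000 women have breast
# cancer, 800 in 1,000 with cancer test positive (sensitivity), and
# 96 in 1,000 without cancer also test positive (false positives).
women = 1000
with_cancer = 10
true_positives = with_cancer * 800 / 1000             # 8 women
false_positives = (women - with_cancer) * 96 / 1000   # 95.04 women

# Of all positive mammographies, what fraction actually have cancer?
posterior = true_positives / (true_positives + false_positives)
print(f"{posterior:.1%}")  # -> 7.8%
```

The surprise is that the false positives from the large healthy group (95 women) swamp the true positives from the small sick group (8 women).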
She says doctors themselves generally get such stats wrong. Here's another example:
Here is a simple example, based on Mike Shor's Java applet. Suppose you have been tested positive for a disease; what is the probability that you actually have the disease? It depends on the accuracy and sensitivity of the test, and on the background (prior) probability of the disease...
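The three quantities named there (prior, sensitivity, specificity) combine via Bayes' rule. A minimal sketch, with a function name and parameters of my own choosing (not from Shor's applet), checked against the mammography numbers above:

```python
def posterior_prob(prior, sensitivity, specificity):
    """P(disease | positive test) by Bayes' rule."""
    true_pos = prior * sensitivity               # sick and test positive
    false_pos = (1 - prior) * (1 - specificity)  # healthy but test positive
    return true_pos / (true_pos + false_pos)

# Mammography example: prior 1%, sensitivity 80%,
# specificity 90.4% (i.e. a 9.6% false-positive rate).
print(posterior_prob(0.01, 0.80, 0.904))
```

Lowering the prior while holding the test's accuracy fixed drags the posterior down with it, which is the whole point of the example.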
It's worth reading both brief presentations. The current thinking seems to be that Bayesian is the only reliable approach for data these days, and, if data has not been subject to it, it might not be worth much.
In my field of mental illness, the data is always so squishy to start out with that I am a skeptic about everything I read anyway. I have seen very few reports in Psychiatry which have been subject to Bayesian analysis, so by that standard most of them are probably not worth much.
My experience is a better teacher which is, I suppose, sort of Bayesian.
Neither of those strikes me as a particularly good example of Bayesian reasoning; both are easily handled as traditional statistics problems. The conclusion about the 'only reliable approach' is nonsense. Bayesian statistics lends itself easily to abuse via poorly chosen priors, crappy models, and old-fashioned incompetence. Intoning the magic word 'Bayesian' and waving a wand over a paper doesn't impart any magical properties.
Statistics are evil, no matter the kind. Too easily they snare those without thought, and/or the ability to think. They lead to the albatross we live with now. Yes, experience is the better teacher, though by the time we gain it, we're near losing it.
The problems mentioned are quite standard and you can find them covered in many places. Plus I would point out that the first uses frequentist priors and provides a frequentist result; it doesn't really capture the flavor of the Bayesian approach. I think Jaynes's book might be a better introduction, especially the first couple of chapters for the non-mathematical reader. Unfortunately, it is no longer free online. There are many other good books these days, and Jaynes's papers can be a fun read. I don't agree with everything he says (he was a bit of a fanatic), but they are still interesting.
Here's a simpler example. Suppose some one person stole some money and there are a hundred possible suspects. You use a lie detector, which has a 99% chance of a positive if you are guilty, and a 99% chance of a negative if you are innocent. Someone tests positive. What are the chances the person is guilty?
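Working through the arithmetic with the numbers given above (a sketch; the variable names are mine):

```python
# Lie-detector example: 1 guilty suspect among 100, and the test
# is 99% accurate both ways (99% sensitivity, 99% specificity).
prior = 1 / 100
true_pos = prior * 0.99          # guilty and tests positive
false_pos = (1 - prior) * 0.01   # innocent but tests positive

p_guilty = true_pos / (true_pos + false_pos)
print(p_guilty)  # -> 0.5
```

With 99 innocent suspects, the expected one false positive exactly balances the expected one true positive, so a positive result leaves it at a coin flip.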
You said: "In my field of mental illness, the data is often too squishy...." You are correct. Bayesian or not, predicting the probability of the outcome of a disease, or more generally any probability associated with living systems, is complicated by the non-homogeneous nature of the factors under study. The complexity of living systems confounds the statistician.
Therein lies a lesson that politicians and leaders usually fail to learn. Humans confound prediction and control because they adjust to conditions. If you raise taxes and expect to gain a specific amount of revenue, the humans paying the taxes will change their actions to limit their liability, and your revenue estimates are never met.

Many people with mental health issues learn to overcome, or at least compensate for, their issues. This compensation effort is hindered by well-intended attempts to alleviate their condition, such as extra attention and rules that allow them to continue functioning at their expected low levels.

Have you ever seen a three-legged dog? Isn't it amazing how the dog compensates for its disability? What would be the result if the owner did for the dog everything the dog should be doing for itself? The dog would not learn how to overcome the disability, and when the well-intended assistance eventually ended, the dog might well die or suffer a serious adjustment period.

Where is the middle ground, where you are actually helping the person with a mental disability and not either making the problem worse or, at the least, preventing the adjustment/compensation process? That is the tough question. Do we make problems worse with our efforts to make them better?