Tuesday, August 8, 2017

The scourge of p-values

Statistician Andrew Gelman posts this excerpt from a recent paper in JAMA, one of the top medical journals:
Nineteen of 203 patients treated with statins and 10 of 217 patients treated with placebo met the study definition of myalgia (9.4% vs 4.6%; P = .054). This finding did not reach statistical significance, but it indicates a 94.6% probability that statins were responsible for the symptoms.
This is statistical nonsense, and it shows that the JAMA editors do not understand p-values. A commenter on Gelman's post asks:
does anyone have a good, brief, layperson-accessible reference on correct (or at least skillful) interpretation of p-values?

No, this doesn’t exist and probably cannot exist at this point. So many misunderstandings need to be unraveled (and each person probably needs a personalized explanation) that it will take much longer.
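For the record, here is what the quoted p-value actually measures. It is not the probability that statins caused the symptoms; it is (roughly) the probability of seeing a group difference at least this large if statins had no effect at all. A minimal stdlib-Python sketch makes this concrete with a permutation test on the JAMA counts. (The paper's exact test statistic is not stated in the excerpt, so this is an illustration of the concept, not a reproduction of the published .054.)

```python
import random

# Myalgia counts from the quoted JAMA excerpt.
statin_n, statin_cases = 203, 19      # 9.4% with myalgia
placebo_n, placebo_cases = 217, 10    # 4.6% with myalgia

observed_diff = statin_cases / statin_n - placebo_cases / placebo_n

# Null hypothesis: statins have no effect, so the 29 myalgia cases fall
# among the 420 patients without regard to treatment. The p-value
# estimates how often random assignment alone produces a difference in
# myalgia rates at least as large as the one actually observed.
random.seed(1)
pool = [1] * 29 + [0] * (statin_n + placebo_n - 29)
trials, extreme = 20000, 0
for _ in range(trials):
    random.shuffle(pool)
    diff = sum(pool[:statin_n]) / statin_n - sum(pool[statin_n:]) / placebo_n
    if abs(diff) >= observed_diff:  # two-sided test
        extreme += 1

p_value = extreme / trials
print(f"simulated p-value ~ {p_value:.3f}")
```

Note that nothing in this calculation refers to the probability that statins work; the simulation assumes throughout that they don't. Turning 1 − .054 into "a 94.6% probability that statins were responsible" inverts the conditional, which is exactly the error in the JAMA passage.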
Gelman regularly attacks misuse of p-values, but even he got caught explaining them wrong.

The situation appears hopeless. The p-value mess hit the fan 5 or 10 years ago, when John Ioannidis argued that most published research findings are false, Daryl Bem used standard p-value methodology to show that people have psychic powers, and studies showed that most medical and psychological research fails to replicate.

In spite of all this, p-values are used as much as ever, and our top journal editors continue to misunderstand them. Our top statisticians cannot even point to a good layman's tutorial.

1 comment:

  1. Are you suggesting that p-values are bad because Bem's experiments showed that people may have psychic ability? Bem apparently does believe that people have psychic ability.