Friday, March 11, 2016

Statisticians complain about p-values

Nature reports:
Misuse of the P value — a common test for judging the strength of scientific evidence — is contributing to the number of research findings that cannot be reproduced, the American Statistical Association (ASA) warns in a statement released today. The group has taken the unusual step of issuing principles to guide use of the P value, which it says cannot determine whether a hypothesis is true or whether results are important.

This is the first time that the 177-year-old ASA has made explicit recommendations on such a foundational matter in statistics, says executive director Ron Wasserstein. The society’s members had become increasingly concerned that the P value was being misapplied in ways that cast doubt on statistics generally, he adds.

In its statement, the ASA advises researchers to avoid drawing scientific conclusions or making policy decisions based on P values alone. ...

The statement’s six principles, many of which address misconceptions and misuse of the p-value, are the following (the second is illustrated with a short simulation after the list):

1. P-values can indicate how incompatible the data are with a specified statistical model.

2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.

3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.

4. Proper inference requires full reporting and transparency.

5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.

6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
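
To make the second principle concrete, here is a minimal simulation sketch in Python with numpy/scipy. The setup is hypothetical, not from the ASA statement: assume 90% of tested hypotheses are true nulls and 10% carry a real effect of 0.5 standard deviations. Among experiments that come out "significant" at p < 0.05, the null can still be true far more than 5% of the time:

```python
# Sketch: a p-value below 0.05 is not a 5% chance that H0 is true.
# Hypothetical setup: 90% of tested hypotheses are null (no effect),
# 10% carry a real effect of 0.5 SD; each experiment has n = 30.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 30, 100_000
null_fraction, effect = 0.9, 0.5

is_null = rng.random(trials) < null_fraction
means = np.where(is_null, 0.0, effect)
samples = rng.normal(means[:, None], 1.0, size=(trials, n))

# one-sample t-test of each simulated experiment against zero
_, p = stats.ttest_1samp(samples, 0.0, axis=1)

sig = p < 0.05
print(f"significant results where H0 is actually true: "
      f"{is_null[sig].mean():.0%}")
```

With these assumed numbers, roughly a third of the "significant" findings are false positives, which is one way principle 2 connects to the reproducibility complaint in the Nature piece.
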
In the medical and social sciences, p-values rule. Every paper cites them. The p-value determines whether the paper is publishable or not.
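
One reason that gate is so weak: run enough independent tests on pure noise and some will clear the 0.05 threshold by chance alone. A minimal sketch in the same spirit, with all numbers illustrative:

```python
# Sketch: 20 independent t-tests on pure noise, repeated many times.
# Expected rate of >= 1 "significant" result: 1 - 0.95**20, about 64%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, n, runs = 20, 30, 5_000

hits = 0
for _ in range(runs):
    noise = rng.normal(0.0, 1.0, size=(n_tests, n))  # no real effects
    _, p = stats.ttest_1samp(noise, 0.0, axis=1)
    hits += bool((p < 0.05).any())

print(f"runs with at least one significant test: {hits / runs:.0%}")
```
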

Papers do not get retracted for bogus use of p-values, but people raised a storm recently when the word "Creator" sneaked into a paper. It appears to have been just a mistranslation, as the authors intended "nature" or something similar.

2 comments:

  1. I think we can blame that on poor university educations. Find me a GOOD book on experimental design and comprehensive statistics. What exists is just slop that leaves everyone to stumble through old journal articles to uncover methodologies. If you don't teach it, don't be surprised that people don't know anything. American universities were always overrated, especially their research, which contributes very little (about 3% of overall research). Most of this "research" isn't even important. When it comes to major medical or drug studies, all kinds of scrutiny is applied. Only academics think this stuff matters enough to double-check. Profits and markets are the best vetting mechanism.

    The realm of psychology has never had much evidence for its methods anyway. Some of the original findings in the field showed that professionals had about as much effect as any amateur. They set the bar extremely low and play games with the statistics. These quacks are just labor cartels. Publishing some fancy papers doesn't change the fact that their "research" is a make-work scam to justify the skims.

  2. Climate science is now chock-full of p-values pretending to be evidence, as are HEP and every other governmentally dependent, politically driven realm of 'science'. Eisenhower really had this cancerous nonsense pegged decades ago.
