Monday, August 15, 2016

Confusion about p-values

A journalist who interviewed researchers at METRICS writes:
To be clear, everyone I spoke with at METRICS could tell me the technical definition of a p-value — the probability of getting results at least as extreme as the ones you observed, given that the null hypothesis is correct — but almost no one could translate that into something easy to understand.

It’s not their fault, said Steven Goodman, co-director of METRICS. Even after spending his “entire career” thinking about p-values, he said he could tell me the definition, “but I cannot tell you what it means, and almost nobody can.” Scientists regularly get it wrong, and so do most textbooks, he said.
Statistician Andrew Gelman tries to explain p-values here and here.

The simple explanation of the p-value is that if it is less than 5%, then your results are publishable.

So you do your study, collect your data, and ask whether you have statistically significant evidence for whatever you want to prove. If so, the journal editors will publish it. For whatever historical reasons, they have agreed that the p-value is the best single number for making that determination.
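To make the quoted definition concrete, here is a minimal sketch in Python using a made-up example (not from the article): a coin-flip experiment where the null hypothesis is that the coin is fair, and the p-value is the probability, under that null, of an outcome at least as extreme as the one observed.

```python
# Sketch of the textbook definition: the p-value is the probability,
# assuming the null hypothesis, of data at least as extreme as observed.
# Hypothetical example: 60 heads in 100 flips, null hypothesis p = 0.5.
from math import comb

def binom_pmf(k, n, p=0.5):
    # Probability of exactly k heads in n flips of a coin with bias p.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p_value(observed_heads, n, p=0.5):
    # Sum the probabilities of every outcome at least as far from the
    # expected count (n * p) as the observed count.
    expected = n * p
    deviation = abs(observed_heads - expected)
    return sum(binom_pmf(k, n, p) for k in range(n + 1)
               if abs(k - expected) >= deviation)

pval = two_sided_p_value(60, 100)
print(pval)  # roughly 0.057: above 0.05, so "not significant"
```

Note what this number is and is not: it is the chance of data this extreme given a fair coin, not the chance that the coin is fair given the data, which is exactly the translation step that trips people up.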
