
Friday, April 19, 2019

Statisticians want to abandon science’s standard measure of ‘significance’ | Science & Society - Science News

Here’s why “statistically significant” shouldn’t be a stamp of scientific approval, according to Bethany Brookshire, science writer with Science News magazine and Science News for Students.

The concept of “statistical significance” has become scientific shorthand for a finding’s worth. What might science look like without it?
Photo: nicolas_/iStock/Getty Images Plus
In science, the success of an experiment is often determined by a measure called “statistical significance.” A result is considered to be “significant” if the difference observed in the experiment between groups (of people, plants, animals and so on) would be very unlikely if no difference actually existed. The common cutoff for “very unlikely” is that you’d see a difference as big or bigger only 5 percent of the time if it weren’t really there — a cutoff that might seem, at first blush, very strict.
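The definition above can be made concrete with a small simulation. The sketch below (with made-up measurements, not data from any real study) uses a permutation test: it repeatedly shuffles the group labels and counts how often a mean difference at least as large as the observed one appears by chance alone, which is exactly the probability the article describes.

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=0):
    """Estimate how often a mean difference at least as large as the
    observed one would appear if the group labels were meaningless."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    at_least_as_big = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # pretend the group assignment was random
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            at_least_as_big += 1
    return at_least_as_big / n_permutations

# Hypothetical measurements from two experimental groups
treated = [5.1, 5.9, 6.2, 5.7, 6.0, 5.4, 6.3, 5.8]
control = [4.9, 5.2, 5.0, 5.3, 4.8, 5.1, 5.5, 5.0]

p = permutation_p_value(treated, control)
print(f"p = {p:.4f}")  # "significant" by the usual convention iff p < 0.05
```

With these invented numbers the observed difference is large relative to the scatter, so the shuffled labels rarely reproduce it and the estimated P value falls below the 0.05 cutoff.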

It sounds esoteric, but statistical significance has been used to draw a bright line between experimental success and failure. Achieving an experimental result with statistical significance often determines if a scientist’s paper gets published or if further research gets funded. That makes the measure far too important in deciding research priorities, statisticians say, and so it’s time to throw it in the trash.

More than 800 statisticians and scientists are calling for an end to judging studies by statistical significance in a March 20 comment published in Nature. An accompanying March 20 special issue of the American Statistician makes the manifesto crystal clear in its introduction: “‘statistically significant’ — don’t say it and don’t use it.”...
 
What’s the problem with statistical significance? 
But science and statistics have never been so simple as to cater to convenient cutoffs. A P value, no matter how small, is just a probability. It doesn’t mean an experiment worked. And it doesn’t tell you if the difference in results between experimental groups is big or small. In fact, it doesn’t even say whether the difference is meaningful.
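The point that a small P value need not mean a meaningful difference is easy to demonstrate numerically. The sketch below uses a standard two-sample z-test with invented numbers (the means, standard deviation, and sample sizes are all hypothetical): with a huge sample, a practically negligible difference produces a far smaller P value than a large difference measured in a small study.

```python
import math

def two_sample_z_p_value(mean_a, mean_b, sd, n):
    """Two-sided p-value for a difference of two sample means, assuming a
    known common standard deviation and equal group sizes (a z-test)."""
    z = abs(mean_a - mean_b) / math.sqrt(2 * sd**2 / n)
    return math.erfc(z / math.sqrt(2))  # two-sided normal tail probability

# A tiny, practically meaningless difference in an enormous sample...
tiny_effect_p = two_sample_z_p_value(100.0, 100.1, sd=5.0, n=1_000_000)
# ...versus a large difference in a small study.
big_effect_p = two_sample_z_p_value(100.0, 110.0, sd=5.0, n=10)

print(tiny_effect_p)  # far below 0.05 despite a negligible effect
print(big_effect_p)
```

Both results clear the 0.05 bar, but the tiny effect yields by far the smaller P value — the statistic reflects sample size as much as it reflects the size of the difference.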

The 0.05 cutoff has become shorthand for scientific quality, says Blake McShane, one of the authors on the Nature commentary and a statistician at Northwestern University in Evanston, Ill. “First you show me your P less than 0.05, and then I will go and think about the data quality and study design,” he says. “But you better have that [P less than 0.05] first.”
Read more...

Source: Science News