
Tuesday, April 16, 2019

Scientists, Data Scientists And Significance by Mike James | iProgrammer

In a recent special issue of The American Statistician, scientists are urged to stop using the term "statistically significant". So what should we be using? Is this just ignorance triumphing over good practice?

There are many who think that science is in a state of crisis, beset by irreproducible, and even fraudulent, results. It is easy to point the finger at the recipe that statisticians have given us for "proving" that something is so. It is a bit of a surprise to discover that at least 43 statisticians (the number of papers in the special edition) are pointing the finger at themselves! However, it would be a mistake to think that statisticians are one happy family. There are the Frequentists and the Bayesians, to name but two warring factions.

The problem really is that many statisticians are doubtful about what probability actually is. Many of them don't reason about probability any better than the average scientist, and the average scientist is often lost and confused by the whole deal.

If you are a Frequentist then probability is, in principle, a measurable thing. If you want to know the probability that a coin will fall heads then you can toss it 10 times and get a rough answer, toss it 100 times and get a better answer, toss it 1000 times and get a better answer still, and so on...
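The Frequentist idea above is easy to demonstrate by simulation. The sketch below (a hypothetical illustration, not from the article) estimates the probability of heads for a fair coin from increasing numbers of tosses, and the estimate converges toward 0.5 as the number of tosses grows:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def estimate_heads_probability(n_tosses):
    """Estimate P(heads) as the relative frequency of heads in n_tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# More tosses give a better (lower-variance) estimate of the true 0.5
for n in (10, 100, 1000, 100000):
    print(f"{n:>6} tosses: estimate = {estimate_heads_probability(n):.3f}")
```

The key Frequentist point is that the probability is defined as the limit of this relative frequency as the number of trials goes to infinity.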

A much bigger problem is the repeated-experiment situation. If you are using tests with a significance level of 5%, then if you repeat the experiment 100 times you can expect to see five significant results purely by chance. I was once asked why, in a ten-by-ten correlation matrix, there were always a handful of good significant correlations. When I explained why this was always the case, I was told that the researcher was going to forget what he had just discovered and I was never to repeat it. Yes, measuring lots of things and being surprised at a handful of significant results is an important experimental tool. If repeated attempts at finding something significant were replaced by something more reliable, the number of papers in many subjects would drop to a trickle. This is a prime cause of the irreproducibility of results: a repeat generally finds the same number of significant results, just a different set.
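The multiple-comparisons effect described above can also be shown by simulation. The sketch below (a hypothetical illustration, assuming a simple two-sided z-test of a fair coin) runs 100 experiments in which the null hypothesis is true by construction, yet roughly five of them come out "significant" at the 5% level:

```python
import math
import random

random.seed(1)  # fixed seed so the run is reproducible

def null_experiment_is_significant(n_tosses=100):
    """Toss a fair coin n_tosses times and z-test H0: P(heads) = 0.5.

    H0 is true here by construction, so any 'significant' result
    is a false positive.
    """
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    # Normal approximation: mean = n/2, std = sqrt(n * 0.25)
    z = (heads - n_tosses * 0.5) / math.sqrt(n_tosses * 0.25)
    return abs(z) > 1.96  # two-sided test at the 5% level

n_experiments = 100
false_positives = sum(null_experiment_is_significant()
                      for _ in range(n_experiments))
print(f"{false_positives} of {n_experiments} null experiments "
      f"were 'significant' at the 5% level")
```

Run enough null experiments and the false-positive rate settles near 5%, which is exactly why a ten-by-ten correlation matrix (roughly 45 distinct pairwise tests) reliably yields a handful of "significant" correlations even from pure noise.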

Source: iProgrammer