The real problem is that many statisticians are unsure what probability actually is. Many of them reason about probability no better than the average scientist, and the average scientist is often lost and confused by the whole business.
If you are a Frequentist, then probability is, in principle, a measurable thing. If you want to know the probability that a coin will fall heads, you can toss it 10 times and get a rough answer, toss it 100 times and get a better answer, 1000 times and get a better answer still, and so on...
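The Frequentist idea of probability as the long-run proportion of heads can be sketched in a few lines of Python. This is an illustrative simulation, not from the article; the function name and the choice of a seeded generator are mine.

```python
import random

def estimate_heads_probability(n_tosses, rng):
    """Estimate P(heads) for a fair coin as the fraction of heads
    observed in n_tosses simulated tosses."""
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The estimate gets closer to the true value 0.5 as n grows.
rng = random.Random(42)
for n in (10, 100, 1000, 100000):
    print(n, estimate_heads_probability(n, rng))
```

The standard error of the estimate shrinks like 1/sqrt(n), which is why each tenfold increase in tosses buys a noticeably tighter answer.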
A much bigger problem is the repeated-experiment situation. If your experiments use a 5% significance level, then repeating an experiment 100 times means you should expect to see about five significant results purely by chance.

I was once asked why a ten-by-ten correlation matrix always contained a handful of apparently significant correlations. When I explained why this was always the case, I was told that the researcher was going to forget what he had just discovered, and that I was never to repeat it. Yes, measuring lots of things and being surprised at a handful of significant results is an important experimental tool. If repeated attempts at finding something significant were replaced by something more reliable, the number of papers in many subjects would drop to a trickle. This is a prime cause of the irreproducibility of results: a repeat generally finds the same number of significant results, just a different set.
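The ten-by-ten correlation matrix anecdote is easy to reproduce with pure noise. The sketch below (my own simulation, not from the article) correlates ten columns of independent random numbers and counts how many of the 45 distinct pairs clear the two-tailed 5% threshold; the critical value of |r| is derived from the t distribution, with t ≈ 1.984 hard-coded for df = 98 to avoid a scipy dependency.

```python
import numpy as np

def count_significant_correlations(n_vars=10, n_obs=100, rng=None):
    """Correlate n_vars columns of pure noise and count the pairs whose
    sample correlation is 'significant' at the two-tailed 5% level."""
    rng = rng if rng is not None else np.random.default_rng(0)
    data = rng.standard_normal((n_obs, n_vars))
    r = np.corrcoef(data, rowvar=False)
    # Critical |r| for p < 0.05 via r = t / sqrt(df + t^2), df = n_obs - 2.
    # t_crit ≈ 1.984 for df = 98 (hard-coded assumption, not computed).
    t_crit = 1.984
    r_crit = t_crit / np.sqrt(n_obs - 2 + t_crit**2)
    upper = np.triu_indices(n_vars, k=1)  # the 45 distinct pairs
    return int(np.sum(np.abs(r[upper]) > r_crit))

rng = np.random.default_rng(1)
counts = [count_significant_correlations(rng=rng) for _ in range(200)]
print(sum(counts) / len(counts))  # hovers around 45 * 0.05 = 2.25
```

Even though the data contain no real relationships at all, a couple of "significant" correlations turn up on almost every run, which is exactly why the researcher's matrix always had a handful of them.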
Source: iProgrammer