A recent post on the Simply Statistics blog takes on a sort-of-hot topic in statistics: which errors actually matter, and how they are best quantified and reported when you are using statistics to infer something about a population. "Best," in this case, means best at making accurate predictions. The two camps are the Frequentists and the Bayesians. (I gather from reading a bit that the debate had actually settled down until Nate Silver brought it up in his book *The Signal and the Noise*.) Note: the disagreement is about inferential statistics, not descriptive statistics, so don't worry if you are committed to box plots, frequency distributions, and/or the mean and standard deviation; they remain very good for describing data.

The two approaches differ in what you compare your results to. All interpretations are comparisons, implicit or explicit, so what you compare your results to matters. In one camp, you have people comparing their measurements to the null hypothesis, which holds that the variation among the measurements arose from random, natural variation in the measurand (the thing being measured). The other camp includes previous measurements of the measurand in its comparisons. It does this by taking into account what are called "priors."

My "I'm-not-a-statistician-but-I-know-what-I-like" point of view on this is that each approach is good for particular things, which is why both continue to be used. For example, if you cannot find a relevant prior, you can't take one into account, so you have to compare your result to the null hypothesis. As with many decisions, such as choosing an effect size, your educated judgment has to inform the design of your analysis and thus the design of your experiments.
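To make the two comparisons concrete, here is a minimal sketch in Python using a coin-flip example of my own (the numbers and the flat-prior choice are illustrative assumptions, not anything from the post). The frequentist side compares the data to the null hypothesis of a fair coin; the Bayesian side starts from a Beta prior over the heads probability and updates it with the data via the standard conjugate update.

```python
import math

# Hypothetical data: 20 flips, 15 heads.
n, heads = 20, 15

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n flips with heads probability p."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

# Frequentist comparison: an exact two-sided binomial test against the
# null hypothesis p = 0.5 (all variation is random chance). The p-value
# sums the probability of every outcome at least as extreme as ours.
p_value = sum(
    binom_pmf(k, n, 0.5)
    for k in range(n + 1)
    if binom_pmf(k, n, 0.5) <= binom_pmf(heads, n, 0.5)
)

# Bayesian comparison: a Beta(a, b) prior on the heads probability is
# conjugate to the binomial, so the posterior after seeing the data is
# simply Beta(a + heads, b + tails). Beta(1, 1) is a flat prior, i.e.
# the "no relevant previous measurements" case mentioned in the text.
a_prior, b_prior = 1, 1
a_post = a_prior + heads
b_post = b_prior + (n - heads)
posterior_mean = a_post / (a_post + b_post)

print(f"two-sided p-value vs. the null: {p_value:.4f}")
print(f"posterior mean heads probability: {posterior_mean:.4f}")
```

With a genuinely informative prior (say, earlier measurements of the same coin), you would swap in larger `a_prior`/`b_prior` counts and the posterior would be pulled toward those previous results, which is exactly the extra ingredient the Bayesian comparison brings.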
