I’ve recently written about science literacy, a major problem in Western society (at least) and one that has been discussed at length. What has barely been discussed is statistics literacy: in general, the public’s understanding of uncertainty and probability is disturbingly poor.
The classic example is weather. We’ve all complained about the accuracy of weather forecasts, but what many people don’t realise is that the Australian Bureau of Meteorology gives the probability of rainfall along with a range of expected precipitation. The TV weatherman often presents the story without these, leading us all to expect rain whenever they say “Showers”. There is always uncertainty involved in science, but most people don’t seem to understand that.
The recent conviction of six scientists and a government official for failing to adequately assess and communicate the risks of possible damaging earthquakes in the Italian town of L’Aquila is another case in point. The story has sent the scientific community into an uproar, in part because the verdict has been incorrectly reported as convicting the seismologists of failing to predict the earthquake that subsequently occurred. While local officials may have wanted to play down the risks of danger, the seismologists simply stated that they could not be confident there would be an earthquake after a series of tremors in the preceding weeks. The court interpreted that statement, and the subsequent claim from local officials that there would not be an earthquake, as one and the same thing.
The whole sorry affair suggests that while we’ve been worried about science literacy for some time, misunderstandings about uncertainty, probability and statistics are perhaps of far greater concern. Statistics and science are intrinsically linked: you cannot properly understand the latter without understanding the former. I don’t mean we should be teaching everyone statistical tests; I’m talking about understanding the probability and uncertainty attached to any measurement. This is important for more than just science.
Any measurement comes with an uncertainty: you cannot measure anything in the physical world exactly. Uncertainties determine the significance of a result: if your uncertainties are too large, your result and interpretation cannot be very significant. This is where probability comes in. If a result is quoted at 95% significance, it means there is a 5% chance of getting such a result through chance alone – roughly a 1-in-20 chance that the apparent effect isn’t real. That’s still actually pretty high! Scientists often try to achieve greater than 99% significance before claiming a conclusive result (although sadly not always).
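To make that 1-in-20 figure concrete, here’s a quick simulation sketch (the 100-flip coin experiment and the cutoff of 10 are my own illustrative choices, not from any particular study). Each “experiment” flips a perfectly fair coin 100 times; a deviation of 10 or more heads from the expected 50 corresponds to roughly the 95% significance threshold (about 1.96 standard deviations, with sd = 5):

```python
import random

random.seed(42)

def experiment(n_flips=100):
    """Flip a fair coin n_flips times and return the number of heads."""
    return sum(random.random() < 0.5 for _ in range(n_flips))

# At the 95% level, |heads - 50| >= 10 counts as "significant" here.
trials = 20000
false_positives = sum(abs(experiment() - 50) >= 10 for _ in range(trials))

print(f"Fraction of fair coins flagged as biased: {false_positives / trials:.3f}")
# Comes out near 0.05: even with nothing going on,
# about 1 in 20 experiments looks "significant".
```

The point is that a 95%-significant result is not proof – run 20 experiments on pure noise and you should expect one of them to clear the bar.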
Any study or survey – scientific or not – will also have uncertainty associated with it, sometimes referred to as the margin of error. Sample size is a key component of this, but is often overlooked. Political opinion polls are a great example: they’re usually reported without any mention of the margin of error. For the standard sample size of about 1000 people, it’s typically about 3%. That means there’s no statistically significant difference between a 50-50 poll and a 52-48 poll, yet pundits will often claim there is, frequently on the basis of a single poll.
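The 3% figure isn’t arbitrary – it falls straight out of the textbook formula for a 95% confidence interval on a proportion. A quick sketch (the function name and the n=1000, p=0.5 inputs are my own illustrative choices):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion p from a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1000)
print(f"Margin of error for n=1000: +/-{moe * 100:.1f}%")  # about +/-3.1%

# In a 52-48 poll, each candidate's number is only known to within
# about 3 points, so the 4-point "lead" sits inside the noise.
```

Note the square root of n in the denominator: quadrupling the sample size only halves the margin of error, which is why pollsters settle on samples of around 1000.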
Statistics and probability are often boring at school, so most people pay little attention to them. I learned most of the statistics I know as a PhD student, because I had to rather than because I wanted to. But I’m talking in this post about interpreting statistical results, not deriving them.
Understanding how to derive statistics is often seen as hard or boring, and it can be both. Understanding how to interpret statistics, though, is straightforward – and vitally important, in more than just science.