The observation that people hear what they want to hear is not a new one, and yet the extent to which people will go out of their way to ignore or misinterpret evidence can still surprise.
A recent study showed that chocolate helped with weight loss. And to be clear, the experiment as set out did show that, but the trial numbers were so low that the data was scientifically worthless, and the credulous or lazy reporters who lapped up the story either didn’t care to dismantle the claims or didn’t know how. Even a basic grasp of statistics would show that the sample was far too small for any “significant” result to be trusted.
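To see why tiny trials make for cheap headlines, here is a minimal simulation. The group sizes and outcome count below are illustrative assumptions, not the study’s actual design, and the outcomes are simulated as independent for simplicity:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Illustrative assumptions, not the real study's protocol:
N_PER_GROUP = 8         # tiny treatment and control groups
N_OUTCOMES = 18         # weight, cholesterol, sleep quality...
N_EXPERIMENTS = 10_000  # simulated null experiments

hits = 0
for _ in range(N_EXPERIMENTS):
    for _ in range(N_OUTCOMES):
        # Both groups are drawn from the same distribution:
        # by construction, the "treatment" does nothing at all.
        control = rng.normal(0.0, 1.0, N_PER_GROUP)
        treated = rng.normal(0.0, 1.0, N_PER_GROUP)
        _, p = stats.ttest_ind(control, treated)
        if p < 0.05:
            hits += 1  # one publishable-looking result is enough
            break

print(f"Null experiments yielding a headline: {hits / N_EXPERIMENTS:.0%}")
# With 18 independent outcomes, roughly 1 - 0.95**18, i.e. around 60%
# of experiments where nothing happens still produce a p < 0.05 somewhere.
```

The small groups don’t create the false positives by themselves, but they guarantee that any single result is too noisy to trust, and measuring many outcomes makes a chance “finding” close to certain.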
At a glance, the study is a credible honeypot: just enough data to hang a publicity-attracting story on, the international-press equivalent of that iffy report that deflects attention at work.
Phil Factor’s recent keynote for SQL Saturday Exeter (and if you haven’t seen it, you really should) focussed on the errors in understanding that arise when bad data filters into common knowledge and becomes accepted as the unvarnished truth. Equally important, though: even if your data is perfect, it is still quite possible to draw false or misleading results from it, through ignorance or an outright dismissal of what is actually there.
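To make that concrete, here is a small sketch of Simpson’s paradox, where every number is accurate and the pooled summary still points the wrong way. The figures are the classic kidney-stone textbook illustration, not anything from the keynote:

```python
# Simpson's paradox: flawless data, misleading aggregate.
groups = {
    # group name: (treated successes, treated total,
    #              control successes, control total)
    "small stones": (81, 87, 234, 270),
    "large stones": (192, 263, 55, 80),
}

totals = [0, 0, 0, 0]
for name, counts in groups.items():
    ts, tt, cs, ct = counts
    # The treatment wins within every single group...
    print(f"{name}: treatment {ts / tt:.0%} vs control {cs / ct:.0%}")
    totals = [a + b for a, b in zip(totals, counts)]

ts, tt, cs, ct = totals
# ...yet pooled across groups, the comparison flips direction,
# because the severity of cases is distributed unevenly.
print(f"pooled:       treatment {ts / tt:.0%} vs control {cs / ct:.0%}")
```

Ignore the grouping variable and the perfectly clean data tells you the opposite of the truth.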
It’s ugly that simply “having data” grants borrowed authority to anyone careful about how they present their supposed findings. At an individual level, this is a hard problem to fix: statistics, even at a basic level, is far less intuitive than arithmetic. So many jobs require a proper understanding of what (if anything) our data tells us, and yet that understanding is treated as an afterthought. After all, if the person putting together the science reporting in your daily paper isn’t required to know a trend from junk data, why should your colleagues?
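The gap between a trend and junk data is easy to demonstrate. In the sketch below (all numbers arbitrary), a series built from nothing but accumulated noise still yields a confident-looking straight-line fit:

```python
import numpy as np

rng = np.random.default_rng(7)

# A random walk: accumulated noise with, by construction, no trend.
series = np.cumsum(rng.normal(0.0, 1.0, 50))

# Fit a straight line anyway, as a credulous report might.
x = np.arange(len(series))
slope, _ = np.polyfit(x, series, 1)
corr = np.corrcoef(x, series)[0, 1]

print(f"slope = {slope:+.3f}, correlation with time = {corr:+.2f}")
# Random walks routinely correlate strongly with time, so a
# "clear upward trend" can be nothing but noise compounding.
```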
At least, as Steve has pointed out, there are ways to educate.