A few weeks ago I came across a TED talk by Mona Chalabi titled ‘3 ways to spot a bad statistic’.
If you’re interested in science communication at all, I thoroughly recommend you watch it below:
Mona is a data journalist, and she’s seriously talented (after watching this talk I turned into a bit of a fan girl and watched lots more of her work – she also worked on the BBC3 documentary ‘Is Britain Racist?’, which was brilliant too). Moving on from that fan-girl moment: the simple, easy-to-understand way Mona communicates data and statistics shocked me a bit. Her illustrations make things so easy to digest.
After a few weeks of thinking about this data-doodle method of communicating information, I thought I’d have a go myself – the results of a few hours’ doodling and multiple scrappy drafts are below. All the data is from my PhD supervisor’s Cochrane recruitment review – it was originally published in 2010, with an easier-to-digest version published in the BMJ in 2013. We’re currently in the process of updating that review (I’ll be a named author on the update, which is pretty cool), so when the update is out I might have a go at tweaking my doodles to see how the additional data has changed things.
Information framing
Does framing information in different ways have an impact on the number of people who agree to take part in trials? Maybe, but the evidence isn’t great – the single study we have has a very small sample size – though it might be something worth looking into in future research.
Here, all of the information given to trial participants was truthful, just framed in slightly different ways. An example: the piece of information participants needed was that 20 out of every 100 people experience side-effects from the experimental drug and 80 do not. A negatively framed version of that statement would be ‘20 people out of every 100 experience side-effects from the experimental drug’; a positively framed version would be ‘80 people out of every 100 don’t experience any side-effects at all from the experimental drug’. A neutral version would be the original statement, which gives all the information.
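To make that concrete, here’s a minimal sketch in Python of how the same underlying statistic can be rendered in all three framings – the function name and wording are purely illustrative, not anything taken from the study:

```python
# A minimal sketch: three framings of the same truthful statistic.
# The function name and phrasing are illustrative, not from the review.

def frame_side_effects(affected: int, total: int) -> dict:
    """Return negative, positive, and neutral framings of one statistic."""
    unaffected = total - affected
    return {
        "negative": f"{affected} people out of every {total} experience "
                    f"side-effects from the experimental drug",
        "positive": f"{unaffected} people out of every {total} don't experience "
                    f"any side-effects at all from the experimental drug",
        "neutral":  f"{affected} out of every {total} people experience "
                    f"side-effects from the experimental drug and "
                    f"{unaffected} do not",
    }

for style, sentence in frame_side_effects(20, 100).items():
    print(f"{style}: {sentence}")
```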
You might think that positively framing information would increase the number of people who agree to take part in your trial, but that’s not the case here. Neutral framing – i.e. giving the potential participant all of the information required to make their own decision, with no sway either way – actually results in more people saying yes. That said, each framing group only had 30 people in it.
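To get a feel for just how little 30 people per group pins down, here’s a rough Python sketch using a Wilson score interval – note that the acceptance count below is entirely made up for illustration, not a figure from the review:

```python
# A rough sketch of why n = 30 per group is a thin evidence base.
# The acceptance count is hypothetical - NOT a figure from the review.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_interval(18, 30)  # hypothetical: 18 of 30 say yes
print(f"observed 60%, but plausibly anywhere from {lo:.0%} to {hi:.0%}")
# -> roughly 42% to 75%: with groups this small, the true rate is very uncertain
```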
Telephone reminders
When potential participants are sent letters of invitation to trials, a lot of the time those letters end up in the bin, on top of the fridge, or underneath that huge pile of life admin that’s been sitting on the desk for the past few months. If you give those potential participants a call and remind them that you sent a letter, they’re then more likely to take part in the trial. Not rocket science, is it? Nevertheless, we now have some evidence that more people accept when they’ve had a telephone reminder. This is good news: recruitment is difficult, and if a phone call boosts your numbers by even a few per cent, you’ll happily take it.
Placebos versus active comparators
If your trial treatment/drug/surgical technique/whatever is being tested against a placebo, people are less likely to agree to take part. That isn’t a massive shock, but it’s nice to have data to back it up. In general, people would rather there was an active comparator. I guess you can think about it like this: you can bet £5 to get £50 back or lose your money, or you can bet £5 to get £50 back or have your initial stake returned – getting the placebo is viewed as a loss rather than the lack of a gain.
What do you think about this type of science communication? Is it something that you might have a go at with your own research work? I really enjoyed doing it and I think it was a nice way for me to get my head around the data too – something different to work on.