I originally wrote this post as a guest feature on ‘An Anxious Scientist’. The piece was originally published at the beginning of August, and I’ve republished it here with permission from Rebecca who runs An Anxious Scientist. Make sure you take a look at her blog for brilliant posts explaining complex science concepts in engaging ways, showcasing scientists in all fields, and of course some of Rebecca’s own PhD experiences too.
Public engagement with science is not a new concept, but with the rise in social media usage and pressure on scientists to prove the impact of their work, the world of science communication is advancing at a rapid rate. Many early career researchers now contribute to online blogs, Instagram and Twitter profiles with the aim of disseminating their research, breaking down stereotypes, and ultimately getting the public excited about science. The opportunities that science communication opens up for both academics and public audiences are huge. It’s difficult to see a downside; academics work to improve the way they communicate, and the public finds out more about the research that’s going on around them – or, in some cases, with them.
The diversity of fields covered by science communicators is vast; but is there room for everyone?
I’ll say up front that I think good quality science communication from any field of research is a good thing; but as a clinical trials methodologist, clinical trials in the public sphere of scientific knowledge hold a different level of importance for me. That’s not to say that other types of science are not important, just that trials are a topic I really feel the public could benefit from knowing about.
My work focusses on improving the way we do clinical trials – in particular, how we recruit participants into clinical trials in an efficient way. Efficiency here could mean lots of things: cheaper, faster, less burden on patients, less administrative work – I’m interested in making the process better, whatever ‘better’ means.
Each trial has statisticians who process the huge amount of data that comes from trials, but long before results start coming in, these statisticians are charged with the task of calculating how many people need to take part in the trial for the results to be robust. This is important because if a trial recruits too few participants, its results may be unreliable. Estimates currently show that ~45% of trials globally don’t recruit enough people.
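To give a flavour of what that calculation looks like, here’s a minimal sketch using the standard normal-approximation formula for comparing two proportions. The numbers are purely illustrative – they don’t come from any specific trial – and real trial statisticians would use more sophisticated methods tailored to the trial’s design:

```python
import math
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for comparing two proportions.

    p1, p2 -- expected success rates in the control and treatment arms.
    alpha  -- two-sided significance level (chance of a false positive).
    power  -- chance of detecting the difference if it really exists.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for significance
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# e.g. to detect an improvement from a 30% to a 45% response rate:
n = sample_size_per_group(0.30, 0.45)
print(n, "people per arm")
```

The key point is that the required number of participants is fixed by the size of the effect you want to detect – recruit fewer than that, and the trial simply can’t answer its question reliably.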
Clinical trials are the types of studies that we want our healthcare system to be based on. Trials are able to differentiate between an intervention causing an outcome, and an intervention being correlated with an outcome. In simple terms, they can answer questions like ‘does taking a paracetamol get rid of my headache, or would my headache have disappeared without it?’
Understanding the strengths and limitations of trials, and being able to unravel what features differentiate a reliable trial from an unreliable one, would empower the public.
Take the example of the Alzheimer’s drug LMTX that caused these headlines in July 2016:
With those headlines in mind, take a look at these articles that are about that exact same drug, LMTX:
In this case, newspapers with high readership figures and easy access to the public told of a drug that would halt Alzheimer’s disease – and the public could be forgiven for thinking that the problem of Alzheimer’s was now solved. Scientific media, and news outlets with smaller readerships provided a more balanced view of the trial that tested LMTX.
Surely this means newspapers should be reporting better, rather than putting the onus on the public?
News outlets like The Sun, The Daily Mail and The Times are not scientific experts; their reporting on health research could be discussed in another article entirely! What I do think is important is that the public feel equipped to critique these sensationalised pieces in order to get to the root of the story – the facts.
All of the articles state that 891 people were enrolled in the trial; the majority were also taking treatments that have already been approved to help relieve Alzheimer’s symptoms. 15% (144) of the 891 people were only taking the trial drug (LMTX), or a placebo. It was in this group that the researchers noticed a difference. All of the articles provide that information – it’s the headline that is swaying the public’s thoughts on the results.
Given what I mentioned earlier about the importance of recruiting the correct number of participants, the results of this work are immediately thrown into doubt. If the trial’s statisticians calculated that 891 people were needed to detect a clinically meaningful difference between patients taking the experimental drug and those taking other drugs, then what weight can we give a difference found in a subgroup of just 144 patients? Put bluntly, very little. These trial results do not offer a definitive answer to the question of whether LMTX could prevent cognitive decline in Alzheimer’s patients.
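The same normal-approximation formula can sketch why a subgroup this small carries so little weight. The numbers below are hypothetical – they are not LMTX’s actual design parameters – but they illustrate the general pattern: an effect that a full-size trial can detect with ~80% power may have only ~20% power in a subgroup a sixth of the size:

```python
from statistics import NormalDist

def achieved_power(n_per_arm, p1, p2, alpha=0.05):
    """Approximate power of a two-proportion comparison at a given arm size."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    # Standardised effect size grows with the square root of the sample size
    effect = abs(p1 - p2) * (n_per_arm / variance) ** 0.5
    return NormalDist().cdf(effect - z_alpha)

# Hypothetical design: ~434 people per arm to detect a 30% vs 39% difference
print(achieved_power(434, 0.30, 0.39))  # around 0.80 (the planned power)
print(achieved_power(72, 0.30, 0.39))   # around 0.21 in the small subgroup
```

A positive-looking result at ~20% power is far more likely to be noise than a result from a fully powered comparison – which is exactly why the subgroup finding can’t carry the headline it was given.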
As we can’t control what headlines are plastered over the front page, it’s important that we empower, educate, and answer questions from the public about trials so that they can make these judgements themselves.
So, what’s the solution? Whilst the science communication world advances, I feel we are focussing too much on the discoveries themselves rather than the methods we use to make them. A level of transparency and openness about the flaws in scientific methods would go further to empower the public. It would begin to break down the barriers that years of science have built between scientists and the public – science may have the answers, but we need to be open and honest about the methods we use to get those answers.
If you’re a science communicator, why not challenge yourself to explain the limitations of your work, rather than simply its strengths?