Another late post, but I’m super excited for this one.
Today I spoke to the wonderful Mighty Casey Quinlan – she’s a hugely engaging podcaster and comedian who interviewed me in the most warm and enthusiastic way. We talked all things trials, public engagement, and patient involvement.
Popular science books are one of my favourite things to read – probably because they give me a break from research papers, which are often dry and difficult to follow, while still giving me a feeling of productivity. This genre – scientific research presented in an easily readable form that allows non-scientists to get to grips with complex concepts – is growing, and rightly so. These books can leave patients feeling more informed, and therefore empowered to think critically about the types of treatment they receive. So, in no particular order – here are the top 5 books I think everyone should read.
I Think You’ll Find it’s a Bit More Complicated Than That by Ben Goldacre
What’s it about? Ben Goldacre’s Bad Science was the first popular science book I ever read. It was a great introduction to critical thinking, and a book that I think should be given to every school-age child. This more recent book is a bit different in that it’s a collection of Goldacre’s published journalism. For those of us who didn’t follow his various columns and articles, this is such a good book! It covers a huge variety of topics, and as it was all originally written for public audiences (i.e. not scientists), it makes even the most complicated concepts super easy to understand.
Trick or Treatment: Alternative Medicine on Trial by Simon Singh and Edzard Ernst
What’s it about? This book was on the reading list for one of my first year undergraduate courses, and as someone who had never used or believed in the routine use of alternative medicine, I didn’t really see the need to read it. I soldiered on regardless, and actually really enjoyed this book. I had never previously thought of alternative medicines as causing harm – I just thought they were largely ineffective and therefore pretty pointless. The pairing of Singh and Ernst is a really good one: easy to follow, really well written and packed with evidence for and against the use of alternative therapies. I wish more people would read this.
Testing Treatments: Better Research for Better Healthcare by Imogen Evans, Hazel Thornton, Iain Chalmers and Paul Glasziou
What’s it about? My PhD Supervisor gave me this book on my first day – I think as a more easily digestible read than the pile of research papers that came along with it. This book is brilliant. It covers a diverse range of treatments and examples, from mastectomy to thalidomide, and explores the prospect that even though randomised controlled trials are the so-called ‘gold standard’, they can still be done badly. I think this is a really good resource for researchers to look to when communicating their work, but it’s perhaps even more important for patients to read it in order to make informed decisions about their own treatment.
The Patient Paradox: Why Sexed Up Medicine is Bad for Your Health by Margaret McCartney
What’s it about? It took me a while to get through this, not because it was a difficult or dry read, but because I found myself getting really frustrated each time I read it. I knew that the way our health services select which treatment to fund or which screening test to implement was not perfect, but this provided me with an overwhelming volume of evidence to suggest the problem was even worse than I thought. In very basic terms, too much testing of well people and not enough care for the sick worsens health inequalities and drains professionalism.
Being Mortal: Illness, Medicine and What Matters in the End by Atul Gawande
What’s it about? This book tackles the thing that we never want to think about: no matter how good medicine is, we will all die at some point. Gawande talks about this in a really positive way; he provides insight and research into the use of medicine not only to improve quality of life, but to improve quality of death too. He offers examples of freer, more socially fulfilling models for assisting the infirm and dependent elderly, and he explores the varieties of hospice care to demonstrate that a person’s last weeks or months may be rich and dignified.
I wrote this article along with my PhD Supervisors, Prof Shaun Treweek and Dr Katie Gillies at the University of Aberdeen. We originally published this work on The Conversation in October 2016, and I’ve republished it here under a Creative Commons 4.0 licence as I think it gives a good background to the topics and issues that my PhD is based on.
Clinical trials have been the gold standard of scientific testing ever since the Scottish naval surgeon Dr James Lind conducted the first while trying to conquer scurvy in 1747. They attract tens of billions of dollars of annual investment and researchers have published almost a million trials to date according to the most complete register, with 25,000 more each year.
Clinical trials break down into two categories: trials to ensure a treatment is fit for human use and trials to compare different existing treatments to find the most effective. The first category is funded by medical companies and mainly happens in private laboratories.
The second category is at least as important, routinely informing decisions by governments, healthcare providers and patients everywhere. It tends to take place in universities. The outlay is smaller, but hardly pocket change. For example, the National Institute of Health Research, which coordinates and funds NHS research in England, spent £74m on trials in 2014/15 alone.
Yet there is a big problem with these publicly funded trials that few will be aware of: a substantial number, perhaps almost half, produce results that are statistically uncertain. If that sounds shocking, it should do. A large amount of information about the effectiveness of treatments could be incorrect. How can this be right and what are we doing about it?
The participation problem
Clinical trials examine the effects of a drug or treatment on a suitable sample of people over an appropriate time. These effects are compared with a second set of people – the “control group” – which thinks it is receiving the same treatment but is usually taking a placebo or alternative treatment. Participants are assigned to groups at random, hence we talk about randomised controlled trials.
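To make the allocation step concrete, here’s a minimal sketch of simple 1:1 randomisation in Python. This is purely illustrative – real trials use concealed, often stratified or blocked randomisation managed by a trials unit, not a script like this – and the function name and structure are my own invention.

```python
import random

def randomise(participants, seed=None):
    """Illustrative 1:1 random allocation to two trial arms.

    Shuffles the participant list and splits it in half, so each
    person has an equal chance of ending up in either group.
    """
    rng = random.Random(seed)  # seed only so the example is repeatable
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# e.g. randomise(["P001", "P002", "P003", "P004"]) gives two arms of two
```

The point is simply that chance, not the researcher or the participant, decides who gets which treatment – which is what lets us attribute any difference in outcomes to the treatment itself.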
If there are too few participants in a trial, researchers may not be able to declare a result with certainty even if a difference is detected. Before a trial begins, it is their job to calculate the appropriate sample size using data on the minimum clinically important difference and the variation in the outcome being measured in the population being studied. They publish this along with the trial results so that other statisticians can check their calculations.
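For readers curious what that calculation looks like, here’s a sketch of the standard normal-approximation formula for a two-arm trial comparing means. The function name and default values (5% significance, 80% power – common conventions, not requirements) are mine; real trials often adjust for expected dropout and use more sophisticated methods.

```python
import math
from statistics import NormalDist

def sample_size_per_group(mcid, sd, alpha=0.05, power=0.80):
    """Approximate participants needed per arm for a two-arm trial.

    mcid:  minimum clinically important difference between groups
    sd:    standard deviation of the outcome in the study population
    Uses n = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / mcid^2.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = z.inv_cdf(power)           # desired power
    n = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / mcid ** 2
    return math.ceil(n)

# To detect a difference of 5 units when the outcome's SD is 10,
# at 5% significance and 80% power:
sample_size_per_group(5, 10)  # 63 per group, 126 in total
```

Notice how sensitive the number is: halving the difference you want to detect quadruples the required sample, which is exactly why under-recruitment so easily leaves a trial unable to give a clear answer.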
Early-stage trials have fewer recruitment problems. Very early studies involve animals and later stages pay people well to take part and don’t need large numbers. For trials into the effectiveness of treatments, it’s more difficult both to recruit and retain people. You need many more of them and they usually have to commit to longer periods. It would be a bad use of public money to pay so many people large sums, not to mention the ethical questions around coercion.
To give one example, the Add-Aspirin trial was launched earlier this year in the UK to investigate whether aspirin can stop certain common cancers from returning after treatment. It is seeking 11,000 patients from the UK and India. Supposing it only recruits 8,000, the findings might end up being wrong. The trouble is that some of these studies are still treated as definitive despite there being too few participants to be that certain.
One large study looked at trials between 1994 and 2002 funded by two of the UK’s largest funding bodies and found that fewer than a third (31%) recruited the numbers they were seeking. Slightly over half (53%) were given an extension of time or money, but even then 80% of these never hit their target. In a follow-up of the same two funders’ activities between 2002 and 2008, 55% of the trials recruited to target. The remainder were given extensions, but recruitment remained inadequate for about half.
The improvement between these studies is probably due to the UK’s Clinical Trials Units and research networks, which were introduced to improve overall trial quality by providing expertise. Even so, almost half of UK trials still appear to struggle with recruitment. Worse, the UK is a world leader in trial expertise. Elsewhere the chances of finding trial teams not following best practice are much higher.
The way forward
There is remarkably little evidence about how to do recruitment well. The only practical intervention with compelling evidence of benefit is from a forthcoming paper showing that telephoning people who don’t respond to postal invitations leads to about a 6% increase in recruitment.
A couple of other interventions work but have substantial downsides, such as letting recruits know whether they’re in the control group or the main test group. Since this means dispensing with the whole idea of blind testing, a cornerstone of most clinical trials, it is arguably not worth it.
Many researchers believe the solution is to embed recruitment studies into trials to improve how we identify, approach and discuss participation with people. But with funding bodies already stretched, they focus on funding projects whose results could quickly be integrated into clinical care. Studying recruitment methodology may have huge potential but is one step removed from clinical care, so doesn’t fall into that category.
Others are working on projects to share evidence about how to recruit more effectively with trial teams more widely. For example, we are working with colleagues in Ireland and elsewhere to link research into what causes recruitment problems to new interventions designed to help.
Meanwhile, a team at the University of Bristol has developed an approach that turned recruitment completely around in some trials by basically talking to research teams to figure out potential problems. This is extremely promising but would require a sea change in researcher practice to improve results across the board.
And here we hit the underlying problem: solving recruitment doesn’t seem to be a high priority in policy terms. The UK is at the vanguard but it is slow progress. We would probably do more to improve health by funding no new treatment evaluations for a year and putting all the funding into methods research instead. Until we get to grips with this problem, we can’t be confident about much of the data that researchers are giving us. The sooner that moves to the top of the agenda, the better.