Healthcare’s Dirty Little Secret: Results From Many Clinical Trials Remain Unreliable

I wrote this article along with my PhD Supervisors, Prof Shaun Treweek and Dr Katie Gillies at the University of Aberdeen. We originally published this work on The Conversation in October 2016, and I’ve republished it here under Creative Commons licence 4.0 as I think it gives a good background to the topics and issues that my PhD is based on.


Clinical trials have been the gold standard of scientific testing ever since the Scottish naval surgeon Dr James Lind conducted the first while trying to conquer scurvy in 1747. They attract tens of billions of dollars of annual investment and researchers have published almost a million trials to date according to the most complete register, with 25,000 more each year.

Clinical trials break down into two categories: trials to ensure a treatment is fit for human use and trials to compare different existing treatments to find the most effective. The first category is funded by medical companies and mainly happens in private laboratories.

The second category is at least as important, routinely informing decisions by governments, healthcare providers and patients everywhere. It tends to take place in universities. The outlay is smaller, but hardly pocket change. For example, the National Institute for Health Research, which coordinates and funds NHS research in England, spent £74m on trials in 2014/15 alone.

Yet there is a big problem with these publicly funded trials that few will be aware of: a substantial number, perhaps almost half, produce results that are statistically uncertain. If that sounds shocking, it should do. A large amount of information about the effectiveness of treatments could be incorrect. How can this be right and what are we doing about it?

The participation problem

Clinical trials examine the effects of a drug or treatment on a suitable sample of people over an appropriate time. These effects are compared with a second set of people – the “control group” – which thinks it is receiving the same treatment but is usually taking a placebo or alternative treatment. Participants are assigned to groups at random, hence we talk about randomised controlled trials.
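As a rough illustration of the allocation step (real trials use more sophisticated schemes, such as stratified or blocked randomisation managed by a trials unit), simple 1:1 random assignment can be sketched in a few lines of Python. The participant IDs here are invented for the example:

```python
import random

def randomise(participants, seed=None):
    """Randomly allocate participants to treatment or control in a 1:1 ratio."""
    rng = random.Random(seed)          # fixed seed makes the allocation reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "treatment": shuffled[:midpoint],
        "control": shuffled[midpoint:],
    }

groups = randomise(["P01", "P02", "P03", "P04", "P05", "P06"], seed=42)
print(groups["treatment"], groups["control"])
```

Because allocation is random, neither researchers nor participants choose who gets which treatment, which is what protects the comparison from selection bias.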

If there are too few participants in a trial, researchers may not be able to declare a result with confidence even if a difference is observed. Before a trial begins, it is their job to calculate the appropriate sample size, using data on the minimum clinically important difference and the variability of the outcome being measured in the study population. They publish this calculation alongside the trial results so that other statisticians can check it.
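To make this concrete, the standard normal-approximation formula for a two-arm trial comparing means can be sketched as below. This is a generic textbook calculation, not the method of any particular trial, and the numbers in the example (a 5-point minimum clinically important difference on a scale with standard deviation 12) are invented:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(mcid, sd, alpha=0.05, power=0.9):
    """Approximate participants needed per group to detect a difference
    of `mcid` between two means, given outcome standard deviation `sd`,
    at significance level `alpha` with the stated power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) * sd / mcid) ** 2)

# Detecting a 5-point difference on a scale with SD 12:
n = sample_size_per_group(mcid=5, sd=12)
print(n)  # 122 per group, before allowing for dropout
```

Note how sensitive the answer is to the inputs: halving the detectable difference roughly quadruples the required sample, which is why under-recruitment so easily leaves a trial inconclusive.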

Early-stage trials have fewer recruitment problems. Very early studies involve animals and later stages pay people well to take part and don’t need large numbers. For trials into the effectiveness of treatments, it’s more difficult both to recruit and retain people. You need many more of them and they usually have to commit to longer periods. It would be a bad use of public money to pay so many people large sums, not to mention the ethical questions around coercion.

To give one example, the Add-Aspirin trial was launched earlier this year in the UK to investigate whether aspirin can stop certain common cancers from returning after treatment. It is seeking 11,000 patients from the UK and India. Suppose it recruits only 8,000: the findings could end up statistically inconclusive. The trouble is that some such studies are still treated as definitive despite having too few participants to support that certainty.

Image credit: wavebreakmedia

One large study looked at trials funded between 1994 and 2002 by two of the UK’s largest funding bodies and found that fewer than a third (31%) recruited the numbers they were seeking. Slightly over half (53%) were given an extension of time or money, but 80% of these still never hit their target. In a follow-up of the same two funders’ activities between 2002 and 2008, 55% of trials recruited to target. The remainder were given extensions, but recruitment remained inadequate for about half of them.

The improvement between these studies is probably due to the UK’s Clinical Trials Units and research networks, which were introduced to improve overall trial quality by providing expertise. Even so, almost half of UK trials still appear to struggle with recruitment. Worse, this is in the UK, a world leader in trials expertise; elsewhere the chances of finding trial teams not following best practice are much higher.

The way forward

There is remarkably little evidence about how to do recruitment well. The only practical intervention with compelling evidence of benefit comes from a forthcoming paper showing that telephoning people who don’t respond to postal invitations leads to about a 6% increase in recruitment.

A couple of other interventions work but have substantial downsides, such as letting recruits know whether they’re in the control group or the main test group. Since this means dispensing with the whole idea of blind testing, a cornerstone of most clinical trials, it is arguably not worth it.

Many researchers believe the solution is to embed recruitment studies into trials to improve how we identify, approach and discuss participation with people. But with funding bodies already stretched, they focus on funding projects whose results could quickly be integrated into clinical care. Studying recruitment methodology may have huge potential but is one step removed from clinical care, so doesn’t fall into that category.

Others are working on projects to share evidence about how to recruit more effectively with trial teams more widely. For example, we are working with colleagues in Ireland and elsewhere to link research into what causes recruitment problems to new interventions designed to help.

Meanwhile, a team at the University of Bristol has developed an approach that has turned recruitment around completely in some trials, essentially by talking to research teams to identify potential problems. This is extremely promising, but it would require a sea change in researcher practice to improve results across the board.

And here we hit the underlying problem: solving recruitment doesn’t seem to be a high priority in policy terms. The UK is at the vanguard but it is slow progress. We would probably do more to improve health by funding no new treatment evaluations for a year and putting all the funding into methods research instead. Until we get to grips with this problem, we can’t be confident about much of the data that researchers are giving us. The sooner that moves to the top of the agenda, the better.

Hello 2017: Setting New Goals

2016 was a weird one; personally it was a bit of a car crash, but career-wise I’d deem it a success. I like the process of turning over a new leaf and reflecting on the past year – not necessarily with the whole ‘new year, new me’ in mind, but I do think it’s a good excuse to take a look at recent successes and lessons to learn for the year ahead. Time is also ticking with regards to the PhD, so it seems as good a time as any to get back into work recharged and armed with new goals.

Begin piecing together the thesis
Throughout the first year of the PhD I wrote frequently; I wrote a full ‘PhD protocol’, safe in the knowledge that it would never be published, purely so that I had the timelines and tasks ahead of me worked out early in the process. Into the second year I began abstract and full-text screening for my systematic review, moving on to data extraction over the summer of last year. Other projects then started up and required things other than writing, so it’s time for me to get back to writing more often. Whether the words I write end up in the thesis is not important; writing will help me to focus the project, and ultimately the thesis, later on.
I’ve lost count of the number of times people have told me to ‘start writing early’ or ‘don’t leave it to the last minute’. I have started, and there are still sections that I can be getting on with at the moment.

Read more widely, and more frequently
As I mentioned earlier, I’ve been sidetracked by data extraction and other projects, and my reading has definitely slipped. I need to get back into the literature, and while scrolling through Twitter last week I saw someone using #365papers. #365papers is a project that I think was started by Jacqueline Gill and Meghan Duffy as a new year’s resolution for 2016. In basic terms, it involves reading a paper every day (on average) for a year. I’m going to give this a go, and I’ll be blogging about the project periodically throughout the year too. Stay tuned for updates, and wish me luck!

Seek out opportunities to publish

I’ve spoken to lots of people recently – both academics and people outside of academia – about the need to publish. Is it better to publish fewer, more focused papers, or more papers covering a broader range of topics? Every academic went with the latter. As an early career researcher I need to be publishing regularly, and the range of topics those papers include doesn’t seem to matter too much. I published my first PhD-related paper in 2016 (you can read it here), and I have a number of papers that should be published late in 2017 and into 2018. It’s time to actively look for more opportunities to publish though. I want to come out of this PhD feeling confident that I can go into a career in Health Services Research; applying for post-docs and fellowships with a decent list of publications behind me can only be a good thing.

Here’s to a happy, healthy and productive 2017!