Are Clinical Trials a Waste of Time?

I wrote this article for the 23rd issue of Lateral Magazine. The piece was originally published at the beginning of the month, and I’ve republished it here under a Creative Commons 4.0 licence. Hope you enjoy!


Changing how clinical trials are designed and reported could save billions of dollars.

Every year, we spend $200 billion globally on health and medical research, more than the annual GDP of New Zealand. Yet up to 85% of this money is wasted on research that asks the wrong questions, is badly designed, goes unpublished, or is poorly reported. In addition, a 2005 study by John Ioannidis showed that claimed research findings are more likely to be false than true – that is, they will be proven incorrect when better quality research is conducted further down the line.

So is clinical research a waste of time, and therefore money? As a researcher myself, I’m inclined, as you might expect, to say no. Let me explain why clinical trials are so expensive, and how we can make these expenses count.

Clinical trials are a necessary step for approving new medical treatments. Hal Gatewood/Unsplash (CC0 1.0)

Clinical trials are affectionately termed the ‘gold standard’ method of evaluation in a healthcare setting, and have been necessary for the marketing approval of everything from the paracetamol you take to ease your hangover to treatments for cancer and Alzheimer’s disease. But they also require a huge amount of resources. Trials can take years to complete and often involve thousands of people from various countries to ensure that research questions are answered satisfactorily.

At the core of high-quality medical research are randomised controlled trials. In these trials, participants are randomly allocated to one of two or more treatment groups (referred to as arms). Most people think of trials involving drugs, but interventions might also include surgical procedures, medical devices, and lifestyle interventions such as exercise or diet modification. Randomising participants helps ensure that potential confounding factors, such as sex, age, or educational status, are distributed evenly across the treatment groups, effectively negating the influence these outside factors might otherwise have.

Randomised trials must also be ‘controlled’; that is, one of the treatment arms acts as a control group against which the other treatments are compared. In most cases, this control group will be given the standard treatment option for their condition or disease. This allows us to see whether the new treatment we’re testing is better than what is already available to patients.
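To make the allocation idea concrete, here’s a minimal sketch in Python of permuted-block randomisation, one common way of assigning participants to arms while keeping group sizes balanced as recruitment goes on. The function name, arm labels and block size are illustrative, not taken from any particular trial.

```python
import random

def block_randomise(n_participants, arms=("new treatment", "standard care"), block_size=4):
    """Permuted-block randomisation: allocations are shuffled within
    equal-sized blocks, so arm sizes stay balanced throughout recruitment."""
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    schedule = []
    while len(schedule) < n_participants:
        block = list(arms) * (block_size // len(arms))
        random.shuffle(block)  # chance alone decides the order within each block
        schedule.extend(block)
    return schedule[:n_participants]

# Allocate ten participants across the two arms.
print(block_randomise(10))
```

In a real trial the schedule would be generated and concealed by someone independent of recruitment, so that nobody approaching patients can predict the next allocation.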

Clinical trials are the ‘gold standard’ of evaluating healthcare outcomes. Sanofi Pasteur/Flickr (CC BY-NC-ND 2.0)

In a recent study, researchers looked at trials funded by Australia’s National Health and Medical Research Council between 2008 and 2010. These 77 studies required a total of A$59 million in public funding. Most people would consider this an acceptable price to pay for improved survival rates, but what if most of that A$59 million was wasted due to correctable problems?

The estimate that 85% of all health research is being avoidably ‘wasted’ is shocking. As an optimist, I look to the ‘avoidably’ part of that sentence; we have a lot of work to do, but it’s all work ready to be done, rather than issues we hope to solve at some point in the distant future.

The problem of research waste has been a central focus of the health services research and evidence-based medicine communities since the publication of Ioannidis’ paper “Why most published research findings are false”, and there is a clear push to prevent research being wasted.

As a PhD student in the Health Services Research Unit at the University of Aberdeen, I am working to improve the efficiency of trials. There is a bizarre contradiction in the trials world; we do trials to generate good quality evidence, but the way we carry out certain aspects of trials is not remotely evidence-based.

Here’s an example. Recruiting participants for trials is a notoriously difficult process that wastes time, effort and money, but there is limited evidence that the methods we currently use to improve recruitment are particularly effective. For example, many trial teams approach patients via existing healthcare infrastructure, but these systems are already overstretched without the addition of research tasks, and there may be better ways to find patients that don’t involve physicians at all. If recruitment falls short of the trial’s target, the results of the trial as a whole can be at risk.

Many countries have introduced publicly accessible websites that allow people to search for trials currently in the process of recruiting. Patients can find trials that are relevant to their disease state, meaning the healthcare system does not need to be directly involved with recruitment. As yet we don’t have evidence to support or refute the effectiveness of these websites, so they are often used in conjunction with other recruitment strategies.

Finding suitable subjects for clinical trials is an inefficient process, but there are avenues for improvement. Queen’s University/Flickr (CC BY-NC-ND 2.0)

Other research groups are working to alleviate research waste by tackling poor reporting of experimental methods. “Most of us have probably tried to recreate a meal we enjoyed in a restaurant,” wrote epidemiologist Tammy Hoffmann in a recent article. “But would you attempt it without a recipe? And if you have to guess most of the ingredients, how confident would you be about the end result?”

It makes sense; for health research to be picked up and implemented in a clinical setting, we need to give clinicians the full recipe. Interventions used in trials might involve drugs or non-drug treatments like exercise, psychosocial or dietary advice, and giving partial details is a sure-fire way to ensure research doesn’t make its way to patients. Crucial details, such as the materials needed to carry out interventions, are lacking in up to 60% of trials of non-drug interventions, and the problem occurs in drug studies, too. These findings come from published trial reports and don’t discriminate between publicly funded and industry-funded trials; full recipes are lacking across both of these research areas.

Research is an imperfect process, and with research funds getting increasingly scarce worldwide, it’s important that we make a concerted effort to reduce the intrinsic inefficiency of trials. At the very minimum, we must work to ensure trial results are published in a timely manner.

On a wider, and perhaps more optimistic, scale, it’s clear that researchers need to take responsibility for disseminating the results of the projects they are involved in. It’s no longer acceptable for results to be presented only at specialist conferences that few clinicians are privy to. Trials are conducted with the explicit aim of improving human health, and it’s down to researchers to ensure results are circulated, and to the public to hold researchers accountable.

Edited by Andrew Katsis and Sara Paradowski

When Was the First Clinical Trial?

As you’ve probably (hopefully!) picked up from other posts on this blog, my research is centred around clinical trials and their methodology. Trials can be intimidating for people who don’t know a whole lot about them, and as I’ve mentioned before, the ‘guinea pig’ concept seems to haunt trial participation.

In this series of posts I want to answer any questions people have – from the basic to the obscure and everything in between – and demystify clinical trials. I asked a few friends who don’t work in a trials environment what they don’t know about trials, and the obvious starting point was ‘when was the first clinical trial?’, so here we are. Read on to find out when and how the first clinical trial came about.

Some sources say the first clinical trial was conducted in 605–562 BC, as outlined in the Old Testament’s Book of Daniel. Put simply, King Nebuchadnezzar II ordered the children of royal blood to eat only meat and wine for 10 days. Daniel asked that he and three other children be allowed to eat only vegetables, bread and water. After the 10 days were over, Daniel and the three children were noticeably healthier than the children who had eaten only meat and wine. Whilst this is clearly research (though, as Ben Goldacre points out, probably underpowered research), the groups were not controlled. It was probably one of the first times in the evolution of the human species that an open, uncontrolled human experiment guided a decision about public health.

James Lind is credited with the first controlled clinical trial; ‘controlled’ meaning that his study included a comparison, or control, group – one that receives a placebo, another treatment or no treatment at all. Lind, a Scottish naval surgeon, conducted his trial on 20th May 1747 on a group of sailors suffering from scurvy.

He included 6 pairs of sailors in his trial, placed them all on the same diet, and then gave each pair an additional intervention:

  • one pair had a quart of cider each day;
  • one pair took 25 drops of elixir of vitriol (sulphuric acid) three times a day;
  • one pair had 2 spoonfuls of vinegar three times a day;
  • one pair were put on what Lind describes as a ‘course of sea-water’;
  • one pair had 2 oranges and 1 lemon given to them each day;
  • and one pair had what’s described as a ‘bigness of a nutmeg’ three times a day.

I know which of the treatments I would have preferred at the time (i.e. not a course of seawater!).

At the end of day 6 of Lind’s trial, the pair that had eaten 2 oranges and 1 lemon each day were fit for duty and taking care of the other 5 pairs of sailors. Lind notes in his book ‘Treatise on Scurvy’ (published in Edinburgh in 1753) that, after the citrus fruits, he thought the cider had the best effects.

We now know scurvy is caused by a deficiency in vitamin C, which is why fruits rich in vitamin C had his sailors fighting fit again after just 6 days.

Clinical trials like James Lind’s are what we base our current practice on. Over the years since Lind found the cure for scurvy, huge advances have been made in the methodology of trials; we now have placebos, use randomisation, adhere to various codes of conduct, and work with huge groups of patients and teams of research staff across the world in an effort to answer clinical questions.

This is the first post in a series I’m calling ‘Clinical Trials Q&A’. If you have any questions about clinical trials – what they are, why we do them, what their limitations are, and so on – please pop them in a comment or tweet me @heidirgardner and I’ll be sure to answer them in upcoming blog posts.

‘I Just Don’t Want to be a Human Guinea Pig’ – Why Taking Part in Trials Isn’t What You Think

When I tell people that I’m doing a PhD in clinical trials methodology I’m usually greeted with one of two responses: ‘Oh right, so you’re still a student?’ or ‘Oh my god, trials? To test drugs and stuff?’. The ‘test drugs and stuff’ response isn’t usually framed in a positive light either; eventually these conversations result in mumbles of ‘human guinea pig’. So, as today is International Clinical Trials Day, I thought I’d take some time to write about why taking part in trials does not make you a human guinea pig.

Public perception of research, and a few figures

  • Estimates of the percentage of people who think it’s important for the NHS to participate in research vary, but are largely very high – a 2012 poll (OnePoll) gave a figure of 87%, and a similar poll conducted by Ipsos MORI gave a figure of 97%
  • Only 7% of people said they’d never take part in research

So why am I having so many encounters with people using the words ‘human guinea pig’ when these positive results suggest the public are in support of research?

I think it’s the perceived risk of taking part in a trial. The word ‘trial’ doesn’t exactly reassure you that you’re signing up for something that’s unlikely to harm you – it raises questions, reinforces uncertainty, and screams ‘risk’.

This is somewhat true; we do trials because we don’t know the answer. Which treatment is ‘better’? Which is most cost-effective? Which will improve quality of life rather than simply length of life? Should we avoid surgery and just go for medical management? Is the short-term stint in hospital required for surgery better than a long-term physiotherapy plan? These are questions that staff working in our NHS have to ask themselves each day, whenever they see a patient. If there is no evidence for them to base their decisions on, then we really should be working to answer that question in an ethical and efficient way.

Without evidence, people are exposed to harm, and the NHS is not providing the best possible care to the public. Without trials, the world of health and social care would never, ever progress.

Clinical trials aren’t always about testing new drugs

The trials unit at Aberdeen University, CHaRT (Centre for Healthcare Randomised Trials), specialises in pragmatic trials of discrete non-drug technologies or complex interventions. What does that mean? In short, usually not drugs, and hardly ever new drugs. Non-drug technologies are just that: a type of scan, a device, a diagnostic tool – the list goes on. Complex interventions are a bit trickier; the Medical Research Council defines these as interventions with several interacting components. Again the variety in this category is huge; it could be an abdominal massage like in Glasgow Caledonian’s AMBER trial, cognitive behavioural therapy, or interventions aimed at groups or communities of people. Basically, complex interventions are just that: complex. They’re more difficult to evaluate but very useful nevertheless.

So, if you take part in a clinical trial you will not necessarily be taking a new drug. There are lots of trials, particularly publicly funded trials, that aim to find out which of two existing interventions is most useful. By existing interventions, I mean things that are already being used in standard care; we just aren’t sure which one works best. An example would be CHaRT’s C-GALL trial, which aims to find the most effective treatment for gallstones: is it better to remove the gallbladder altogether, or to go down the route of medical management? Both of these approaches are used in the NHS today, and we genuinely don’t know which is best.

What have trials done for us?

Trials are absolutely central to our healthcare system; they impact each of us all the time, often without us even realising.

On a personal note, someone close to me took part in a clinical trial a few years ago. The intervention proved to be successful and they’re still regularly receiving that treatment for free because they took part in the trial for it. That’s life-changing not only for the person taking part in the trial, but their family and friends too.

Taking a step back: Cassandra Jardine was a journalist for the Telegraph who died in 2012 from lung cancer. After her diagnosis she wrote extensively about her illness, winning the Lung Cancer Journalism Award in 2011. She took part in a trial of a lung cancer vaccine that aimed to extend her life; she knew her illness would kill her, but she wanted to do something good to contribute to the advancement of medicine, and to see if she could hang on for an additional few months. I’d really recommend you read her piece here.

Eventually she came to the conclusion that she was in the placebo group (a trial design she describes in her article as ‘extremely rare’), but despite that, she did benefit from the trial. She said:

‘Most persuasive of all is the evidence that patients on clinical trials do better than the norm because they are monitored more closely. Instead of quarterly X-rays, I have CT scans and monthly blood tests.’

Whether or not trials provide a direct benefit to you or a loved one, you’ll still be benefiting indirectly.

Trials have influenced clinical practice: beta blockers and aspirin following acute myocardial infarction, calcium antagonists following non-Q-wave myocardial infarction, aspirin and heparin following unstable angina, hypertension control and lipid lowering to reduce coronary heart disease mortality… the list goes on.

Our National Health Service is admired by people around the world, and rightly so – we build in the need to evaluate interventions, we allocate public money to funding these trials, and then we change practice to ensure more people have the chance of benefiting, or fewer people are exposed to harm. If you are ever approached about taking part in a clinical trial, I urge you to give that researcher a chance. Let them talk you through the trial, weigh up the potential risks and benefits, and make an informed decision based on your own circumstances and feelings.

Take a look at #ICTD2017 and #WhyWeDoResearch to find out more about trials, taking part in research, and why research is so important.

4th International Clinical Trials Methodology Conference and 38th Annual Meeting of the Society for Clinical Trials – Liverpool, May 2017

This week I left the grey skies of Aberdeen in favour of… Liverpool. Nowhere hugely exotic, but the weather was absolutely beautiful for the 3 days I was there.
Anyway, more about why I was there. Sunday to Wednesday saw almost 1,000 delegates congregate in Liverpool for the joint 4th International Clinical Trials Methodology Conference (ICTMC) and 38th Annual Meeting of the Society for Clinical Trials (SCT). Two and a half days of people interested in trials, tackling subjects like data-sharing, registry-based trials, recruitment and retention, patient and public involvement with research, qualitative research, funding, publishing, and a tonne of others besides.

I’ve been to one ICTMC before, in Glasgow in 2015, but this was a much bigger version because it was joint with the American SCT annual meeting. The days were jam packed and I came home with a notebook full of ideas. Really I think it’ll take a few weeks for me to process everything properly and start to formulate my own ideas for future research based on the priorities demonstrated at the conference.

Anyway, just a short blog post from me this week – with 3 days out of the office I’m a bit behind on my to do list! As with the SWAT workshop that we had in Aberdeen in March, I’ve consolidated the majority of my notes into a sort of mind map/cartoony page of doodles. I find that this really helps me to get to grips with what’s been talked about, and ensures that I don’t leave all my notes held captive in a notebook at the back of my desk drawers.

I shared these on Twitter earlier in the week and got a really positive response, so I thought I’d upload them here too.

Studies Within A Trial (SWAT) Workshop – Aberdeen, 23rd March 2017

I realised earlier in the week that I haven’t talked a huge amount about the other projects I’m involved with aside from my PhD work, so this week’s post is about a project linked to, but not central to, my own research.

Studies Within A Trial (SWATs) are smaller studies embedded within a host trial; they largely aim to investigate some methodological aspect of the way we conduct the trial. There are currently 46 SWATs listed on the SWAT repository, which mainly look at recruitment and retention of participants – the two most difficult parts of the trial process.

These types of study are notoriously difficult to get funding for; they’re often poorly understood by approvals and ethics bodies, and they tend to be the first thing to fall off the list of priorities for trial teams because they’re an ‘add-on’ – a bonus that’s not central to the aims of the overarching trial. On Thursday last week I attended a SWAT workshop led by my PhD supervisor in Aberdeen. Other attendees included representatives from pharma, the National Institute for Health Research (NIHR), the Health Research Authority (HRA), researchers, clinicians, trial managers, patients and directors of UK Clinical Trials Units.

Our discussion was lively, wide-ranging and incredibly useful. We tackled the tricky aspects of how to gain approvals, how to get funding, and how to galvanise the trials community to embed the use of SWATs in routine practice.

One thing that I found really valuable was the discussion with patient representatives; two ladies joined us to give their opinions. They drew our attention to topics I hadn’t necessarily thought of before, and helped us work through how we might (or might not) explain this additional study to trial participants both at the beginning and end of the study.

Throughout the day I took lots of notes – scribbling away whilst different people were talking to ensure I didn’t miss key points. We spent around 6 hours discussing how to make SWATs easier to do, so my pile of notes was pretty huge! Once I’d got home I read over my notes whilst the discussion was fresh in my head, and consolidated them into one side of A5.

I find this a really useful thing to do after a day at a conference or workshop – it helps me to summarise topics in my head and ensures I don’t just push my pile of notes to the back of my desk drawer to be forgotten about.
Does anyone else do this or is it just an excuse I’m making to get the best use out of my unhealthily large stationery collection…?

Getting involved with additional projects outside of the PhD has been so valuable for me – it’s helped me improve my time management skills and expand my knowledge of health services research more generally, but most importantly it’s helped me build confidence. I really enjoyed the day, and found it useful to speak to people outside of my own little research group; we tend to agree on a lot of things, so it’s refreshing to get a new perspective and be challenged on points I’d previously taken at face value.

A Trip to Oslo, Norway – February 2017

Travelling is something that I’ve always loved; I get itchy when I don’t have a trip booked – whether that’s to a new city, country or continent. I enjoy exploring new places and new cultures, and I knew from the day I started my PhD that I’d like to take as many opportunities to travel as possible. I’ve always been clear with my supervisor that travel is on my agenda, so both he and I can keep an eye out for opportunities, conferences and the like further afield.

So far the travel aspect of my PhD hasn’t been super exciting – I’ve spent a lot of time in various cities around the UK, but nowhere further afield. That’s been fine with me though; I’ve used my holidays to explore different places instead, so far travelling to Denmark, Thailand, Iceland and Austria. PhD-wise though, at the beginning of this month I was given the opportunity to travel to Oslo, Norway for a few days – hoorah!


If you’ve never been to Oslo, I would really recommend that you go. I was a bit nervous before I went because I had never travelled to a non-English-speaking country alone before; it turns out Norway is not exactly non-English speaking! The country is essentially bilingual; every time I asked if someone spoke English I was greeted with the response, “of course I do, how can I help?”. Travelling around Oslo was also incredibly simple: the metro system, buses and trams all seemed to work seamlessly. They were always on time, super clean, and very easy to navigate.

Aside from the practicalities of getting around, Oslo is such a cool place to be. After 3 days of meetings and work-related activity, my boyfriend flew out so that we could spend some time exploring Oslo together. We had such a good time! Earlier in the week everyone had been saying how awful the weather was – it was -4°C and snowing on and off – but compared to Aberdeen, which is often grey and rainy, the snow was a welcome change.


So, why did I go out to Oslo in the first place? The trip was part of a project funded by a grant we received from the Chief Scientist Office (CSO) of Scotland last year. The project is the core of my PhD work, and aims to find out how trial teams are currently doing trial recruitment, what sort of evidence researchers need to design effective trial recruitment strategies, and how that evidence should be presented to them.

I met with colleagues at the Regional Centre for Child and Adolescent Mental Health, Eastern and Southern Norway (RBUP), and the Norwegian Knowledge Centre for the Health Services (Kunnskapssenteret), to talk about trial recruitment experiences and issues, and tools and resources that might help. The individuals I spoke to were all hugely welcoming, helpful and enthusiastic about my work – I came home feeling excited to get back to my desk and get my teeth into this PhD again. Since I came back on February 6th I’ve Skyped with a few more members of the team out in Oslo, and again they’ve been brilliant! Over the next few weeks I hope to continue to collaborate and build relationships with the team, particularly at the Norwegian Knowledge Centre for the Health Services; my research and interests align with the team there most closely.

If you’re in the process of PhD study, I’d really recommend that you try to integrate some travel into your work. Personally I think it helps with motivation and enthusiasm for your own work, but more importantly it undoubtedly strengthens the work you’re doing. Speaking with new people gives new insights into the work you’re doing, can make you think differently about the way you conduct your research, and ultimately ensures that the results of the work you’re doing have a greater impact on the research community around you.


Healthcare’s Dirty Little Secret: Results From Many Clinical Trials Remain Unreliable

I wrote this article along with my PhD supervisors, Prof Shaun Treweek and Dr Katie Gillies, at the University of Aberdeen. We originally published this work on The Conversation in October 2016, and I’ve republished it here under a Creative Commons 4.0 licence as I think it gives a good background to the topics and issues that my PhD is based on.


Clinical trials have been the gold standard of scientific testing ever since the Scottish naval surgeon Dr James Lind conducted the first while trying to conquer scurvy in 1747. They attract tens of billions of dollars of annual investment and researchers have published almost a million trials to date according to the most complete register, with 25,000 more each year.

Clinical trials break down into two categories: trials to ensure a treatment is fit for human use and trials to compare different existing treatments to find the most effective. The first category is funded by medical companies and mainly happens in private laboratories.

The second category is at least as important, routinely informing decisions by governments, healthcare providers and patients everywhere. It tends to take place in universities. The outlay is smaller, but hardly pocket change. For example, the National Institute for Health Research, which coordinates and funds NHS research in England, spent £74m on trials in 2014/15 alone.

Yet there is a big problem with these publicly funded trials that few will be aware of: a substantial number, perhaps almost half, produce results that are statistically uncertain. If that sounds shocking, it should do. A large amount of information about the effectiveness of treatments could be incorrect. How can this be right and what are we doing about it?

The participation problem

Clinical trials examine the effects of a drug or treatment on a suitable sample of people over an appropriate time. These effects are compared with a second set of people – the “control group” – which thinks it is receiving the same treatment but is usually taking a placebo or alternative treatment. Participants are assigned to groups at random, hence we talk about randomised controlled trials.

If there are too few participants in a trial, researchers may not be able to declare a result with certainty even if a difference is detected. Before a trial begins, it is their job to calculate the appropriate sample size, using data on the minimum clinically important difference and the variation in the outcome being measured in the population being studied. They publish this along with the trial results to enable statisticians to check their calculations.
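As an illustration of what that calculation involves, here’s a minimal sketch in Python of the standard sample-size formula for comparing means between two arms. The function name and the numbers plugged in at the end are invented for illustration, not taken from any real trial.

```python
import math
from scipy.stats import norm

def sample_size_per_arm(mcid, sd, alpha=0.05, power=0.9):
    """Participants needed per arm to detect the minimum clinically
    important difference (mcid) in a continuous outcome with standard
    deviation sd, in a two-arm trial with 1:1 allocation."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance level
    z_beta = norm.ppf(power)           # desired statistical power
    n = 2 * ((z_alpha + z_beta) ** 2) * (sd ** 2) / (mcid ** 2)
    return math.ceil(n)

# Hypothetical example: detect a 5-point difference on an outcome
# with standard deviation 20, at 5% significance and 90% power.
print(sample_size_per_arm(mcid=5, sd=20))  # about 337 per arm
```

Note how sensitive the answer is to the inputs: halving the difference you want to detect quadruples the required sample, which is part of why recruitment shortfalls bite so hard.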

Early-stage trials have fewer recruitment problems: very early studies involve animals, and later stages pay people well to take part and don’t need large numbers. For trials into the effectiveness of treatments, it’s more difficult both to recruit and retain people. You need many more of them and they usually have to commit to longer periods. It would be a bad use of public money to pay so many people large sums, not to mention the ethical questions around coercion.

To give one example, the Add-Aspirin trial was launched earlier this year in the UK to investigate whether aspirin can stop certain common cancers from returning after treatment. It is seeking 11,000 patients from the UK and India. Supposing it only recruits 8,000, the findings might end up being wrong. The trouble is that some of these studies are still treated as definitive despite there being too few participants to be that certain.

Image credit: wavebreakmedia

One large study looked at trials between 1994 and 2002 funded by two of the UK’s largest funding bodies and found that fewer than a third (31%) recruited the numbers they were seeking. Slightly over half (53%) were given an extension of time or money, but still 80% never hit their target. In a follow-up of the same two funders’ activities between 2002 and 2008, 55% of the trials recruited to target. The remainder were given extensions, but recruitment remained inadequate for about half.

The improvement between these studies is probably due to the UK’s Clinical Trials Units and research networks, which were introduced to improve overall trial quality by providing expertise. Even so, almost half of UK trials still appear to struggle with recruitment. Worse, the UK is a world leader in trial expertise. Elsewhere the chances of finding trial teams not following best practice are much higher.

The way forward

There is remarkably little evidence about how to do recruitment well. The only practical intervention with compelling evidence of benefit comes from a forthcoming paper showing that telephoning people who don’t respond to postal invitations leads to about a 6% increase in recruitment.

A couple of other interventions work but have substantial downsides, such as letting recruits know whether they’re in the control group or the main test group. Since this means dispensing with the whole idea of blind testing, a cornerstone of most clinical trials, it is arguably not worth it.

Many researchers believe the solution is to embed recruitment studies into trials to improve how we identify, approach and discuss participation with people. But with funding bodies already stretched, they focus on funding projects whose results could quickly be integrated into clinical care. Studying recruitment methodology may have huge potential but is one step removed from clinical care, so doesn’t fall into that category.

Others are working on projects to share evidence about how to recruit more effectively with trial teams more widely. For example, we are working with colleagues in Ireland and elsewhere to link research into what causes recruitment problems to new interventions designed to help.

Meanwhile, a team at the University of Bristol has developed an approach that turned recruitment completely around in some trials by basically talking to research teams to figure out potential problems. This is extremely promising but would require a sea change in researcher practice to improve results across the board.

And here we hit the underlying problem: solving recruitment doesn’t seem to be a high priority in policy terms. The UK is at the vanguard but it is slow progress. We would probably do more to improve health by funding no new treatment evaluations for a year and putting all the funding into methods research instead. Until we get to grips with this problem, we can’t be confident about much of the data that researchers are giving us. The sooner that moves to the top of the agenda, the better.