This is the second in a series of posts I’m calling ‘Clinical Trials Q&A’. In this series I want to answer any questions people have about trials – from the basic to the obscure and everything in between – with the aim of demystifying the subject. I asked a few friends who don’t work in a trials environment what they don’t know about trials, and ‘why’ was the response I got back from lots of people. Read on to find out why we spend so much time, energy and resources conducting clinical trials rather than other, often cheaper, types of study.
Starting with an example always makes things so much easier to explain. So let’s imagine that we are trying to find out if carrots can help you to see in the dark. This is something lots of children are told by members of the older generation in the UK – though it started out as a genuine claim made by the UK Ministry of Food during World War II. The British government backed a propaganda campaign designed to drum up public enthusiasm for the inexpensive vegetable as a substitute for costly and limited rationed goods.
So, does eating carrots improve your vision at night? To look into this, we can divide the participants in our study into two groups: those who already eat carrots and those who do not. We can then collect data on how well each group can see in the dark.
The results show that people who eat carrots can see better in the dark than those who do not.
Should we conclude then that carrots give you night-vision? Basically, no. This research method has found correlation rather than causation: people who choose to eat carrots might differ from non-carrot-eaters in all sorts of other ways – their overall diet or their age, for example – and those differences could be what affects how well they see in the dark.
In order to find causation – in other words, to show that carrots cause better vision in the dark – we need to introduce a methodological concept called randomisation. In our carrot example, this means allocating people to groups using a random method (a random number generator, for example). Allocating people to groups at random, rather than sorting them according to what their habits already are, means that each group ends up containing a thorough mix of people who usually eat carrots and people who don’t.
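For anyone who likes to see the idea concretely, here is a minimal sketch of random allocation in Python. It is purely illustrative – the participant IDs and group names are made up for the example, not taken from any real trial:

```python
import random

# Hypothetical participant IDs, invented purely for this illustration
participants = [f"P{i:03d}" for i in range(1, 21)]

# Shuffle so that which group someone lands in has nothing to do with
# their existing habits or characteristics
random.shuffle(participants)

# Split the shuffled list into two equal groups: one asked to eat
# carrots, one asked not to
half = len(participants) // 2
carrot_group = participants[:half]
no_carrot_group = participants[half:]

print("Carrot group:   ", carrot_group)
print("No-carrot group:", no_carrot_group)
```

Real trials use more careful allocation systems than a simple shuffle, but the principle is the same: chance, not habit or preference, decides who goes where.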
Randomisation also prevents selection bias from creeping in. Selection bias is bias introduced when the way people end up in each group is not random. In our carrot example, this could show up as only women being in the carrot-eating group and only men in the non-carrot-eating group. In trials, we need all of the groups to represent a mix of people: weight, height, sex, ethnicity and disease history are all examples of confounders, things that could influence how our participants respond to our treatment. Making sure each group contains a mix of characteristics effectively neutralises the confounders, because they are spread out across the groups.
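To make that ‘spreading out’ of confounders a bit more tangible, here is a small, purely illustrative simulation in Python. The confounder (sex) and the numbers are invented for the example, not drawn from any real study:

```python
import random

# Simulate 200 participants with one illustrative confounder (sex);
# the roughly 50/50 split is invented for the example, not real data
participants = [{"id": i, "sex": random.choice(["F", "M"])} for i in range(200)]

# Randomly allocate them to two groups of 100
random.shuffle(participants)
group_a, group_b = participants[:100], participants[100:]

def percent_female(group):
    return 100 * sum(p["sex"] == "F" for p in group) / len(group)

# With random allocation, both groups should show a similar proportion of
# women, so sex no longer differs systematically between the groups
print(f"Group A: {percent_female(group_a):.0f}% female")
print(f"Group B: {percent_female(group_b):.0f}% female")
```

Run it a few times and the two percentages will hover close together, which is exactly why a confounder like sex stops being able to explain any difference between the groups.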
Now that we have randomised our study, we see no difference in the visual acuity of the carrot and non-carrot groups in the dark.
Clearly, this example involving carrots is much simpler than the complex medical questions that clinical trials are used to answer. The point, though, is that if something as simple as randomly allocating individuals to treatment groups can dramatically change the results, then the design of a study is incredibly important to the research process.
This is not to say that other study types are not useful; observational studies like the one in the first part of our carrot example (i.e. without randomisation) are sometimes the only option for particular diseases and medical questions. Qualitative research methods are useful for shedding light on why something happens and, for example, the factors that go into decision-making. But for finding cause and effect – which is what we need when working to establish the safety and efficacy of medical interventions – clinical trials are the gold-standard method of evidence generation. Clearly trials are not perfect (if they were, I wouldn’t be spending an entire PhD studying trial methods), but they are the best tool we currently have for finding causation.