What Is Blinding (Or Masking), and Why Is It So Important?

Hoorah! Blogtober day 4, and the resurrection of my Clinical Trials Q&A series.
This is a series where I answer questions about all things trials. This is the third post in the series; previous posts have looked at the first clinical trial, and why you might choose to do a trial rather than use another study design (that post also explains the concept of randomisation – I'd recommend reading it if you're not sure what randomisation is, as it'll help this one make more sense).

This post looks at a concept that’s crucial to the success of trials – blinding.

What is blinding?

Blinding, also referred to as masking, is "the concealment of group allocation from one or more individuals involved in the research study". In practice, that means that if you're taking part in a trial, you will not know which treatment arm you have been allocated to. Often, your doctor or healthcare professional will not know either.

There are various different types of blinding:

Table taken from the European Patients’ Academy

Why is blinding so important?

Blinding serves to avoid bias, which can come from participants, from clinical staff, and/or from the trial team interpreting the results.

This is not a bad thing, it's just a thing. We're all human, and it's human nature to be influenced by the things that we know or believe – if we don't know them, then we can't exert our own biases. Think about it: if you have a headache and you take a pill that says it will make your head feel better, then when you do feel better you are likely to attribute it to the actions of that pill. In actual fact your headache might have just lifted of its own accord, but you're much more likely to believe it was a result of the pill.

This idea translates to clinical trials too – if you take part in a trial that's aiming to find the best tablets to treat a headache and you are told that you have been allocated a headache-stopping pill, you're more likely to report that your headaches have reduced since you started taking the pill. If you don't know what the pill is (maybe it's a sugar pill with no medical effect at all, maybe it contains a drug that researchers think will cure headaches), then you are more likely to report the truth of whether your head is still hurting or not. We are swayed by the information that we have, particularly if that information has the potential to make us feel well or unwell.

Blinding is not only important for participants; clinicians, researchers and the people analysing the trial data can also be influenced by the knowledge of which group a participant has been allocated to. If the person recruiting participants into a trial, or treating people within that trial, knows which group their participants are allocated to, their behaviour may change. These changes are often subtle and completely subconscious, but they could influence the way that the participant views the treatment and therefore influence the results of the trial.
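To make that concrete on the analysis side, here's a minimal sketch (my own illustration, not taken from any particular trial) of how a statistician can stay blinded: the analysis dataset only ever labels the arms "A" and "B", and the mapping from label to treatment is held by someone independent until the analysis is locked.

```python
# A minimal sketch of blinded analysis, assuming a simple two-arm trial.
# The analyst only ever sees the labels "A" and "B"; the label-to-treatment
# mapping is generated once and held back until the results are finalised.
import random
import statistics

arms = ["intervention", "placebo"]
random.shuffle(arms)
unblinding_key = {"A": arms[0], "B": arms[1]}  # held by an independent party

# Hypothetical outcome scores, labelled only by blinded arm.
outcomes = {"A": [6.8, 7.2, 5.9, 6.4], "B": [5.1, 5.8, 6.0, 5.5]}

# The comparison is run on "A" vs "B" without knowing which is which.
for label, values in outcomes.items():
    print(label, round(statistics.mean(values), 2))

# Only once the analysis is locked is the key revealed:
print(unblinding_key)
```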

Blinding isn’t always possible

In an ideal world every study would be triple blind – participants, clinicians and researchers would all be blind to the treatment that the participant has been allocated to. The world isn't ideal though, and lots of ongoing trials involve complex interventions (i.e. not something as simple as a tablet, whose look and feel you can easily duplicate to ensure allocation remains concealed). Some trials are only able to run if they are single blinded, or completely unblinded – surgical trials, for example. Innovative trial designs and techniques are often incorporated in an effort to overcome potential bias in these situations.

Blinding isn’t just important in clinical trials involving humans, lab research involving anything from mice to individual cells can be blinded too! I know that lots of people reading this are involved in laboratory research – if that is you, and you are not currently using blinding to avoid bias in your studies, head to the CAMARADES website.
CAMARADES (Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies) aims to provide an easily accessible source of methodological support, mentoring, guidance, educational materials and practical assistance to those wishing to embark on systematic review and meta-analysis of data from in vivo studies; that includes providing help and support with things like blinding and randomisation. This resource is a brilliant starting point. If you’re not using techniques like blinding and randomisation in you’re research, you’re not alone. This article from The Scientist earlier this year (original publication here) suggests that more than 95 percent of the preclinical work cited by 109 clinical trial proposals lacked the hallmarks of best practices, such as randomization or blinding. It’s time to change this.
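If you want a feel for what that looks like in practice, here's a minimal sketch of randomising animals to groups and generating blinded labels – my own illustration of the general idea, not a CAMARADES tool, and the animal IDs are hypothetical. The person scoring outcomes only ever sees the neutral codes; the code-to-group key stays with someone independent until scoring is complete.

```python
# A minimal sketch of randomised allocation plus blinding for a lab study.
# Assumptions: 20 hypothetical animals, two equal groups, CSV output files.
import csv
import random

animals = [f"mouse_{i:02d}" for i in range(1, 21)]
groups = ["treatment"] * 10 + ["control"] * 10
random.shuffle(groups)  # random allocation to groups

codes = random.sample(range(100, 1000), len(animals))  # unique neutral codes

# The assessor only ever sees this file: animal + blinded code.
with open("blinded_labels.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["animal", "code"])
    writer.writerows(zip(animals, codes))

# The unblinding key stays with someone independent until scoring is done.
with open("unblinding_key.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["code", "group"])
    writer.writerows(zip(codes, groups))
```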

Publication Explainer: The PRioRiTy Study

Today I had a new publication come out – hoorah! Told you that all the effort I put towards my 2017 goals would pay off eventually 🙂 This is the second in my 'Publication Explainer' series, and there are at least another two that I already need to write – read the first one here. As I said in that post, these explainers are a place for me to answer 3 of the most common questions I've been asked by the people around me (usually my boyfriend, friends, or colleagues that haven't been involved with the project).

This post focusses on the paper below: Identifying trial recruitment uncertainties using a James Lind Alliance Priority Setting Partnership – the PRioRiTy (Prioritising Recruitment in Randomised Trials) study. Read the full paper here.

Why prioritise research questions about recruitment to trials?

Research around recruitment strategies for randomised trials is super important – though it is the premise of my entire PhD project, so I would say that. Recruitment to trials is difficult, and many trials (estimates differ, but average around the 45–50% mark) fail to recruit enough participants to hit their targets. Targets are not just numbers plucked from thin air; they're based on detailed calculations performed by trained statisticians, and are designed to enable researchers and trialists to detect real differences between the various arms of a trial. If we don't hit the target, then the results of the research are vulnerable to something called a type 2 error – a false negative, meaning that we could end up telling people that an intervention isn't effective when it actually is.
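To make the 'detailed calculations' bit less abstract, here's a minimal sketch of the kind of sample size and power calculation involved, using Python's statsmodels. The effect size, significance level and power are illustrative assumptions, not figures from any real trial.

```python
# A minimal sketch of a sample size calculation for a two-arm trial,
# using statsmodels. All numbers here are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many participants per arm to detect a standardised effect size of
# 0.5, at a 5% significance level, with 80% power?
n_per_arm = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Target: {n_per_arm:.0f} participants per arm")  # roughly 64

# If recruitment falls short - say only 35 per arm - power drops sharply,
# and the risk of a type 2 error (missing a real effect) grows.
reduced_power = analysis.power(effect_size=0.5, nobs1=35, alpha=0.05)
print(f"Power with 35 per arm: {reduced_power:.0%}")  # roughly 54%
```

The point is that an under-recruited trial quietly loses power – nothing looks obviously broken, but the chance of missing a genuine treatment effect creeps up.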

Clearly, recruitment is an area that requires research, but because there is so much work to be done, we are at risk of being a bit everywhere (just to be clear, 'being a bit everywhere' is not the technical term for this…) when it comes to focussing and making substantial progress with improving the way we do research. Going through a formal prioritisation process for the squillions of research questions that surround the process of recruitment will enable researchers to coordinate the research that they're doing, plan more effectively, and work together to ensure that we are answering the questions that are most important to the various stakeholder groups involved.

How did the prioritisation process work?

The prioritisation process that enabled this project to go ahead was developed with the James Lind Alliance – the JLA works with clinicians, patients and carers to ensure that all voices are heard, and that prioritisation of research questions reflects the requirements of all of these groups. The James Lind Alliance works on the premise that:

  • addressing uncertainties about the effects of a treatment should become accepted as a routine part of clinical practice
  • patients, carers and clinicians should work together to agree which, among those uncertainties, matter most and deserve priority attention.

The prioritisation process begins with getting partners involved with the PRioRiTy project – this isn't a project that can be done by one person! The stakeholders involved with this priority setting partnership were:

  • Members of the public who had been invited to participate in a randomised trial or participated in Trial Steering Committees (TSCs). They could be an individual or representing a patient organisation;
  • Front line clinical and research staff who were or had been involved in recruitment to randomised trials (e.g. postdoctoral researchers, clinicians, nurses, midwives, allied health professionals);
  • People who had established expertise in designing, conducting, analysing and reporting randomised trials (e.g. Principal Investigators/Chief Investigators);
  • People who are familiar with the trial methodology research landscape (e.g. funders, programme managers, network coordinators).

Once relevant stakeholders were identified, an initial survey with just 5 questions (shown in Table 1 of the original paper) was developed and distributed to the stakeholders involved.

Responses were collated, organised, coded and analysed in order to generate a full list of research questions. This was a massive part of the work; 1,880 questions came from the 790 respondents to the initial survey. A figure in the paper shows the process of whittling down this huge pile of questions to a manageable – and useful – top 20.

As you can see, this was an iterative process involving lots of people, views, questions – and work! I'll just make it clear here: I was involved in a small part of this process, and the team working on the project was large; as I said before, with projects like this it's important to involve people from lots of different backgrounds and with various levels/areas of expertise. The team was led by Prof Declan Devane and Dr Patricia Healy, both from NUI Galway – they kept the rest of us on track!

What next?

In terms of next steps for the team involved in the PRioRiTy project, it's really important that we work to disseminate our results; after all, if no one knows what the final list of prioritised questions is, then there was really no point in doing the project. So, with that in mind, here's the final top 10!

To give these questions some context, I wanted to talk through a few of them – my thoughts on what types of research may be required to answer them, and why they're important. I'll stick to the top 3 for this part:

Understanding how randomised trials can become part of routine care is, unsurprisingly, the top question from this entire project. Knowing how we can use clinical care pathways to ensure that patients are given the opportunity to take part in trials is a hugely important part of normalising trial recruitment, and of spreading awareness of trials more generally. There is a tonne of research to be done in this area, and in my opinion this question will need a diverse range of research angles and methods to answer it fully.

The second question is interesting – what information should trialists be giving to members of the public who are being invited to take part in trials? That seems like something we should have evidence for, but in actual fact we are working from hunches, experience and anecdote. I think this question will rightfully fuel a lot of research projects over the coming years; we need to be looking at what information potential participants want, as well as what they need from an ethical/regulatory standpoint. At the moment I get the impression that we're being driven by ethics committees and regulators, and we're often putting in a lot of information that participants don't want, need or find useful, because we feel it's better to give them everything rather than risk missing something out. I suspect that if we reduced the amount of information we provide, understanding of that information would increase, because participants would be able to focus on specific pieces of information more effectively. I say that because I know that if I get a huge leaflet, I'm much more likely to avoid the entire thing because it looks overwhelming, or because I don't think I have time to get through all the information in front of me.

The third question is one that I've been asked, and have myself asked, numerous times over the course of my PhD. Public engagement and patient involvement are both areas of academic life that are getting increased focus; we know that involving patients and members of the public in our research can strengthen it and make the work we're doing more relevant to the people we're doing it for – but could this involvement impact recruitment rates too? I'm not sure, but I'm really interested to see the results of a few ongoing projects that are linked to this question – the PIRRIST study led by Dr Joanna Crocker is one I'll be keeping an eye out for. The PIRRIST protocol was presented as a poster at a conference I went to in 2015; that information is published here if you're interested in learning more.

Something to note

The appendix of the paper contains a full version of a table that provides details of the evidence we already have available to help answer each of the top 10 questions. The top 3, which I've discussed above, have no evidence available – which really drives home the importance of a formal prioritisation process in highlighting where the gaps in research evidence are.

There is certainly a lot more work to be done on how we recruit participants into randomised trials – which is good for me as I want to stay in this field of research after my PhD, and hopefully get some of these questions answered over the course of my career!

Publication Explainer: Routinely Collected Data for Randomized Trials: Promises, Barriers, and Implications

This week I had a new publication come out – hoorah! Told you that all the effort I put towards my 2017 goals would pay off eventually. In another post later in the year I'll explain what my experiences have been like as a co-author on a publication, as well as what it's like to be a first author, but today I want to use this post as a starting point for a new series on my blog. I'll add to this 'Publication Explainer' series whenever I have a new publication out, and these posts will be a place for me to answer 3 of the most common questions I've been asked by people around me (here I mean colleagues that haven't worked in this field, other scientists, non-scientists… basically anyone who doesn't work in the same research area as I do).

What is routinely collected data?

When we're talking about health, routinely collected data (RCD) refers to data that have been collected during routine practice – basically, the stuff that your doctor adds to your medical record. This could be height, weight, blood type, blood pressure, blood test results, drug dosages, symptom frequency… the list goes on. As technology improves, RCD can also refer to things like number of steps, time spent sitting down, time spent standing, etc. – the sorts of things that a fitness tracker collects.

Why should we use routinely collected data in trials?

Routinely collected data could enable us to do trials better; whether that means more cheaply, with reduced patient burden, with less work for the trial team, more quickly, or in a more environmentally friendly way… whatever 'better' means. This area of research is of particular interest to me because I'm trying to solve the problem of poor recruitment to trials. Recruiting volunteers to take part in trials is difficult, and if we can design trials that are integrated into existing care pathways, so that patients don't have additional clinic visits to go to, then problems with recruitment could be solved much more quickly. In theory, we could design a trial that is fully integrated into routine care – meaning that when you visit your doctor and they collect data from you, those data can go straight to the trial team without the patient needing to come in to the clinic on a separate occasion, which is what usually happens in trials.
This has been done before, the most well-known trial being the Salford Lung Study. This pioneering study involved over 2,800 consenting patients, supported by 80 GP practices and 130 pharmacies in Salford and the surrounding Greater Manchester area. You can read more about it here.
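As a toy illustration of what that integration might look like on the data side, here's a minimal sketch of routine records flowing into a trial dataset for consented participants, instead of being collected at separate trial visits. The field names, IDs and values are all hypothetical – this isn't how the Salford Lung Study worked, just the general idea.

```python
# A minimal sketch: filtering routinely collected records down to the
# patients who consented to a trial. All field names and values are
# hypothetical.
consented_ids = {"pt001", "pt004"}  # patients who consented to the trial

routine_records = [
    {"patient_id": "pt001", "visit": "2018-03-01", "bp_systolic": 142},
    {"patient_id": "pt002", "visit": "2018-03-02", "bp_systolic": 128},
    {"patient_id": "pt004", "visit": "2018-03-05", "bp_systolic": 150},
]

# Only consented patients' routine measurements flow to the trial dataset;
# no extra trial-specific clinic visit is needed to capture them.
trial_dataset = [r for r in routine_records if r["patient_id"] in consented_ids]
print(trial_dataset)
```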

Ease isn’t the only reason to use RCD in trials. There is a huge field of research into what we call ‘pragmatic trials’.

Every trial sits somewhere on a spectrum from 'explanatory' to 'pragmatic'. 'Explanatory' describes trials that aim to evaluate the effectiveness of an intervention (a drug, a device, a type of surgery, or a lifestyle intervention like an exercise or diet change) in a well-defined and controlled setting. 'Pragmatic' describes trials that aim to test the effectiveness of an intervention in routine practice – i.e. some people might not take their tablets as directed, they'll likely skip an exercise every now and again, they might forget to pick up prescriptions or get their doses mixed up – these trials reflect real life. The more pragmatic a trial is, the more likely it is that its results will translate into the real world if/when the intervention is rolled out for public use. Using routinely collected data could help to ensure that trials are more pragmatic.

Why aren’t we already using routinely collected data in trials?

The idea of using routinely collected data in trials sounds perfect, right? Patients won't have to go to clinic visits; trials will recruit more easily, and will therefore be completed faster and more cheaply; trials will be more pragmatic. So why aren't we already using RCD in trials?

If only it were that simple! Just because data are collected doesn't mean that researchers are able to access them, never mind access them in a useful format at the time that they need them. There are lots of concerns about using RCD in trials as standard, but these issues are likely to be overcome at some point in the future (as for when, that's the big unknown – it could be 50 years, could be longer!). This is an exciting field of research, and one that I'll be keeping a close eye on over the next few years.

BioMed Central is an open-access publisher, meaning that their publications are not hidden behind paywalls; if you'd like to read the full paper, you can find it here.

I also wanted to flag up a blog post that Lars and Kim wrote to go along with the publication – essentially it's a more condensed, relaxed, easy-to-understand version of the paper. You can read that here.