Publication Explainer: Trial Forge Guidance 1: what is a Study Within A Trial (SWAT)?

This is the third in my ‘Publication Explainer’ series; you can read the first and second here and here. As I have said previously, these explainers are a place for me to answer some of the most common questions I’ve been asked by the people around me (usually my boyfriend, friends, or colleagues who haven’t been involved with the project).

This post focusses on the paper below: Trial Forge Guidance 1: what is a Study Within A Trial (SWAT)? Read the full paper here.

What is a SWAT?

A SWAT is a Study Within A Trial – i.e. a self-contained research study that takes place within a clinical trial. Usually SWATs focus on a methodological aspect of the trial, e.g. evaluation of: an intervention designed to improve the recruitment of participants to trials; an intervention designed to keep participants engaged with the trial (i.e. retention of participants); or an intervention designed to find out more about the way that data are collected (e.g. online versus paper).

Why are you trying to encourage people to do SWATs?

It is important that we encourage people to do SWATs because any single SWAT is very often underpowered. Statisticians can calculate the sample size needed for the results to show a difference between two interventions; if we hit that target sample size (i.e. recruit enough participants) then the result is less likely to be down to pure chance. As sample size calculations are done for the host trial, and not the SWAT, it’s likely that the SWAT will be ‘underpowered’ – meaning that the effect we see in the results may not be a real effect; it could be down to chance. That’s ok though, because SWATs are designed to enable their data to be pooled with data from the same SWAT run in other host trials.
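To put some rough numbers on that, here’s a minimal Python sketch of the kind of sample size calculation involved. Everything in it is invented purely for illustration – the 10% and 14% recruitment rates, and the hypothetical invitation-letter SWAT – so treat it as a back-of-the-envelope demonstration, not a real design:

```python
# A rough sketch of why a single SWAT is usually underpowered.
# All figures here are hypothetical, chosen only for illustration.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Imagine a recruitment SWAT testing whether a redesigned invitation
# letter lifts the proportion of invitees who enrol from 10% to 14%.
effect_size = proportion_effectsize(0.14, 0.10)

# Invitees needed per arm for 80% power at the usual 5% significance level
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8
)
print(f"Invitees needed per arm: {n_per_arm:.0f}")  # roughly 1,000 per arm

# A single host trial might only invite a few hundred people in total,
# so the SWAT falls well short on its own - hence pooling the same SWAT
# across several host trials.
```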

What are you aiming to do in this paper?

This paper is the result of a huge amount of discussion, much of which started at a face-to-face event held in Aberdeen last year. The group of authors on this paper is pretty big, and that reflects everyone who took part in that event and the discussions that came after it. As a group, we are very conscious that SWATs are one of the most obvious (and arguably, easiest) ways for us to improve the way that trials are designed and conducted, so it’s important that we encourage people to do them. It is not realistic to think that trial methodologists can do all of the SWATs that we need; there just aren’t enough of us, and we need trialists to help us. By writing and publishing this piece of guidance, we aimed to produce a one-stop paper where people can easily find out what a SWAT is.

Within the last few days, we’ve submitted ‘Trial Forge Guidance 2: How to decide if a further Study Within A Trial (SWAT) is needed’ to the same journal, Trials. Trials is currently taking part in a pilot along with a number of other journals under the BioMed Central umbrella: when authors submit their papers for publication, they have the option of publishing a pre-print of their work. This pre-print edition is published online within about a week, meaning that the peer review process can run alongside while the research is disseminated much more quickly. Once that pre-print is available, I’ll share it on the blog so you can read that too 🙂

Publication Explainer: The PRioRiTy Study

Today I had a new publication come out – hoorah! Told you that all the effort I put towards my 2017 goals would pay off eventually 🙂 This is the second in my ‘Publication Explainer’ series, and there are at least another 2 that I already need to write; read the first one here. As I said in that post, these explainers are a place for me to answer 3 of the most common questions I’ve been asked by the people around me (usually my boyfriend, friends, or colleagues who haven’t been involved with the project).

This post focusses on the paper below: Identifying trial recruitment uncertainties using a James Lind Alliance Priority Setting Partnership – the PRioRiTy (Prioritising Recruitment in Randomised Trials) study. Read the full paper here.

Why prioritise research questions about recruitment to trials?

Research around recruitment strategies for randomised trials is super important – though it is the premise of my entire PhD project, so I would say that. Recruitment to trials is difficult, and many trials (estimates differ, but average around the 45-50% mark) fail to recruit enough participants to hit their targets. Targets are not just numbers plucked from thin air; they’re based on detailed calculations performed by trained statisticians – target figures are designed to enable researchers and trialists to see real differences between the various arms of a trial. If we don’t hit target, then the results of the research could be vulnerable to something called a type 2 error – most simply explained by the image below. It’s a false negative, meaning that we could be telling people that an intervention isn’t effective when it actually is.
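If you’d like to see that risk in action, here’s a small Python simulation. Every number in it is invented for illustration (the size of the effect, the recruitment target, the shortfall), so it’s a toy demonstration of the idea rather than anything from the paper:

```python
# A toy simulation of the type 2 error risk when a trial misses its
# recruitment target. All numbers are invented for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
TRUE_EFFECT = 0.3   # the intervention genuinely works (0.3 SD benefit)
TARGET_N = 175      # per-arm target from the power calculation (~80% power)
ACTUAL_N = 80       # what the trial actually managed to recruit

def miss_rate(n_per_arm, sims=5000):
    """Proportion of simulated trials that fail to detect the real effect."""
    misses = 0
    for _ in range(sims):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(TRUE_EFFECT, 1.0, n_per_arm)
        _, p_value = ttest_ind(treated, control)
        if p_value >= 0.05:  # non-significant despite a real effect
            misses += 1
    return misses / sims

print(f"False negatives at target recruitment: {miss_rate(TARGET_N):.0%}")  # ~20%
print(f"False negatives when under target:     {miss_rate(ACTUAL_N):.0%}")  # ~50%
```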

Clearly, recruitment is an area that requires research, but because there is so much work to be done, we are at risk of being a bit everywhere (just to be clear, ‘being a bit everywhere’ is not the technical term for this…) when it comes to focussing and making substantial progress with improving the way we do research. Going through a formal prioritisation process for the squillions of research questions that surround recruitment will enable researchers to coordinate the research that they’re doing, plan more effectively, and work together to ensure that we are answering the questions that matter most to the various stakeholder groups involved.

How did the prioritisation process work?

The prioritisation process that enabled this project to go ahead was developed with the James Lind Alliance – the JLA works with clinicians, patients and carers to ensure that all voices are heard, and that the prioritisation of research questions reflects the requirements of all of these groups. The James Lind Alliance works on the premise that:

  • addressing uncertainties about the effects of a treatment should become accepted as a routine part of clinical practice
  • patients, carers and clinicians should work together to agree which, among those uncertainties, matter most and deserve priority attention.

The prioritisation process begins with getting partners involved with the PRioRiTy project – this isn’t a project that can be done by one person! The stakeholders involved with this priority setting partnership were:

  • Members of the public who had been invited to participate in a randomised trial or had participated in Trial Steering Committees (TSCs). They could be individuals or representatives of a patient organisation;
  • Front line clinical and research staff who were or had been involved in recruitment to randomised trials (e.g. postdoctoral researchers, clinicians, nurses, midwives, allied health professionals);
  • People who had established expertise in designing, conducting, analysing and reporting randomised trials (e.g. Principal Investigators/Chief Investigators);
  • People who are familiar with the trial methodology research landscape (e.g. funders, programme managers, network coordinators).

Once relevant stakeholders were identified, an initial survey with just 5 questions (shown below in Table 1, taken from the original paper) was developed and distributed to the stakeholders involved.

Responses were collated, organised, coded and analysed in order to generate a full list of research questions. This was a massive part of the work; 1,880 questions came from the 790 respondents to the initial survey. The figure below shows the process of whittling down this huge pile of questions to a manageable – and useful – top 20.

As you can see, this was an iterative process involving lots of people, views, questions – and work! I’ll just make it clear here: I was involved in a small part of this process, and the team working on the project was large; as I said before, with projects like this it’s important to involve people from lots of different backgrounds and with various levels/areas of expertise. The team was led by Prof Declan Devane and Dr Patricia Healy, both from NUI Galway; they kept the rest of us on track!

What next?

In terms of next steps for the team involved in the PRioRiTy project, it’s really important that we work to disseminate our results; after all, if no one knows what the final list of prioritised questions is, then there was really no point in doing the project. So – with that in mind, here’s the final top 10!

To give these questions some context, I wanted to talk through a few of them and share my thoughts on what types of research may be required to answer them, and why they’re important. I’ll stick to the top 3 for this part:

Understanding how randomised trials can become part of routine care is, unsurprisingly, the top question from this entire project. Knowing how we can use clinical care pathways to ensure that patients are given the opportunity to take part in trials is a hugely important part of normalising trial recruitment, and of spreading awareness of trials more generally. There is a tonne of research to be done in this area, and in my opinion, this question will need a diverse range of research angles and methods to answer it well.

This question is interesting – what information should trialists be giving to members of the public who are being invited to take part in trials? That seems like something we should have evidence for, but in actual fact we are working from hunches, experiences, and anecdote. I think this question will rightfully fuel a lot of research projects over the coming years; we need to be looking at what information potential participants want, as well as what they need from an ethical/regulatory standpoint. At the moment I get the impression that we’re being driven by ethics committees and regulators, and we’re often putting in a lot of information that participants don’t want, need, or find useful, because we feel it’s better to give them everything rather than risk missing something out. I suspect that if we reduced the amount of information we provide, understanding of that information would increase, because participants would be able to focus on specific pieces of information more effectively. I say that because I know that if I get a huge leaflet, I’m much more likely to avoid the entire thing because it looks overwhelming, or I don’t think I have time to get through all the information in front of me.

This question is one that I’ve been asked, and have asked myself, numerous times over the course of my PhD. Public engagement and patient involvement are both areas of academic life that are getting increased focus; we know that involving patients and members of the public in our research can strengthen it and make the work we’re doing more relevant to the people that we’re doing it for – but could this involvement impact recruitment rates too? I’m not sure, but I’m really interested to see the results of a few ongoing projects that are linked to this question – the PIRRIST study led by Dr Joanna Crocker is one I’ll be keeping an eye out for. The PIRRIST protocol was presented as a poster at a conference I went to in 2015; that information is published here if you’re interested in learning more.

Something to note

The appendix of the paper contains a full version of the table below, which provides details on the evidence we already have available to help answer each of the top 10 questions. The top 3, which I’ve discussed above, have no evidence available – which really drives home the importance of a formal prioritisation process in highlighting where the gaps in research evidence are.

There is certainly a lot more work to be done on how we recruit participants into randomised trials – which is good for me as I want to stay in this field of research after my PhD, and hopefully get some of these questions answered over the course of my career!

Publication Explainer: Routinely Collected Data for Randomized Trials: Promises, Barriers, and Implications

This week I had a new publication come out – hoorah! Told you that all the effort I put towards my 2017 goals would pay off eventually. In another post later in the year I’ll explain what my experiences have been like as a co-author on a publication, as well as what it’s like to be a first author, but today I want to use this post as a starting point for a new series on my blog. I’ll add to this ‘Publication Explainer’ series whenever I have a new publication out, and these posts will be a place for me to answer 3 of the most common questions I’ve been asked by people around me (here I mean colleagues who haven’t worked in this field, other scientists, non-scientists… basically anyone who doesn’t work in the same research area as I do).

What is routinely collected data?

When we’re talking about health, routinely collected data (RCD) refers to data that have been collected during routine practice – basically, the stuff that your doctor adds to your medical record. This could be height, weight, blood type, blood pressure, blood test results, drug dosages, symptom frequency… the list goes on. As technology improves, RCD can also refer to things like number of steps, time spent sitting down, time spent standing, etc. – the sorts of things that a fitness tracker collects.

Why should we use routinely collected data in trials?

Routinely collected data could enable us to do trials better – whether that means more cheaply, with reduced patient burden, with less work for the trial team, more quickly, in a more environmentally friendly way… whatever ‘better’ means. This area of research is of particular interest to me because I’m trying to solve the problem of poor recruitment to trials. Recruiting volunteers to take part in trials is difficult, and if we can design trials that are integrated into existing care pathways, so that patients don’t have additional clinic visits to go to, then problems with recruitment could become much easier to solve. In theory, we could design a trial that is fully integrated into routine care – meaning that when you visit your doctor and they collect data from you, those data can go straight to the trial team without the need for you to come in to the clinic on a separate occasion, which is what usually happens in trials.
This has been done before, the most well-known trial being the Salford Lung Study. This pioneering study involved over 2,800 consenting patients, supported by 80 GP practices and 130 pharmacies in Salford and the surrounding Greater Manchester area. You can read more about it here.

Ease isn’t the only reason to use RCD in trials. There is a huge field of research into what we call ‘pragmatic trials’.

Every trial sits somewhere on a spectrum from ‘explanatory’ to ‘pragmatic’. ‘Explanatory’ describes trials that aim to evaluate the efficacy of an intervention (a drug, a device, a type of surgery, or a lifestyle intervention like an exercise or diet change) in a well-defined and controlled setting. ‘Pragmatic’ describes trials that aim to test the effectiveness of an intervention in routine practice – i.e. some people might not take their tablets as directed, they’ll likely skip an exercise every now and again, they might forget to pick prescriptions up or get their doses mixed up; these trials reflect real life. The more pragmatic a trial is, the more likely the results of that trial will translate into the real world if/when the intervention is rolled out for public use. Using routinely collected data could help to ensure that trials are more pragmatic.

Why aren’t we already using routinely collected data in trials?

The idea of using routinely collected data in trials sounds perfect, right? Patients won’t have to go to clinic visits, trials will recruit more easily and therefore be completed faster and more cheaply, and trials will be more pragmatic – so why aren’t we already using RCD in trials?

If only it were that simple! Just because data are collected doesn’t mean that researchers are able to access them, never mind access them in a useful format at the time that they need them. There are lots of concerns about using RCD in trials as standard, but these issues are likely to be overcome at some point in the future (as for when, that’s the big unknown – it could be 50 years, could be longer!). This is an exciting field of research, and one that I’ll be keeping a close eye on over the next few years.

BioMed Central is an open-access publishing group, meaning that its publications are not hidden behind paywalls; if you’d like to read the full paper, you can find it here.

I also wanted to flag up a blog post that Lars and Kim wrote to go along with the publication; essentially it’s a more condensed, relaxed and easy-to-understand version of the paper – you can read that here.