Prioritising the Research Agenda for Trial Retention – Birmingham, 23rd October 2018

I’ve been in Birmingham today for the final consensus meeting of the PRioRiTy II project, so I thought I’d write a quick blog post before dinner so that you can find out a bit more about the process of research prioritisation.

Image credit: Prof Shaun Treweek

I’ve spoken on this blog before about the first PRioRiTy project, which was a prioritisation of questions around trial recruitment. That project took the same shape, though I’ve been more heavily involved in PRioRiTy II because it’s led by Dr Katie Gillies, who was one of my PhD supervisors; she’s now one of my line managers and she’s fantastic to work with. Katie was a participant in the consensus meeting for PRioRiTy, and as soon as she came back from that meeting (almost 2 years ago!), she set to work on PRioRiTy II – she moves fast because she is a complete trial methods nerd, just like me (I’ve found my people!).

So, I said she moves fast, but you might ask why it’s taken 2 years to get here. As with anything in research, the process is not as simple as it looks. This prioritisation exercise was not simply a group of people getting together in a room today and deciding what we thought was most important; the process has involved many, many more people than just those in the room, and is based on the method that the James Lind Alliance (JLA) uses for priority setting partnerships. The JLA method is designed to take into account the views and opinions of all stakeholders that have an interest in a specific research area. For us, that meant trialists, methodologists, ethics committee members, researchers, patients, funders, clinicians and more, but the JLA method has been used for lots of prioritisation activities and the people involved are tailored each time to fit the aims of the project.

We had a wonderful graphic illustrator with us today, and she captured the ‘story so far’ brilliantly in the image below. Before today we had a survey, followed by a huge amount of data analysis and question searching within the responses from that initial survey, an interim prioritisation process (some of you might have been involved with this because I posted about it here), and then this face-to-face consensus meeting – so today was the culmination of a lot of views, opinions, time and effort.

A snapshot of a graphic illustration from the PRioRiTy II consensus meeting showing the story so far.

Below are a few photographs from the day – lots of serious faces, extensive discussion, and some compromises needing to be made too.

The kick off – Katherine Cowan (Senior Advisor to the JLA) did an excellent job chairing the workshop, keeping us all to time and making sure that everyone’s voice was heard within the discussions.

Image credit: Prof Shaun Treweek

In the first session of group work, attendees shared their 3 most and 3 least important questions from the list of 21 that we had supplied them with in advance. From the initial survey responses we had 27 questions, which were then narrowed down to 21 during the interim prioritisation.

The second session of group work saw the beginning of the ranking process! Coloured tablecloths were used to distinguish questions that were most important (green), least important (red), and somewhere in the middle (yellow). This allowed participants to discuss the ranking of their group as a whole (i.e. based on the feedback from the first group session), and then physically move the questions into a more defined ranking position after discussion.

The final session – questions were laid on the floor so that the entire group could see the ranking. Katherine then went through each question in turn to ensure that the group could reach a consensus; harder than you might think!

Image credit: Trial Forge

We won’t be sharing the top 10 questions around trial retention just yet though; tomorrow we have our final Steering Group meeting (let me know if you’d like to see a blog post about what a Steering Group does within a research project!) where we will go through the top 20 questions and make sure that all the wording is clear.

We then plan on unveiling the top 10 at the Society for Clinical Trials meeting in New Orleans next May. If you’ve been reading the blog for a while you might remember that I went to the Society for Clinical Trials meeting earlier this year when it was in Portland. It’s a brilliant conference that enables trialists from around the world to meet each year to share their work. After that we also plan to do some more conference dissemination at the International Clinical Trials Methodology Conference, which takes place in Brighton next October. Keep an eye out for future blog posts too – I’ll be posting the final top 10 when they’re released!

Image credit: Prof Jane Daniels

Publication Explainer: The PRioRiTy Study

Today I had a new publication come out – hoorah! Told you that all the effort I put towards my 2017 goals would pay off eventually 🙂 This is the second in my ‘Publication Explainer’ series, and there are at least another 2 that I already need to write; read the first one here. As I said in that post, these explainers are a place for me to answer 3 of the most common questions I’ve been asked by the people around me (usually my boyfriend, friends, or colleagues who haven’t been involved with the project).

This post focusses on the paper below: Identifying trial recruitment uncertainties using a James Lind Alliance Priority Setting Partnership – the PRioRiTy (Prioritising Recruitment in Randomised Trials) study. Read the full paper here.

Why prioritise research questions about recruitment to trials?

Research around recruitment strategies for randomised trials is super important – though it is the premise of my entire PhD project so I would say that. Recruitment to trials is difficult, and many trials (estimates differ, but they average around the 45-50% mark) fail to recruit enough participants to hit their targets. Targets are not just numbers plucked from thin air; they’re based on detailed calculations performed by trained statisticians – target figures are designed to enable researchers and trialists to detect real differences between the various arms of a trial. If we don’t hit the target, then the results of the research could be vulnerable to something called a type 2 error – which is most simply explained by the image below; it’s a false negative, meaning that we could end up telling people that an intervention isn’t effective when it actually is.
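If you’re curious what those ‘detailed calculations’ actually look like, here’s a minimal sketch of a typical power calculation using Python’s statsmodels library. The numbers (a hypothetical 30% vs 40% improvement rate, 90% power) are made up purely for illustration and aren’t taken from any particular trial or from the PRioRiTy paper:

```python
# A minimal, purely illustrative sketch of the kind of power calculation
# behind a recruitment target (numbers are hypothetical, not from the paper).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Suppose 30% of control participants and 40% of intervention participants
# are expected to improve, and we want 90% power at a 5% significance level.
effect_size = proportion_effectsize(0.40, 0.30)
analysis = NormalIndPower()

n_per_arm = analysis.solve_power(effect_size=effect_size, alpha=0.05,
                                 power=0.9, alternative='two-sided')
print(f"Recruitment target per arm: {round(n_per_arm)} participants")

# If recruitment stalls and we only reach half of that target,
# the power drops and the risk of a type 2 error (a false negative) rises.
reduced_power = analysis.power(effect_size=effect_size, nobs1=n_per_arm / 2,
                               alpha=0.05, alternative='two-sided')
print(f"Power with half the target recruited: {reduced_power:.0%}")
```

The exact approach varies from trial to trial, but the underlying trade-off is the same: recruit fewer people than the calculation asks for, and the chance of missing a real effect goes up.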

Clearly, recruitment is an area that requires research, but because there is so much work to be done, we are at risk of being a bit everywhere (just to be clear, ‘being a bit everywhere’ is not the technical term for this…) when it comes to focussing and making substantial progress with improving the way we do research. Going through a formal prioritisation process for the squillions of research questions that surround the process of recruitment will enable researchers to coordinate the research that they’re doing, plan more effectively, and work together to ensure that we are answering the questions that are most important to the various stakeholder groups involved.

How did the prioritisation process work?

The process of prioritisation that enabled this project to go ahead was developed with the James Lind Alliance – the JLA works with clinicians, patients and carers to ensure that all voices are heard, and that prioritisation of research questions reflects the requirements of all of these groups. The James Lind Alliance works on the premise that:

  • addressing uncertainties about the effects of a treatment should become accepted as a routine part of clinical practice
  • patients, carers and clinicians should work together to agree which, among those uncertainties, matter most and deserve priority attention.

The prioritisation process begins with getting partners involved with the PRioRiTy project – this isn’t a project that can be done by one person! The stakeholders involved with this priority setting partnership were:

  • Members of the public who had been invited to participate in a randomised trial or had participated in Trial Steering Committees (TSCs). They could be individuals or representatives of a patient organisation;
  • Front line clinical and research staff who were or had been involved in recruitment to randomised trials (e.g. postdoctoral researchers, clinicians, nurses, midwives, allied health professionals);
  • People who had established expertise in designing, conducting, analysing and reporting randomised trials (e.g. Principal Investigators/Chief Investigators);
  • People who are familiar with the trial methodology research landscape (e.g. funders, programme managers, network coordinators).

Once relevant stakeholders were identified, an initial survey with just 5 questions (below in Table 1, which is taken from the original paper) was developed and distributed to the stakeholders involved.

Responses were collated, organised, coded and analysed in order to generate a full list of research questions. This was a massive part of the work; 1,880 questions came from the 790 respondents to the initial survey. The figure below shows the process of whittling down this huge pile of questions to a manageable – and useful – top 20.

As you can see, this was an iterative process involving lots of people, views, questions – and work! I’ll just make it clear here – I was involved in a small part of this process, and the team working on the project was large; as I said before, with projects like this it’s important to involve people from lots of different backgrounds and with various levels/areas of expertise. The team was led by Prof Declan Devane and Dr Patricia Healy, both from NUI Galway; they kept the rest of us on track!

What next?

In terms of next steps for the team involved in the PRioRiTy project, it’s really important that we work to disseminate our results; after all, if no one knows what the final list of prioritised questions is, then there was really no point in doing the project. So – with that in mind, here’s the final top 10!

To give these questions some context, I wanted to talk through a few of them and go through my thoughts on what types of research may be required to answer them, and why they’re important. I’ll stick to the top 3 for this part:

Understanding how randomised trials can become part of routine care is, unsurprisingly, the top question from this entire project. Knowing how we can use clinical care pathways to ensure that patients are given the opportunity to take part in trials is a hugely important part of normalising trial recruitment, and spreading awareness of trials more generally. There is a tonne of research to be done in this area, and in my opinion, this question will need a diverse range of research angles and methods in order to answer it in a variety of ways.

This question is interesting – what information should trialists be giving to members of the public who are being invited to take part in trials? That seems like something we should have evidence for, but in actual fact we are working from hunches, experiences, and anecdote. I think this question will rightfully fuel a lot of research projects over the coming years; we need to be looking at what information potential participants want, as well as what they need from an ethical/regulatory standpoint. At the moment I get the impression that we’re being driven by ethics committees and regulators, and we’re often putting in a lot of information that participants don’t want, need, or find useful, because we feel it’s better to give them everything rather than risk missing something out. I suspect that if we reduced the amount of information we provide, understanding of that information would increase, because participants would be able to focus on specific pieces of information more effectively. I say that because I know that if I get a huge leaflet, I’m much more likely to avoid the entire thing because it looks overwhelming, or I don’t think I have time to get through all the information in front of me.

This question is one that I’ve been asked, and have asked myself, numerous times over the course of my PhD. Public engagement and patient involvement are both areas of academic life that are getting increased focus; we know that involving patients and members of the public in our research can strengthen it and make the work we’re doing more relevant to the people that we’re doing it for, but could this involvement impact on recruitment rates too? I’m not sure, but I’m really interested to see the results of a few ongoing projects that are linked to this question – the PIRRIST study led by Dr Joanna Crocker is one I’ll be keeping an eye out for. The PIRRIST protocol was presented as a poster at a conference I went to in 2015; that information is published here if you’re interested in learning more.

Something to note

The appendix of the paper contains a full version of the table below, which provides details on the evidence that we already have available to help answer each of the top 10 questions. The top 3, which I’ve discussed above, have no evidence available – which really drives home the importance of a formal prioritisation process in highlighting where the gaps are in the research evidence.

There is certainly a lot more work to be done on how we recruit participants into randomised trials – which is good for me as I want to stay in this field of research after my PhD, and hopefully get some of these questions answered over the course of my career!