Publication Explainer: The PRioRiTy Study

Today I had a new publication come out – hoorah! Told you that all the effort I put towards my 2017 goals would pay off eventually 🙂 This is the second in my ‘Publication Explainer’ series, and there are at least another 2 that I already need to write – read the first one here. As I said in that post, these explainers are a place for me to answer 3 of the most common questions I’ve been asked by the people around me (usually my boyfriend, friends, or colleagues that haven’t been involved with the project).

This post focusses on the paper below: Identifying trial recruitment uncertainties using a James Lind Alliance Priority Setting Partnership – the PRioRiTy (Prioritising Recruitment in Randomised Trials) study. Read the full paper here.

Why prioritise research questions about recruitment to trials?

Research around recruitment strategies for randomised trials is super important – though it is the premise of my entire PhD project, so I would say that. Recruitment to trials is difficult, and many trials (estimates differ, but average around the 45-50% mark) fail to recruit enough participants to hit their targets. Targets are not just numbers plucked from thin air; they’re based on detailed calculations performed by trained statisticians – target figures are designed to enable researchers and trialists to see real differences between the various arms of trials. If we don’t hit target, then the results of the research could be vulnerable to something called a type 2 error – which is most simply explained by the image below; it’s a false negative, meaning that we could end up telling people that an intervention isn’t effective when it actually is.
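For anyone who likes to see the numbers, here’s a minimal sketch of the kind of arithmetic that sits behind those targets. To be clear, this is my own illustration rather than anything from the paper – the 50% vs 60% success rates, the helper functions, and the use of scipy’s normal distribution are all just made up for the example.

```python
# A rough, purely illustrative sketch (not from the paper): how a recruitment
# target for a two-arm trial comparing proportions might be derived, and how
# much statistical power is lost when recruitment falls short of that target.
from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate number of participants needed per arm (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)        # critical value for a two-sided test
    z_beta = norm.ppf(power)                 # value corresponding to the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2
    return int(n) + 1                        # round up to whole participants

def achieved_power(n_per_arm, p1, p2, alpha=0.05):
    """Approximate power actually achieved with n_per_arm participants per arm."""
    z_alpha = norm.ppf(1 - alpha / 2)
    se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_arm) ** 0.5
    return norm.cdf(abs(p1 - p2) / se - z_alpha)

# Illustrative example: detecting a 50% vs 60% success rate difference.
target = sample_size_two_proportions(0.50, 0.60)
print(f"Target per arm: {target}")                                              # ~385
print(f"Power at target:        {achieved_power(target, 0.50, 0.60):.2f}")       # ~0.80
print(f"Power at 60% of target: {achieved_power(int(target * 0.6), 0.50, 0.60):.2f}")  # ~0.58
```

In this made-up example, recruiting only 60% of the target drops the power from 80% to roughly 58% – so the chance of missing a real difference roughly doubles, from 20% to around 40%, which is exactly the type 2 error problem described above.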

Clearly, recruitment is an area that requires research, but because there is so much work to be done, we are at risk of being a bit everywhere (just to be clear, ‘being a bit everywhere’ is not the technical term for this…) when it comes to focussing and making substantial progress with improving the way we do research. Going through a formal prioritisation process for the squillions of research questions that surround the process of recruitment will enable researchers to coordinate the research that they’re doing, plan more effectively, and work together to ensure that we are answering the questions that are most important to the various stakeholder groups involved.

How did the prioritisation process work?

The process of prioritisation that enabled this project to go ahead was developed with the James Lind Alliance – the JLA works with clinicians, patients and carers to ensure that all voices are heard, and that prioritisation of research questions reflects the requirements of all of these groups. The James Lind Alliance works on the premise that:

  • addressing uncertainties about the effects of a treatment should become accepted as a routine part of clinical practice
  • patients, carers and clinicians should work together to agree which, among those uncertainties, matter most and deserve priority attention.

The prioritisation process begins with getting partners involved with the PRioRiTy project – this isn’t a project that can be done by one person! The stakeholders involved with this priority setting partnership were:

  • Members of the public who had been invited to participate in a randomised trial or had participated in Trial Steering Committees (TSCs). They could be individuals or representatives of a patient organisation;
  • Front line clinical and research staff who were or had been involved in recruitment to randomised trials (e.g. postdoctoral researchers, clinicians, nurses, midwives, allied health professionals);
  • People who had established expertise in designing, conducting, analysing and reporting randomised trials (e.g. Principal Investigators/Chief Investigators);
  • People who are familiar with the trial methodology research landscape (e.g. funders, programme managers, network coordinators).

Once relevant stakeholders were identified, an initial survey with just 5 questions (shown in Table 1 below, taken from the original paper) was developed and distributed to them.

Responses were collated, organised, coded and analysed in order to generate a full list of research questions. This was a massive part of the work; 1,880 questions came from the 790 respondents to the initial survey. The figure below shows the process of whittling down this huge pile of questions to a manageable – and useful – top 20.

As you can see, this was an iterative process involving lots of people, views, questions – and work! I’ll just make it clear here – I was involved in a small part of this process, and the team working on the project was large; as I said before, with projects like this it’s important to involve people from lots of different backgrounds and with various levels/areas of expertise. The team was led by Prof Declan Devane and Dr Patricia Healy, both from NUI Galway – they kept the rest of us on track!

What next?

In terms of next steps for the team involved in the PRioRiTy project, it’s really important that we work to disseminate our results; after all, if no one knows what the final list of prioritised questions is, then there was really no point in doing the project. So – with that in mind, here’s the final top 10!

To give these questions some context, I wanted to talk through a few of them and share my thoughts on what types of research may be required to answer them, and why they’re important. I’ll stick to the top 3 for this part:

Understanding how randomised trials can become part of routine care is, unsurprisingly, the top question from this entire project. Knowing how we can use clinical care pathways to ensure that patients are given the opportunity to take part in trials is a hugely important part of normalising trial recruitment, and spreading awareness of trials more generally. There is a tonne of research to be done in this area, and in my opinion this question will need a diverse range of research angles and methods in order to answer it fully.

This question is interesting – what information should trialists be giving to members of the public who are being invited to take part in trials? That seems like something we should have evidence for, but in actual fact we are working from hunches, experiences, and anecdote. I think this question will rightfully fuel a lot of research projects over the coming years; we need to be looking at what information potential participants want, as well as what they need from an ethical/regulatory standpoint. At the moment I get the impression that we’re being driven by ethics committees and regulators, and we’re often putting in a lot of information that participants don’t want/need/find useful, because we feel it’s better to give them everything rather than risk missing something out. I suspect that if we reduced the amount of information we provide, understanding of that information would increase, because participants would be able to focus on specific pieces of information more effectively. I say that because I know that if I get a huge leaflet, I’m much more likely to avoid the entire thing because it looks overwhelming, or I don’t think I have time to get through all the information in front of me.

This question is one that I’ve been asked, and have myself asked, numerous times over the course of my PhD. Public engagement and patient involvement are both areas of academic life that are getting increased focus; we know that involving patients and members of the public in our research can strengthen it and make the work we’re doing more relevant to the people that we’re doing it for – but could this involvement impact on recruitment rates too? I’m not sure, but I’m really interested to see the results of a few ongoing projects that are linked to this question – the PIRRIST study led by Dr Joanna Crocker is one I’ll be keeping an eye out for. The PIRRIST protocol was presented as a poster at a conference I went to in 2015; that information is published here if you’re interested in learning more.

Something to note

The appendix of the paper contains a full version of the table below, which provides details on the evidence that we already have available to help answer each of the top 10 questions. The top 3, which I’ve discussed above, have no evidence available – which really drives home the importance of a formal prioritisation process in highlighting where the gaps are in research evidence.

There is certainly a lot more work to be done on how we recruit participants into randomised trials – which is good for me as I want to stay in this field of research after my PhD, and hopefully get some of these questions answered over the course of my career!

Recruiting for One-To-One Interviews: Pitfalls & Solutions

Many PhD students require participants to enable them to conduct their research, myself included. This can throw up a whole host of hurdles, barriers and stressful days. Without participants your entire project is at risk – that rarely happens, but recruiting participants more slowly than expected, meaning your project doesn’t fit your planned timeline, is much more common. My PhD project is split into 4 sections, and the 2 most substantial parts meant I was recruiting participants for one-to-one qualitative interviews, and for user-testing. For me, the recruitment process was relatively straightforward, but I know for many PhD students that is not the case. Read on for some tips and tricks that I used to ensure that poor participant recruitment didn’t break my study.

Potential pitfall: Rushing your ethics application because you want to start the approvals process as quickly as possible, meaning that you don’t spend as much time or effort filling out the recruitment section as you really should, leading to amendments further down the line.

Solution: I will repeat this to every PhD student that’s recruiting participants that I ever meet – DO LOADS OF PLANNING, DON’T RUSH IT, GET IT RIGHT THE FIRST TIME (yes, I am shouting, this is important). The ethics application is often something that people view as a hoop they need to jump through before they can start their research; it’s really not just that. It’s a key part of the project, and it’s a brilliant way for you to focus your thoughts on what you’re going to be doing, and more importantly, how.

Spend time filling out the recruitment section – think about where you might find your participants, whether you’ll need to use social media, cold calling, or an email list; think about how you’ll access their information, and who will then approach these potential participants, and how. Talk to colleagues who have previously done studies in similar populations. Think of a back-up. Think of a back-up to your back-up. Build in some level of flexibility that will allow you to have a Plan B (and potentially C and D) when recruitment inevitably doesn’t go as well as you’d hoped. Taking the extra time to think through your recruitment strategy now will save you from having to go back to ethics with amendment(s) later on.

Potential pitfall: You send out invites to your study, your inbox goes crazy and in the end you are left with very few people who are eligible to take part, meaning you’ve wasted time and resources, and you’re still struggling to recruit.

Solution: This solution fits nicely into my earlier point of: DO LOADS OF PLANNING, DON’T RUSH IT, GET IT RIGHT FIRST TIME (spoiler alert, planning can save a tonne of stress, reduce your caffeine intake, and make you a much nicer person to be around). So, you’ve started recruitment and you’ve sent out what feels like 4 squillion emails/leaflets etc. You get lots of responses, relatively quickly, and you’re suddenly feeling pretty smug. Then you start reading the responses and you begin to realise that hardly any of these people are eligible for your study.

This happens when you haven’t made your inclusion/exclusion criteria clear, and/or you haven’t targeted your recruitment strategy well enough. This is a hard balance to strike, but an important one.

  1. Ensure that your information sheets/leaflets are giving the right information in a clear and effective way, by running them by a representative from your target participant group before you send them out. You want to make sure that what you’re saying makes sense to the reader; often we are too close to our own research, and it takes someone else to point out when something isn’t clear.
  2. Put some effort into tailoring your recruitment strategies for this specific study – if you’re recruiting GPs into an interview study, it’s probably not a good idea to be standing in the middle of your local Tesco with a banner; it’d be much more efficient to approach your local Primary Care Research Network and ask them for help. Similarly, if you’re looking for members of the public who smoke, it might be a good idea to go with the banner idea. It all depends on your target population; the more homework you do on where these people can be found, the less time you’ll waste.

Potential pitfall: You need to recruit an additional 5 participants, you’re convinced it won’t take long, so you’ll just get this other work finished first. 3 months later every potentially eligible participant appears to have gone into hibernation and no one is answering your calls.

Right: Number of potential participants before you announce the start of recruitment. Left: Number of potential participants once you actually start recruiting. (Or Obama vs Trump inauguration crowds via Vox.com)

Solution: You guessed it, planning can save this one too. Always, always, build in more time for recruitment than you think you’ll need. You only need 10 participants, that should take a month, right? NOPE. Give yourself 2 months, 3 if you can.

Once you have all of your approvals in place, begin booking in appointments as quickly as you can – that way you have an idea of how things are going, how much time you have, and when you might need to start looking at Plan B (…and C and D). If you know of people that you are going to contact about taking part in your study, get their contact details put together in a list so that the minute your approvals are through, you can start. Do not leave recruitment to chance/hope/praying to whichever deity you believe in.

When I was first recruiting for my semi-structured interview study, I had a 3-month window that I planned to use for doing the interviews. That was just the 3 months of interviewing; it required a significant amount of work before that window began for me to get times, dates and locations arranged with people from South Africa, Holland, Italy, the UK and Canada. Lots of these interviews were done via Skype or over the telephone, which helped, but that still meant I needed to ensure I had private rooms booked to do the interviews in, and that the telephones in those rooms were enabled for international calling (lots of ‘phones at universities won’t do this by default and you’ll need to arrange it ahead of time). Those 3 months of interviews went pretty seamlessly in the end; I had a few no-shows, but that’s to be expected – I managed to interview 23 participants in that 3-month period. Planning makes all the difference, and for me that meant getting to grips with various time zones, ensuring that I could do a series of face-to-face interviews over the course of one or two days, and always carrying a spare audio recorder (batteries will run out, your charger will break or get lost, and you cannot record audio using an app on your ‘phone because the ethics panel would have something to say about that).

Recruiting participants for your studies is no easy task; with PhD projects in particular, we often cannot offer incentives, so you’re relying on altruism and interest in your work. Do your best to plan in advance, and make sure to build in flexibility for when things inevitably don’t go according to plan. For my study I used existing contacts, Twitter and LinkedIn; each strategy did help me to connect with participants, but the process was work-intensive and required much more time than I initially thought.