Recruiting for One-To-One Interviews: Pitfalls & Solutions

Many PhD students, myself included, need participants in order to conduct their research. This can throw up a whole host of hurdles, barriers and stressful days. Without participants, your entire project is at risk; that rarely happens, but recruiting participants more slowly than expected can mean your project doesn’t fit your planned timeline, and that is much more common. My PhD project is split into 4 sections, and for the 2 most substantial parts I was recruiting participants for one-to-one qualitative interviews and user-testing. For me, the recruitment process was relatively straightforward, but I know for many PhD students that is not the case. Read on for some tips and tricks that I used to ensure that poor participant recruitment didn’t break my study.

Potential pitfall: Rushing your ethics application because you want to start the approvals process as quickly as possible, meaning that you don’t spend as much time or effort filling out the recruitment section as you really should, leading to amendments further down the line.

Solution: I will repeat this to every PhD student recruiting participants that I ever meet – DO LOADS OF PLANNING, DON’T RUSH IT, GET IT RIGHT THE FIRST TIME (yes, I am shouting; this is important). The ethics application is often viewed as a hoop to jump through before you can start your research; it’s really not just that. It’s a key part of the project, and it’s a brilliant way for you to focus your thoughts on what you’re going to be doing, and more importantly, how.

Spend time filling out the recruitment section – think about where you might find your participants; whether you’ll need to use social media, cold calling, or an email list; how you’ll access their information; and who will then approach these potential participants, and how. Talk to colleagues who have previously done studies in similar populations. Think of a backup. Think of a backup to your backup. Build in some level of flexibility that will allow you to have a Plan B (and potentially C and D) when recruitment inevitably doesn’t go as well as you’d hoped. Building in flexibility will prevent you from having to go back to ethics with an amendment later on; taking the extra time to think about your recruitment strategy now will save time later if it means you don’t need to submit amendment(s).

Potential pitfall: You send out invites to your study, your inbox goes crazy and in the end you are left with very few people who are eligible to take part, meaning you’ve wasted time and resources, and you’re still struggling to recruit.

Solution: This solution fits nicely into my earlier point of: DO LOADS OF PLANNING, DON’T RUSH IT, GET IT RIGHT FIRST TIME (spoiler alert, planning can save a tonne of stress, reduce your caffeine intake, and make you a much nicer person to be around). So, you’ve started recruitment and you’ve sent out what feels like 4 squillion emails/leaflets etc. You get lots of responses, relatively quickly, and you’re suddenly feeling pretty smug. Then you start reading the responses and you begin to realise that hardly any of these people are eligible for your study.

This happens when you haven’t made your inclusion/exclusion criteria clear, and/or you haven’t targeted your recruitment strategy well enough. This is a hard balance to strike, but an important one.

  1. Ensure that your information sheets/leaflets are giving the right information in a clear and effective way, by running them by a representative from your target participant group before you send them out. You want to make sure that what you’re saying makes sense to the reader; often we are too close to our own research, and it takes someone else to point out when something isn’t clear.
  2. Put some effort into tailoring your recruitment strategies to this specific study – if you’re recruiting GPs into an interview study, it’s probably not a good idea to be standing in the middle of your local Tesco with a banner; it’d be much more efficient to approach your local Primary Care Research Network and ask them for help. Similarly, if you’re looking for members of the public who smoke, the banner idea might actually work. It all depends on your target population; the more homework you do on where these people can be found, the less time you’ll waste.

Potential pitfall: You need to recruit an additional 5 participants, you’re convinced it won’t take long, so you’ll just get this other work finished first. 3 months later every potentially eligible participant appears to have gone into hibernation and no one is answering your calls.


Solution: You guessed it, planning can save this one too. Always, always, build in more time for recruitment than you think you’ll need. You only need 10 participants, that should take a month, right? NOPE. Give yourself 2 months, 3 if you can.

Once you have all of your approvals in place, begin booking in appointments as quickly as you can – that way you have an idea of how things are going, how much time you have, and when you might need to start looking at Plan B (…and C and D). If you know of people that you are going to contact about taking part in your study, get their contact details put together in a list so that the minute your approvals are through, you can start. Do not leave recruitment to chance/hope/praying to whichever deity you believe in.

When I was first recruiting for my semi-structured interview study, I had a 3-month window that I planned to use for doing the interviews. That was just the 3 months of interviewing; it required a significant amount of work before that window began for me to get times, dates and locations arranged with people in South Africa, Holland, Italy, the UK, Canada.. Lots of these interviews were done via Skype or over the telephone, which helped, but that still meant I needed to ensure I had private rooms booked to do the interviews in, and that the telephones in those rooms were enabled for international calling (lots of phones at universities won’t do this by default and you’ll need to arrange it ahead of time). Those 3 months of interviews went pretty seamlessly in the end; I had a few no-shows, but that’s to be expected – I managed to interview 23 participants in that 3-month period. Planning makes all the difference, and for me that meant getting to grips with various time zones, ensuring that I could do a series of face-to-face interviews over the course of one or two days, and always carrying a spare audio recorder (batteries will run out, your charger will break or get lost, and you cannot record audio using an app on your phone because the ethics panel would have something to say about that).

Recruiting participants for your studies is no easy task; with PhD projects in particular, we often cannot offer incentives, and we are relying on altruism and interest in our work. Do your best to plan in advance, and make sure to build in flexibility for when things inevitably don’t go according to plan. For my study I used existing contacts, Twitter and LinkedIn; each strategy helped me to connect with participants, but the process was work-intensive and required much more time than I initially expected.

Coding Qualitative Data and Realising Just How Northern I Sound

Qualitative research is something I was completely new to at the start of my PhD. Before the PhD I was used to collecting data in the lab, and then making graphs and running statistical analyses on it to see if I could find that elusive significant P value. Now that I’ve got to grips with this new method of collecting data, I much prefer it to quantitative work. Selfishly, it gives me a licence to be nosey; it encourages me to ask more and more questions instead of tracking data in the hope of finding the answer. I know a lot of the science world is quantitative, so I want to use this blog as a platform to explain a bit more about qualitative research: how I deal with it and any tips I’ve picked up along the way. An earlier post, ‘What is Qualitative Research?’, will shed light on these methods for those who aren’t sure what I’m talking about; I recommend you read that and then come back to this post.

Qualitative research is used to gain an understanding of underlying reasons, opinions, motivations and experiences. It can provide context to quantitative data, it can be used to provide insights into a problem, and it can be used to help develop ideas for future research. Qualitative methods, just like quantitative methods, are varied – there are different types of interviews, document analysis, even analysis of social media conversations from places like Twitter.

I’m using semi-structured interviews to explore trial recruitment – semi-structured meaning that I have a rough structure for the interview in my head, the questions I might ask and the topics I want to cover, but if I get into the interview and the participant says something off topic, I can go ahead and explore that too. I’m interviewing two groups of people: people who actively recruit participants into trials, and people who design the recruitment strategies for those recruiters to implement.

Getting started

I interviewed 23 participants, and the interviews averaged about an hour in length. I audio-recorded all of those interviews, and paid for them to be transcribed so that I could analyse them. (Side note – if you have the budget to pay for this service, please do. If I’d had to transcribe all of my interviews myself, I don’t think there’d have been any chance of me finishing this PhD!) That resulted in a huge number of words. Naturally, I left all of those words alone for a few weeks because they were a bit overwhelming and I had other projects I could work on, but eventually I had to get started.


My first task was to familiarise myself with the data – basically get to grips with it, understand what’s going on, and what I could pick up from each interview; this involves a lot of listening to your audio (and cringing at your own accent), a lot of reading through your transcripts (again, cringing at how many times you’ve said ‘er…’ or ‘like’), and an absolute tonne of highlighting & note taking. I also used this as an opportunity to check the accuracy of my transcripts.

After that I got started with coding. Coding is the process of putting your data (words/phrases/sentences) into categories so that it’s grouped together in themes. For this I needed to have a thematic framework – an outline of the categories (called themes) that my data would slot into – the familiarisation really helped with this part. It meant that without looking at the transcripts I could already name broad themes that my data fitted into. After a bit more of a detailed look at transcripts I had a draft framework which I went over with my supervisor. Next up, applying that framework to all of the transcripts – i.e. coding the data.

What I use to code

One thing I’ve learned is that everyone does qualitative data analysis slightly differently; it’s a very personal process and you need to try a few methods before you settle on your most effective way of working. I went with NVivo. NVivo is a software package that allows you to look at your transcripts, create themes, and then literally click and drag chunks of text into those themes. I found this to be a really good way for me to work; it meant that I could see where everything was, it was all neat and compact, and I couldn’t lose data. Other people might code by hand – printing out transcripts and writing on the page, highlighting sections, adding post-it notes etc. That would have fried my head, and I’d probably have lost pieces of paper/post-it notes somewhere between uni and my flat, so NVivo was my best option.

How I got to grips with coding

Coding is not an easy thing – at first it felt ‘too simple’, and that’s because it was. I wasn’t thinking about it enough, I was just happily clicking and dragging sentences into themes and wondering why people had told me this process was going to take ages. After a few hours of doing this I knew I was missing something. I printed out the themes I’d added data to, and it was pretty clear that I just wasn’t thinking about the data enough. I gave myself a few hours away from it, and came back. Something had clicked and it was going better. Eventually I’d coded all 23 interview transcripts – this literally took weeks – and I had my data in a manageable format.

This process was super helpful because it allowed me to condense my data. In an hour-long interview not every word is going to be useful; through coding I cut out all the waffle and made a number of coherent piles of data that I could then analyse further.

Next steps

After coding comes the process of really understanding your themes; finding where and why different stakeholders have different views (or not), understanding how themes link together, getting to grips with how experiences differ between people based on their role/age/gender etc etc. There are so many layers to this process that you don’t think about if you’ve never analysed qualitative data before.

Keep an eye out for an upcoming blog post that’ll give you an insight into how the process of writing up my findings is going..

What is Qualitative Research?

In the Science world, data is often thought of as numbers. To analyse it you do calculations, find p values and produce graphs – quantitative research is a process that’s covered in every science degree. Qualitative research isn’t like that.

I did an undergraduate degree in a Medical Sciences subject, I was taught how to use statistical software packages and how to get programs like Prism to create graphs for me. I was always told that this was quantitative research, and that qualitative research was used in other fields of research. Up until starting my PhD I had no experience of qualitative research whatsoever, but it’s an important method that we can use in the world of scientific research – whether in parallel with quantitative data, or by itself.

I think it’s important to try to shed some light on using qualitative research, particularly for the benefit of other scientists that don’t use it currently, so this week’s blog post is a brief intro to what qualitative research is. Over the next few weeks I’ll be covering further qualitative topics, such as how I do data analysis, what it means to code data, and how I go about that. Hopefully this will be interesting both to those that do qualitative research already, and those that don’t.

So, what is qualitative research?

Qualitative research is used to gain an understanding of underlying reasons, opinions, motivations and experiences. It can provide context to quantitative data, it can be used to provide insights into a problem, and it can be used to help develop ideas for future research. Qualitative methods, just like quantitative methods, are varied – there are different types of interviews, document analysis, even analysis of social media conversations from places like Twitter.

With this type of research you’re not trying to find generalisable results; you’re trying to find out as much about your participant group as possible. The number of times participants reference something doesn’t matter; you work to include every opinion/perception/experience that is relevant to the topic you’re researching.

Credit: https://www.slideshare.net/avaniendra/qualitative-research-68787606

I wish I’d been exposed to qualitative research methods throughout the course of my undergraduate degree; even if you’re doing lab-based science, understanding the different experiences and thoughts of the different stakeholder groups involved is so valuable.

For me, qualitative research has been one of the most enjoyable parts of my PhD project so far. It has helped me to understand the nuances of the problem with recruitment to trials, it’s taught me that every person’s experience of trials recruitment will be slightly different – though there are themes threading throughout each of those experiences.

Pop back next week for an insight into what goes into qualitative data analysis, and confirmation that listening back to my own voice has been the most cringe-worthy experience of my PhD to date.

Doing a PhD in Health Services Research


As last week’s post explained, my PhD is in the field of Health Services Research and looks at the process of participant recruitment to clinical trials. My undergraduate degree was based in lab science, and as far as I know I’m the only person from my graduating cohort to leave the lab but remain in academic science. I tend to get a lot of questions about what I do now that I don’t work in a lab anymore, so this week I wanted to take some time to explain what it’s like to do a PhD in this field; the questions I get and how it’s changed the way I look at science more generally.

Why did you decide to leave ‘proper’ science?
This is one of the best things to ask me if you want to see me bite my tongue so much that it bleeds. I’m still struggling to work out whether ‘proper’ science is intended to suggest that health services research isn’t worthwhile, or if my questioner simply isn’t aware that science can, and does, take place outside of a laboratory. I’m hoping it’s the latter.

I decided to leave lab science because I didn’t feel like the work I was doing was close enough to patients. To be clear, I’m not saying lab science is not a useful or worthwhile career path, just that I work best when I’m not too many steps away from the end result.

How do interviews help your work, surely you want data and evidence?
Yes, this a real question that someone asked me a few months ago.

To explain a bit of the background – undergraduate lab science degrees don’t pay much attention to qualitative research whatsoever, or at least mine didn’t. I think in first year the words ‘qualitative data’ were mentioned once, and only when explaining that everything we would do going forward would involve the opposite. The PhD very quickly taught me that evidence comes in all shapes and sizes, and interviewing people to find out about their experiences and views on specific topics is just as useful as percentages and p values – it just depends on what you want to know.

We don’t know lots of things, and the NHS isn’t always right
I’m showing my naivety here so bear with me. Before starting PhD study, I thought that if something – whether that’s a type of surgery or a new drug – is put into practice within the NHS, then there was good quality evidence to support that decision. Turns out, I was wrong. I won’t say much more on this – Margaret McCartney’s books are a good starting point if you want to find out more.

Science in the media
The biggest change I’ve noticed in myself since starting the PhD is the way I consume media reporting of scientific stories. Previously I would be cautious of ‘bad science’, understanding that some news outlets will happily sensationalise content to improve readership figures. Now though, I find myself reading stories and picking holes in them as I am reading – thinking ‘well that’s not true because…’ or ‘the data you’ve provided does not show that result…’. I’ve stopped reading health/medicine stories on certain websites, and now stick to a few that I feel comfortable relying on. Vox and The Conversation are now my go-to news sites, and I try to follow specific reporters on Twitter too. I’d recommend both Julia Belluz and Kathryn Schulz, I saw Julia give a talk at last year’s Evidence Live conference and it was clear she really cares about accurate reporting – you can see her talk on YouTube here.