Inspiring People: Margaret McCartney


A few months ago I decided to start a series of blog posts called ‘Inspiring People’. The idea was triggered by the death of Doug Altman; I wanted to tell you about the people that inspire me. Some of them will be researchers, some clinicians, some artists, some patients, and everything in between – hopefully the blog posts will give you an idea of how I approach the research that I do, where I get inspiration from, and who I respect and admire. You might even find a few new sources of inspiration for yourself too!

Today’s inspiring person is Dr Margaret McCartney; she’s a GP based in Glasgow, former columnist for the British Medical Journal, broadcaster on Radio 4’s Inside Health programme, and a fierce advocate for the NHS. She’s also the author of various books focussing on patient health and the NHS – including The Patient Paradox, which I’ve read and recommended here.

Why does Margaret McCartney inspire me?

In the post about Doug Altman I talked about the first conference presentation I gave, and how Doug’s laughter and encouragement from the audience settled my nerves. At that same conference, I saw Margaret McCartney speak for the first time. Her presentation was absolutely brilliant. She talked about death, about how we as a society need to accept the inevitability of death, and how we should be working to make death a more dignified process rather than working to keep people alive at any cost. It’s weird to think that listening to Margaret’s talk caused me to really think about death for the first time; we will all die, we have all known someone who has died, and yet we avoid the subject. I left that talk feeling inspired, humbled, and ready to buy every book Margaret has ever written.

Aside from the fact that she talks about really important, and often taboo, subjects, she talks about them in an accessible way – it’s a no-holds-barred approach, provocative without being actively confrontational. Listening to her, you can tell that she doesn’t take any shit, but she is so honest, intelligent and eloquent that it’s difficult to pick any holes in her argument.

The video below is one of Margaret’s fantastic talks – this one from 2014 at the Centre for Evidence-Based Medicine at the University of Oxford. In this talk she discusses screening tests, and why having one should not be something that patients go into uninformed – screening tests have implications, so the decision to have one (or not) needs thought and consideration. People need to have information available to them in order to make the decisions that are right for them.

Find out more

If you’d like to find out more about Margaret McCartney’s work, I’d recommend starting with the sources below:

Margaret McCartney’s blog, her Twitter, and her books

Articles from the BMJ:
Medicine must do better on gender
A new era of consumerist private GP services
If you don’t pay for it you are the product
Can we now talk openly about the risks of screening?
If screening is worth doing, it’s worth doing well
The NHS shouldn’t have to pick up the bill for private screening tests
Hiding and seeking doctors’ conflicts of interest
We need another vote

If you only have time to read one thing, make it this:
A summary of four and a half years of columns in one column

As a researcher, I appreciate her brutal honesty; as a patient, I appreciate her ability to communicate; and as a taxpayer, I appreciate her constant push for transparency in the way that our healthcare system is funded, skewed and tainted by industry influence and political games.

Doing a PhD in Health Services Research


As last week’s post explained, my PhD is in the field of Health Services Research and looks at the process of participant recruitment to clinical trials. My undergraduate degree was based in lab science, and as far as I know I’m the only person from my graduating cohort to leave the lab but remain in academic science. I tend to get a lot of questions about what I do now that I don’t work in a lab anymore, so this week I wanted to take some time to explain what it’s like to do a PhD in this field: the questions I get, and how it’s changed the way I look at science more generally.

Why did you decide to leave ‘proper’ science?
This is one of the best things to ask me if you want to see me bite my tongue so much that it bleeds. I’m still struggling to work out whether ‘proper’ science is intended to suggest that health services research isn’t worthwhile, or if my questioner simply isn’t aware that science can, and does, take place outside of a laboratory. I’m hoping it’s the latter.

I decided to leave lab science because I didn’t feel like the work I was doing was close enough to patients. To be clear, I’m not saying lab science is not a useful or worthwhile career path, just that I work best when I’m not too many steps away from the end result.

How do interviews help your work? Surely you want data and evidence?
Yes, this is a real question that someone asked me a few months ago.

To explain a bit of the background – undergraduate lab science degrees don’t pay much attention to qualitative research whatsoever, or at least mine didn’t. I think in first year the words ‘qualitative data’ were mentioned once, and only when explaining that everything we would do going forward would involve the opposite. The PhD very quickly taught me that evidence comes in all shapes and sizes, and interviewing people to find out about their experiences and views on specific topics is just as useful as percentages and p values – it just depends on what you want to know.

We don’t know lots of things, and the NHS isn’t always right
I’m showing my naivety here, so bear with me. Before starting PhD study, I thought that if something – whether that’s a type of surgery or a new drug – was put into practice within the NHS, then there was good quality evidence to support that decision. Turns out, I was wrong. I won’t say much more on this – Margaret McCartney’s books are a good starting point if you want to find out more.

Science in the media
The biggest change I’ve noticed in myself since starting the PhD is the way I consume media reporting of scientific stories. Previously I would be cautious of ‘bad science’, understanding that some news outlets will happily sensationalise content to improve readership figures. Now though, I find myself picking holes in stories as I read them – thinking ‘well that’s not true because…’ or ‘the data you’ve provided does not show that result…’. I’ve stopped reading health/medicine stories on certain websites, and now stick to a few that I feel comfortable relying on. Vox and The Conversation are now my go-to news sites, and I try to follow specific reporters on Twitter too. I’d recommend both Julia Belluz and Kathryn Schulz; I saw Julia give a talk at last year’s Evidence Live conference and it was clear she really cares about accurate reporting – you can see her talk on YouTube here.

Healthcare’s Dirty Little Secret: Results From Many Clinical Trials Remain Unreliable

I wrote this article along with my PhD supervisors, Prof Shaun Treweek and Dr Katie Gillies, at the University of Aberdeen. We originally published this work on The Conversation in October 2016, and I’ve republished it here under a Creative Commons 4.0 licence as I think it gives a good background to the topics and issues that my PhD is based on.


Clinical trials have been the gold standard of scientific testing ever since the Scottish naval surgeon Dr James Lind conducted the first while trying to conquer scurvy in 1747. They attract tens of billions of dollars of annual investment and researchers have published almost a million trials to date according to the most complete register, with 25,000 more each year.

Clinical trials break down into two categories: trials to ensure a treatment is fit for human use and trials to compare different existing treatments to find the most effective. The first category is funded by medical companies and mainly happens in private laboratories.

The second category is at least as important, routinely informing decisions by governments, healthcare providers and patients everywhere. It tends to take place in universities. The outlay is smaller, but hardly pocket change. For example, the National Institute for Health Research, which coordinates and funds NHS research in England, spent £74m on trials in 2014/15 alone.

Yet there is a big problem with these publicly funded trials that few will be aware of: a substantial number, perhaps almost half, produce results that are statistically uncertain. If that sounds shocking, it should do. A large amount of information about the effectiveness of treatments could be incorrect. How can this be right and what are we doing about it?

The participation problem

Clinical trials examine the effects of a drug or treatment on a suitable sample of people over an appropriate time. These effects are compared with those in a second set of people – the “control group” – who think they are receiving the same treatment but are usually taking a placebo or an alternative treatment. Participants are assigned to groups at random, hence we talk about randomised controlled trials.
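To make the mechanics concrete, here’s a minimal sketch of simple random allocation to two arms – the function name and participant IDs are made up for illustration, and real trials use concealed, often blocked or stratified, randomisation run by a trials unit:

```python
import random

def allocate(participant_ids, seed=2016):
    """Randomly assign each participant to 'treatment' or 'control'."""
    rng = random.Random(seed)  # fixed seed only so the example is reproducible
    return {pid: rng.choice(["treatment", "control"]) for pid in participant_ids}

# Hypothetical participant IDs, purely for illustration
print(allocate(["P001", "P002", "P003", "P004"]))
```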

If there are too few participants in a trial, researchers may not be able to declare a result with certainty even if a difference is detected. Before a trial begins, it is their job to calculate the appropriate sample size using data on the minimum clinically important difference and the variation in the outcome being measured in the population being studied. They publish this along with the trial results so that statisticians can check their calculations.
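To give a feel for what that calculation involves, here’s a sketch of the standard sample-size formula for a two-arm trial comparing means – all the numbers are invented for the example, not taken from any real trial:

```python
import math
from scipy.stats import norm

def sample_size_per_arm(mcid, sd, alpha=0.05, power=0.80):
    """Participants needed per arm to detect the minimum clinically
    important difference (mcid), given the outcome's standard deviation (sd)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # corresponds to the desired power
    n = 2 * ((z_alpha + z_beta) ** 2) * (sd ** 2) / (mcid ** 2)
    return math.ceil(n)

# Illustrative numbers: a 5-point difference matters clinically, SD is 10
print(sample_size_per_arm(mcid=5, sd=10))  # 63 per arm, so 126 in total
```

One consequence of the formula: halving the difference you want to detect quadruples the required sample size, which is why trials chasing small but important effects need thousands of participants.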

Early-stage trials have fewer recruitment problems. Very early studies involve animals and later stages pay people well to take part and don’t need large numbers. For trials into the effectiveness of treatments, it’s more difficult both to recruit and retain people. You need many more of them and they usually have to commit to longer periods. It would be a bad use of public money to pay so many people large sums, not to mention the ethical questions around coercion.

To give one example, the Add-Aspirin trial was launched earlier this year in the UK to investigate whether aspirin can stop certain common cancers from returning after treatment. It is seeking 11,000 patients from the UK and India. Supposing it only recruits 8,000, the findings might end up being wrong. The trouble is that some of these studies are still treated as definitive despite there being too few participants to be that certain.


One large study looked at trials between 1994 and 2002 funded by two of the UK’s largest funding bodies and found that fewer than a third (31%) recruited the numbers they were seeking. Slightly over half (53%) were given an extension of time or money, but still 80% never hit their target. In a follow-up of the same two funders’ activities between 2002 and 2008, 55% of the trials recruited to target. The remainder were given extensions, but recruitment remained inadequate for about half.

The improvement between these studies is probably due to the UK’s Clinical Trials Units and research networks, which were introduced to improve overall trial quality by providing expertise. Even so, almost half of UK trials still appear to struggle with recruitment. Worse, the UK is a world leader in trial expertise. Elsewhere the chances of finding trial teams not following best practice are much higher.

The way forward

There is remarkably little evidence about how to do recruitment well. The only practical intervention with compelling evidence of benefit comes from a forthcoming paper showing that telephoning people who don’t respond to postal invitations leads to about a 6% increase in recruitment.

A couple of other interventions work but have substantial downsides, such as letting recruits know whether they’re in the control group or the main test group. Since this means dispensing with the whole idea of blind testing, a cornerstone of most clinical trials, it is arguably not worth it.

Many researchers believe the solution is to embed recruitment studies within trials to improve how we identify, approach and discuss participation with people. But with funding bodies already stretched, funders focus on projects whose results could quickly be integrated into clinical care. Studying recruitment methodology may have huge potential, but it is one step removed from clinical care, so it doesn’t fall into that category.

Others are working on projects to share evidence about how to recruit more effectively with trial teams more widely. For example, we are working with colleagues in Ireland and elsewhere to link research into what causes recruitment problems to new interventions designed to help.

Meanwhile, a team at the University of Bristol has developed an approach that turned recruitment completely around in some trials by basically talking to research teams to figure out potential problems. This is extremely promising but would require a sea change in researcher practice to improve results across the board.

And here we hit the underlying problem: solving recruitment doesn’t seem to be a high priority in policy terms. The UK is at the vanguard but it is slow progress. We would probably do more to improve health by funding no new treatment evaluations for a year and putting all the funding into methods research instead. Until we get to grips with this problem, we can’t be confident about much of the data that researchers are giving us. The sooner that moves to the top of the agenda, the better.