Publication Explainer: Trial Forge Guidance 1: what is a Study Within A Trial (SWAT)?

This is the third in my ‘Publication Explainer’ series – read the first and second here and here. As I have said previously, these explainers are a place for me to answer some of the most common questions I’ve been asked by the people around me (usually my boyfriend, friends, or colleagues who haven’t been involved with the project).

This post focusses on the paper below: Trial Forge Guidance 1: what is a Study Within A Trial (SWAT)? Read the full paper here.

What is a SWAT?

A SWAT is a Study Within A Trial – i.e. a self-contained research study that takes place within a clinical trial. SWATs usually focus on a methodological aspect of the host trial, for example evaluating: an intervention designed to improve the recruitment of participants to trials; an intervention designed to keep participants engaged with the trial (i.e. retention of participants); or an intervention designed to find out more about the way that data are collected (e.g. online versus paper).

Why are you trying to encourage people to do SWATs?

It is important that we encourage people to do SWATs because individual SWATs are so often underpowered – we need many of them, across many host trials, to get reliable answers. Statisticians can calculate the sample size needed for a trial’s results to show a real difference between two interventions; if we hit that target sample size (i.e. recruit enough participants) then the result is less likely to be down to pure chance. As sample size calculations are done for the host trial, and not the SWAT, it’s likely that the SWAT will be ‘underpowered’ – meaning that the effect we see in the results may not be a real effect; it could be down to chance. That’s ok though, because SWATs are designed so that their data can be pooled with data from the same SWAT run in other host trials.
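For the curious, here’s a rough sketch of what a sample size calculation actually involves – this is my own illustration in Python, not something from the paper, using the standard formula for comparing two proportions (say, the proportion of participants retained with and without a retention intervention):

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Participants needed per arm to detect a difference between two
    proportions at the given significance level and power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a 10-point improvement (60% vs 70%) needs far fewer
# people than detecting a 5-point improvement (60% vs 65%):
print(sample_size_per_arm(0.60, 0.70))  # 356 per arm
print(sample_size_per_arm(0.60, 0.65))  # 1471 per arm
```

The smaller the effect you want to detect, the more participants you need – and because the methodological effects SWATs look for tend to be small, a single host trial rarely provides enough participants on its own, hence the pooling.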

What are you aiming to do in this paper?

This paper is the result of a huge amount of discussion, much of which started at a face-to-face event held in Aberdeen last year. The group of authors on this paper is pretty big, and that reflects everyone who took part in that event and the discussions that came after it. As a group, we are very conscious that SWATs are one of the most obvious (and arguably easiest) ways for us to improve the way that trials are designed and conducted, so it’s important that we encourage people to do them. It is not realistic to think that trial methodologists can do all of the SWATs that we need; there just aren’t enough of us, and we need trialists to help us. By writing and publishing this piece of guidance, we aimed to produce a one-stop paper where people can easily find out what a SWAT is.

Within the last few days, we’ve submitted ‘Trial Forge Guidance 2: How to decide if a further Study Within A Trial (SWAT) is needed’ to the same journal, Trials. Trials is currently taking part in a pilot along with a number of other journals under the BioMed Central umbrella: when authors submit their papers for publication, they have the option of publishing a pre-print of their work. This pre-print edition is published online within about a week, meaning that the peer review process can run alongside while the research is disseminated much more quickly. Once that pre-print is available, I’ll share it on the blog so you can read that too 🙂

8 Reasons You Should Take Part in a Clinical Trial

I originally wrote this post for the What Culture website when they were first launching the Science section of the website, but I wanted to post it here so that I have it on my own blog too.


Clinical trial participation – probably the easiest way of changing the world.

Clinical trials are a critical part of scientific research; they allow us to make sure new products and devices to manage, prevent, treat or detect disease are beneficial and safe for human use.

Thousands of clinical trials are completed every year in countries all around the world. The results of these trials allow governments to make decisions on health budgets, and doctors to make decisions on which drug or device is best for their patients. Patients can also use the results of clinical trials to make choices about their own healthcare plan. Trials may test drugs or combinations of drugs, surgical procedures or devices, ways to screen patients for diagnosis, and care procedures. Each and every clinical trial requires human participants in order to test these new medicines and procedures, but it’s very difficult to find people to sign up. Trials can be abandoned if not enough people sign up to participate, and if that happens, the answers to the research question the trial aimed to answer will remain a mystery.

Trials are hugely important to human health and disease; without them we would be unable to move science forward, and ultimately we would be unable to save lives. Why should you be the one to sign up for a trial, though? Is there any way you can benefit from taking part in a clinical trial? Read on to find out my top 8 reasons to say ‘yes’ to trial participation!

1. It’s a brilliant excuse

Have you had a really busy week at work? Don’t fancy that big night out you’ve got planned and need a decent excuse so your pals will get off your back? They can’t exactly try and twist your arm if you declare you must remain sofa-bound because science said so.

Try, “I’m taking part in a potentially world-changing clinical trial and I must refrain from intense movement (e.g. throwing your usual wild shapes on the dancefloor) and drinking alcohol in excess (e.g. the inevitable 3am jagers you’re known for).”

Other excellent uses for taking part in a trial as an excuse include:

  • Getting out of jobs your partner’s been nagging you about for months (No it’s definitely not ok for you to be doing DIY or unblocking the drains or really anything – much too strenuous)
  • Doing extra stuff outside of work (You can’t possibly stay late, you have a clinic visit to attend)
  • Jury duty (You’re trying to cure cancer and they want you to sit and listen to a minor theft case for 4 days? Nae chance)

2. You can make money!

Each clinical trial is different, and your level of involvement will depend on the type of study, what disease the researcher is working on, and the type of intervention you receive – for example, surgical procedures will take longer than giving you a new type of pill to swallow.

Some trials require very little input from you; you may need to keep a food diary or pop in to see a nurse once every few months. For trials like this, where you’re not inconvenienced too much, you might get a little treat – a notebook, or a few pounds to get yourself a coffee on the way out of the hospital.

For other trials, though, participants are required to be much more involved; these more intense types of trial can require you to stay in hospital for a few days at a time, attend multiple clinic visits, or change the way you live day-to-day. These types of study often pay a higher sum of money, as researchers realise you may need to take time off from work or university. These high-paying trials are very popular with unemployed people and students looking to make some extra money.

I will say however, taking part in a trial should not be a decision you take lightly – money is a benefit, not a motivator!

3. You’ll help researchers sleep at night

In practical terms, not recruiting enough participants is a Very Bad thing for science. In the very worst cases trials can be abandoned if enough people don’t sign up to participate, and if that happens then answers to the research question the trial aimed to answer will remain a mystery.

Thankfully trial abandonment is rare. In more common cases though, researchers manage to recruit between 60 and 80% of the people they’d hoped to – you’re thinking that’s not so bad, right? It’s not good, that’s for sure; without the target number of participants, the results of a study could actually give us incorrect information. Designing and managing a clinical trial is hard work; there are multiple areas where the study could miss targets and exceed budgets. Recruitment is the most common pitfall; getting you guys involved in their trials is the one thing that keeps researchers awake at night.

Take part in a clinical trial and reduce stress levels of a researcher immeasurably – their families will thank you for it.

4. To find out about your own health

If you’re one of those lucky people who is rarely ill, finding out stuff about your own health can be super interesting.

Maybe you’re interested to know what your blood type is; a trial that involves taking a blood sample from participants (a super common thing for trials to ask of their participants) will tell you that, and help advance research at the same time.

Other research can give you more detailed information about your own health. For example, trials focussing on genetics often ask to carry out a genetic screen on their participants; this is usually a simple process using either a blood sample or a cheek swab. You could find out if you’re at high risk of obesity, which could help you turn down that slice of cheesecake you had your eye on for after dinner.

In other cases, trials like this may require more thought before taking part; you could find out you’re more at risk of cancer or neurodegenerative diseases like Alzheimer’s, all from giving a blood sample to a research study. It’s important to note that genetic trials often offer counselling as well; finding out you are at risk of a certain disease can come as a shock – but it does allow you to implement lifestyle changes and hopefully reduce your risk over time.

5. To improve our NHS

We have all seen shocking headlines about how stretched the NHS budget is, and how likely it is to be stretched further as the UK population ages. Clinical research gives us the opportunity to make the medicines that are paid for by the NHS, and the healthcare procedures we use, more efficient. If we can learn how to make the NHS more efficient, the budget will go much further; magic!

For example, there are lots of different treatments available for diabetes – a growing problem in our society. Which one of these treatments works best though? Trials can answer that question for us. This doesn’t mean we’d stop giving out every other treatment though; each patient is different and certain drugs may work better for some people than others.

What we’d be able to do as a result of a trial like this is find out which types of people are most likely to benefit from each treatment. We could then match people to the treatment most likely to work for them, more quickly. By cutting out trial and error, patients would benefit because their disease would be brought under control sooner, and we’d reduce waste, freeing up funds for other areas of the health service – everyone’s a winner.

6. To help others

Those of us who are lucky enough to be in good health tend to take it for granted until the day we wake up sick – then we promise ourselves we’ll actively appreciate being well again. If you’re lucky enough to never wake up sick, there’s no doubt that you’ll experience someone close to you being given difficult news about their health. I can assure you that this will bring you swiftly back down to Earth.

As a healthy volunteer, you can take part in clinical trials to help others. Healthy volunteers are often the group researchers find most difficult to bring into their trials, mostly because when we’re healthy, poor health seems like a distant problem that we’ll deal with if and when it happens to us. New drugs are tested in healthy people before they are given to people with the target disease; this allows researchers to double-check that the drug is safe. Without healthy volunteers, trials would not be able to run.

So when someone close to you is unfortunate enough to receive an unwelcome diagnosis, don’t spend your time being angry at the world and frustrated because life just doesn’t seem fair; think about signing up to take part in a trial.

7. To take control of your own health

When people are given the news that they have a potentially life threatening disease they go through a mixture of emotions. In some cases they may feel helpless, they may ask ‘why me?’ and be frustrated over their perceived lack of control. Taking part in a clinical trial offers one option of regaining that control.

Being a trial participant does not guarantee that you’re going to be given a new or experimental treatment though – patients are randomly assigned to groups in a trial, so you may end up in the placebo group. A trial can still benefit you, as you will be more closely monitored than you would be under standard care.

Signing up for a clinical trial is not a decision that should be taken lightly; it’s a big decision and one that isn’t right for everyone. Others, though, can feel empowered by being a participant in a trial. 1 in 6 cancer patients takes part in a clinical trial in the UK each year, up from 1 in 26 a decade ago. When asked why they decided to take part, the majority responded that they wanted to feel in control of their own healthcare, and a trial gave them that opportunity.

8. To advance science

Science is an industry full of unanswered questions, many of which can be answered by completing a clinical trial.

An example of a clinical trial may involve randomly assigning people to 2 groups: giving one group of people a drug you think might prevent heart disease each day, and giving the other group a placebo (in this case, something that looks like the drug but has no effect). The result of the trial will tell you whether that drug prevents heart disease or not. Other trials may not use placebos at all; in this example, one group of people could be given the test drug, and the other group a drug which we already know prevents heart disease. Trials with this sort of design can prevent waste and help science and medical treatments advance – if the test drug prevented heart disease more effectively, we could start using it instead of the one already in use.

Isn’t that cool? You could help to answer a huge and important scientific question, and you don’t even have to work in a lab.

Publication Explainer: The PRioRiTy Study

Today I had a new publication come out – hoorah! Told you that all the effort I put towards my 2017 goals would pay off eventually 🙂 This is the second in my ‘Publication Explainer’ series, and there are at least another 2 that I already need to write – read the first one here. As I said in that post, these explainers are a place for me to answer 3 of the most common questions I’ve been asked by the people around me (usually my boyfriend, friends, or colleagues who haven’t been involved with the project).

This post focusses on the paper below: Identifying trial recruitment uncertainties using a James Lind Alliance Priority Setting Partnership – the PRioRiTy (Prioritising Recruitment in Randomised Trials) study. Read the full paper here.

Why prioritise research questions about recruitment to trials?

Research around recruitment strategies for randomised trials is super important – though it is the premise of my entire PhD project, so I would say that. Recruitment to trials is difficult, and many trials (estimates differ, but average around the 45–50% mark) fail to recruit enough participants to hit their targets. Targets are not just numbers plucked from thin air; they’re based on detailed calculations performed by trained statisticians, and target figures are designed to enable researchers and trialists to see real differences between the various arms of trials. If we don’t hit the target, then the results of the research are vulnerable to something called a type 2 error – a false negative, meaning that we could end up telling people that an intervention isn’t effective when it actually is.
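To make the type 2 error risk concrete, here’s a toy simulation I’ve sketched in Python (my own illustration, not from the paper): we pretend an intervention genuinely works, run lots of simulated trials that recruit fewer people than they need, and count how often the real effect is missed:

```python
import random
from math import sqrt
from statistics import NormalDist

def false_negative_rate(p_control, p_treatment, n_per_arm,
                        alpha=0.05, sims=2000, seed=1):
    """Simulate many under-recruited two-arm trials of a genuinely
    effective intervention, and return the fraction in which a
    two-proportion z-test fails to spot the real difference."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    missed = 0
    for _ in range(sims):
        # number of 'good outcomes' (e.g. recovered patients) per arm
        a = sum(rng.random() < p_control for _ in range(n_per_arm))
        b = sum(rng.random() < p_treatment for _ in range(n_per_arm))
        p_pool = (a + b) / (2 * n_per_arm)
        se = sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
        z = abs(b - a) / (n_per_arm * se) if se > 0 else 0.0
        if z < z_crit:  # non-significant result: the real effect was missed
            missed += 1
    return missed / sims

# A genuine 10-point improvement, but only 50 people per arm:
print(false_negative_rate(0.60, 0.70, n_per_arm=50))
```

With a genuine 10-point improvement but only 50 people per arm, the simulated trials miss the real effect most of the time – which is exactly why hitting the recruitment target matters.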

Clearly, recruitment is an area that requires research, but because there is so much work to be done, we are at risk of being a bit everywhere (just to be clear, ‘being a bit everywhere’ is not the technical term for this…) when it comes to focussing and making substantial progress with improving the way we do research. Going through a formal prioritisation process for the squillions of research questions that surround recruitment will enable researchers to coordinate the research they’re doing, plan more effectively, and work together to ensure that we are answering the questions that are most important to the various stakeholder groups involved.

How did the prioritisation process work?

The process of prioritisation that enabled this project to go ahead was developed with the James Lind Alliance – the JLA works with clinicians, patients and carers to ensure that all voices are heard, and that prioritisation of research questions reflects the requirements of all of these groups. The James Lind Alliance works on the premise that:

  • addressing uncertainties about the effects of a treatment should become accepted as a routine part of clinical practice
  • patients, carers and clinicians should work together to agree which, among those uncertainties, matter most and deserve priority attention.

The prioritisation process begins with getting partners involved with the PRioRiTy project – this isn’t a project that can be done by one person! The stakeholders involved with this priority setting partnership were:

  • Members of the public who had been invited to participate in a randomised trial or participated in Trial Steering Committees (TSCs). They could be an individual or representing a patient organisation;
  • Front line clinical and research staff who were or had been involved in recruitment to randomised trials (e.g. postdoctoral researchers, clinicians, nurses, midwives, allied health professionals);
  • People who had established expertise in designing, conducting, analysing and reporting randomised trials (e.g. Principal Investigators/Chief Investigators);
  • People who are familiar with the trial methodology research landscape (e.g. funders, programme managers, network coordinators).

Once relevant stakeholders were identified, an initial survey with just 5 questions (Table 1 in the original paper) was developed and distributed to the stakeholders involved.

Responses were collated, organised, coded and analysed in order to generate a full list of research questions. This was a massive part of the work; 1,880 questions came from the 790 respondents to the initial survey. The flow diagram in the paper shows the process of whittling down this huge pile of questions to a manageable – and useful – top 20.

As you can see, this was an iterative process involving lots of people, views, questions – and work! I’ll just make it clear here – I was involved in a small part of this process, and the team working on the project was large; as I said before, with projects like this it’s important to involve people from lots of different backgrounds and with various levels/areas of expertise. The team was led by Prof Declan Devane and Dr Patricia Healy, both from NUI Galway – they kept the rest of us on track!

What next?

In terms of next steps for the team involved in the PRioRiTy project, it’s really important that we work to disseminate our results; after all, if no one knows what the final list of prioritised questions is, then there was really no point in doing the project. So – with that in mind, here’s the final top 10!

To give these questions some context, I wanted to talk through a few of them, going through my thoughts on what types of research may be required to answer them, and why they’re important. I’ll stick to the top 3 for this part:

Understanding how randomised trials can become part of routine care is, unsurprisingly, the top question from this entire project. Knowing how we can use clinical care pathways to ensure that patients are given the opportunity to take part in trials is a hugely important part of normalising trial recruitment, and spreading awareness of trials more generally. There is a tonne of research to be done in this area, and in my opinion, this question will need a diverse range of research angles and methods in order to answer it in a variety of ways.

This question is interesting – what information should trialists be giving to members of the public who are being invited to take part in trials? That seems like something we should have evidence for, but in actual fact we are working from hunches, experiences, and anecdote. I think this question will rightfully fuel a lot of research projects over the coming years. We need to be looking at what information potential participants want, as well as what they need from an ethical/regulatory standpoint. At the moment I get the impression that we’re being driven by ethics committees and regulators, and we’re often putting in a lot of information that participants don’t want, need, or find useful, because we feel it’s better to give them everything rather than risk missing something out. I suspect that if we reduced the amount of information we provide, understanding of that information would increase, because participants would be able to focus on specific pieces of information more effectively. I say that because I know that if I get a huge leaflet, I’m much more likely to avoid the entire thing because it looks overwhelming, or I don’t think I have time to get through all the information in front of me.

This question is one that I’ve been asked, and have myself asked, numerous times over the course of my PhD. Public engagement and patient involvement are both areas of academic life that are getting increased focus; we know that involving patients and members of the public in our research can strengthen it and make the work we’re doing more relevant to the people we’re doing it for – but could this involvement impact on recruitment rates too? I’m not sure, but I’m really interested to see the results of a few ongoing projects that are linked to this question – the PIRRIST study led by Dr Joanna Crocker is one I’ll be keeping an eye out for. The PIRRIST protocol was presented as a poster at a conference I went to in 2015; that information is published here if you’re interested in learning more.

Something to note

The appendix of the paper contains a full version of the summary table, which provides details on the evidence that we already have available to us to help answer each of the top 10 questions. The top 3, which I’ve discussed above, have no evidence available – which really drives home the importance of a formal prioritisation process in highlighting where the gaps are in research evidence.

There is certainly a lot more work to be done on how we recruit participants into randomised trials – which is good for me as I want to stay in this field of research after my PhD, and hopefully get some of these questions answered over the course of my career!

Publication Explainer: Routinely Collected Data for Randomized Trials: Promises, Barriers, and Implications

This week I had a new publication come out – hoorah! Told you that all the effort I put towards my 2017 goals would pay off eventually. In another post later on in the year I’ll explain what my experiences have been like as a co-author on a publication, as well as what it’s like to be a first author, but today I want to use this post as a starting point for a new series on my blog. I’ll add to this ‘Publication Explainer’ series whenever I have a new publication out, and these posts will be a place for me to answer 3 of the most common questions I’ve been asked by people around me (here I mean colleagues that haven’t worked in this field, other scientists, non-scientists.. basically anyone who doesn’t work in the same research area as I do).

What is routinely collected data?

When we’re talking about health, routinely collected data (RCD) refers to data that have been collected during routine practice – basically, the stuff that your doctor adds to your medical record. This could be height, weight, blood type, blood pressure, blood test results, drug dosages, symptom frequency… the list goes on. As technology improves, RCD can also refer to things like number of steps, time spent sitting down, time spent standing, etc. – the sorts of things that a fitness tracker collects.

Why should we use routinely collected data in trials?

Routinely collected data could enable us to do trials better – whether that means more cheaply, with reduced patient burden, with less work for the trial team, more quickly, or in a more environmentally friendly way… whatever ‘better’ means. This area of research is of particular interest to me because I’m trying to solve the problem of poor recruitment to trials. Recruiting volunteers to take part in trials is difficult, and if we can design trials that are integrated into existing care pathways, so that patients don’t have additional clinic visits to go to, then problems with recruitment could be eased considerably. In theory, we could design a trial that is fully integrated into routine care – meaning that when you visit your doctor and they collect data from you, that data can go straight to the trial team without the patient needing to come into the clinic on a separate occasion, which is what usually happens in trials.
This has been done before, the most well-known trial being the Salford Lung Study. This pioneering study involved over 2,800 consenting patients, supported by 80 GP practices and 130 pharmacies in Salford and the surrounding Greater Manchester area. You can read more about it here.

Ease isn’t the only reason to use RCD in trials. There is a huge field of research into what we call ‘pragmatic trials’.

Every trial sits somewhere on a spectrum from ‘explanatory’ to ‘pragmatic’. ‘Explanatory’ describes trials that aim to evaluate the efficacy of an intervention (a drug, a device, a type of surgery, or a lifestyle intervention like an exercise or diet change) in a well-defined and controlled setting. ‘Pragmatic’ describes trials that aim to test the effectiveness of an intervention in routine practice – i.e. some people might not take their tablets as directed, they’ll likely skip an exercise every now and again, they might forget to pick prescriptions up or get their doses mixed up – these trials reflect real life. The more pragmatic a trial is, the more likely the results will translate into the real world if/when the intervention is rolled out for public use. Using routinely collected data could help to ensure that trials are more pragmatic.

Why aren’t we already using routinely collected data in trials?

The idea of using routinely collected data in trials sounds perfect, right? Patients won’t have to go to clinic visits, trials will recruit more easily, therefore they’ll be completed faster and more cheaply, trials will be more pragmatic – why aren’t we already using RCD in trials?

If only it were that simple! Just because data are collected doesn’t mean that researchers are able to access them – never mind access them in a useful format at the time they’re needed. There are lots of concerns about using RCD in trials as standard, but these issues are likely to be overcome at some point in the future (as for when, that’s the big unknown – it could be 50 years, could be longer!). This is an exciting field of research, and one that I’ll be keeping a close eye on over the next few years.

BioMed Central, as a publishing group, is open access, meaning that its publications are not hidden behind paywalls; if you’d like to read the full paper, you can find it here.

I also wanted to flag up a blog post that Lars and Kim wrote to go along with the publication; essentially it’s a more condensed, relaxed and easy-to-understand version of the paper – you can read that here.

Health Advice Overload: What Should We Believe?

This article, written by Francesca Baker, featured in the September issue of Balance magazine. I spoke to Francesca whilst she was writing this piece, and she has included a few quotes from me, so I’ve republished it here with permission. Make sure you take a look at her blog and Twitter account to keep track of future articles from Francesca – she writes across a diverse range of topics.


The sheer volume of data available when trying to decide what’s good and bad for your health is overwhelming. So how do you know what to believe?

This is a world with easy access to a huge amount of information. Just about everything you could possibly want to know is available at the touch of a button, from what to eat or how much exercise to do, to the best way to raise a child, where to invest your money or who to vote for.

You want to make informed decisions and you’ve never had more information at your fingertips. Trouble is, it can actually make life really confusing.

If you’ve ever been unclear about whether butter is actually good or bad for you, tried to ascertain if the antioxidants in wine outweigh the hangovers, or ‘hacked’ your sleep to achieve a solid eight hours only to discover that seven hours is, in fact, what you should be aiming for, you’re not alone.

A 2014 study in the Journal of Health Communication: International Perspectives examined the effects of conflicting media information about fish, coffee, red wine and supplements.

The report raised ‘concern that exposure to contradictory health information may have adverse effects on cognition and behaviours.’ The more information people were exposed to, the higher the level of confusion they reported, which led them to make the wrong decisions.

Not to mention that evidence changes all the time, as more scientific discoveries are made. It’s difficult to believe that smoking was once deemed ‘healthy’ and 1950s adverts for cigarettes featured doctors encouraging the public to smoke.

In fact, in 1980, there were only seven dietary guidelines which Americans were encouraged to follow; by 2005, that had swelled to more than 40.

It’s not about quantity of information – the abundance of evidence can be empowering – but much depends on our ability to scrutinise its quality and how useful it is.

THE RANKING OF EVIDENCE

According to scientists Mark Petticrew and Helen Roberts in a study published in the BMJ, there is a ‘hierarchy of evidence’. They outline seven different levels of study, ranking them based on effectiveness, process, salience, safety, acceptability, cost effectiveness, appropriateness and satisfaction. At the top – the most rigorous and accurate – are systematic reviews and randomised controlled trials, followed by cohort studies, observational studies and surveys, through to testimonials and case studies.

You see, it’s not only the type of evidence that matters, but where it comes from.

Dietary guidelines are drawn up by governments who also want to keep food manufacturers in business. Studies aren’t cheap to run and are often funded by parties with a vested interest in a positive outcome for their products.

The American Diabetes Association, for example, is one of many health groups which get funding from fizzy drink manufacturers – Time magazine reported last year that between them, Coca Cola and Pepsi gave money to 96 health groups in the US.

A study of 206 pieces of research that looked at the ‘Relationship between Funding Source and Conclusion among Nutrition-Related Scientific Articles’ found those sponsored by a food or drink manufacturer were four to eight times more likely to show positive health effects from consuming those products.

Often health claims or scientific breakthroughs are reported in the media without context. Heidi Gardner, PhD researcher and science communicator, believes ‘poor quality science is easily disseminated broadly and good quality science gets minimal coverage because researchers are open about the limitations of the work they’ve done.’ We want conclusive answers, and for science to provide them, but ‘that just isn’t possible with decent quality research – the best we get is ‘yes or no for now’.’

Helen West and Rosie Saunt from the Rooted Project, a scheme they co-founded when they ‘became tired of the nutri-b****cks filling our social media feeds’, stress the importance of looking at the whole body of evidence, rather than only that which supports your personal belief. They see big problems in the health and wellness industry, where qualifications are not regulated in certain fields, but recognise the public are starting to understand the importance of evidence-based nutrition and are ‘demanding credibility from the industry’.

THE SIGNAL OR THE NOISE

Rob Briner is scientific director at The Center for Evidence-Based Management. His advice is to think widely and deeply. ‘It is essential to get evidence from a range of different sources… because using information from just one source means it is more likely we will be using information that is either limited or biased, or both. The second thing we need to remember is to judge the quality of the information or evidence we obtain.’

It has never been easier to share your thoughts with the world via the internet. Technology means anyone can have a voice. While there are enormous advantages to this, it’s difficult to separate real expertise or verifiable news from opinion and ideas – to ‘hear the signal through the noise’ as Rob puts it. We’re also human: we tend to believe things because other people do, and we fall prey to confirmation bias, searching for information consistent with beliefs we already hold.

Dr Joseph Reddington is director of EqualityTime, a charity using critical thinking to solve social problems. He’s also active in London’s ‘Quantified Self’ movement which is based on daily self-tracking and says technology offers a chance to become your own expert. ‘Being able to fact check in real time empowers normal people with just enough truth to fight back,’ he says.

A yoga teacher specialising in prenatal and baby yoga, Hayley Slatter aims to help individuals find their own sense of wellbeing. Even with experience as a physio, a Masters in neuropsychology and additional qualifications in yoga and pilates, she finds the field overwhelming. The number of yoga teachers demonstrates the versatility of a yoga practice. But when you add endless articles, so-called expert bloggers and what Hayley calls ‘Instayogis’, showing the benefits of particular poses, classes and even nutrition, it’s difficult to know which practice is right for you. ‘I believe the common theme through all these yoga types is that a true practice requires a degree of self-awareness,’ she says.

Heidi Gardner agrees, ‘people tend to tune out of their own bodies in favour of trying to find evidence for what they should or shouldn’t be doing.’ She has been working with a nutritionist since ‘feeling overwhelmed’ with all the healthy living ‘evidence’ she was faced with. ‘I was relying on claims I’d seen to tell me what was healthy,’ she says. Stopping soaking up all the ‘evidence’ has made her happier and more relaxed – and probably healthier, too.

So, how do we gain that self-awareness? We might have to accept that there isn’t a single answer. We’re all human, and after we’ve read widely and deeply, asked critical questions and considered all the evidence, sometimes the only thing to do is take a deep breath and jump in to what feels right for you.

FIND YOUR BALANCE:

HOW YOU CAN APPLY EVIDENCE TO YOUR OWN LIFE

Search for the best available evidence. As well as a degree of quantity, you need quality. Who wrote it? What do they have to gain? What is their experience?

Play the ‘why’ game and approach what you read and hear with a dose of healthy scepticism. ‘Asking “why?” repeatedly and focusing on making better-informed and not perfect decisions’ is important, says Rob Briner of The Center for Evidence-Based Management.

IS IT IMPORTANT?

Use Petticrew and Roberts’ idea of salience, or how important something is. Basically, does what you’re investigating even matter? And why? Do we actually care about taking 10,000 steps a day, or whether we have 35 or 40 grams of protein? In the grand scheme of our lives, how much does it really matter?

THREE WISE THOUGHTS 

The Rooted Project has three questions to ask:

‘Is the claim based on a person’s story, or a scientific study?’ If it’s an anecdote, you can be pretty certain it’s not a fact and probably not applicable to the whole population.

‘Is the newspaper headline referring to one study or multiple?’ Single studies do not change public health advice.

‘Is it a human or animal study?’ You are not a mouse, rat or monkey. You can’t extrapolate data from animals to humans.

LISTEN TO YOUR HEART

Remember that intuition is itself a form of evidence. If your gut is telling you something, then you should listen to it. The more practised we become in doing this, the more we will learn to trust our own instincts and develop self-awareness.

Why Clinical Trials Should Be At The Forefront of Public Science Knowledge

I originally wrote this post as a guest feature on ‘An Anxious Scientist’. The piece was originally published at the beginning of August, and I’ve republished it here with permission from Rebecca, who runs An Anxious Scientist. Make sure you take a look at her blog for brilliant posts explaining complex science concepts in engaging ways, showcasing scientists in all fields, and of course some of Rebecca’s own PhD experiences too.


Public engagement with science is not a new concept, but with the rise in social media usage and pressure on scientists to prove the impact of their work, the world of science communication is advancing at a rapid rate. Many early career researchers now contribute to online blogs, Instagram and Twitter profiles with the aim of disseminating their research, breaking down stereotypes, and ultimately getting the public excited about science. The opportunities that science communication opens up for both academics and public audiences are huge. It’s difficult to see a downside; academics work to improve the way they communicate, and the public finds out more about the research that’s going on around them – or, in some cases, with them.

The diversity of fields covered by science communicators is vast; but is there room for everyone?

I’ll say up front that I think good quality science communication from any field of research is a good thing; but as a clinical trials methodologist, clinical trials in the public sphere of scientific knowledge hold a different level of importance for me. That’s not to say that other types of science are not important, just that trials are a topic I really feel the public could benefit from knowing about.

My work focusses on improving the way we do clinical trials – in particular, how we recruit participants into clinical trials in an efficient way. Efficiency here could mean lots of things; cheaper, faster, less patient burden, less administrative work, etc – I’m interested in making the process better, whatever ‘better’ means.

Each trial has statisticians who process the huge amount of data that comes from trials, but well before results start coming in, these statisticians are charged with calculating how many people need to take part in the trial for the results to be robust. This is important because if trials recruit too few participants, the results could be unreliable. Estimates currently show that ~45% of trials globally don’t recruit enough people.
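As a rough illustration of the kind of calculation those statisticians perform, here is a minimal sketch of the standard normal-approximation formula for comparing two proportions. The function name and the example response rates are hypothetical, chosen only to show the shape of the calculation; real trials use more sophisticated methods tailored to their design.

```python
import math
from statistics import NormalDist

def two_proportion_sample_size(p1, p2, alpha=0.05, power=0.8):
    """Approximate participants needed *per arm* to detect a difference
    between two proportions (standard normal-approximation formula)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)           # chance of detecting a real effect
    p_bar = (p1 + p2) / 2                          # pooled proportion
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# e.g. detecting an improvement in response rate from 50% to 60%
print(two_proportion_sample_size(0.5, 0.6))  # → 388 participants per arm
```

Note how sensitive the answer is to the expected difference: halve the gap between the two proportions and the required sample roughly quadruples, which is exactly why under-recruiting is so damaging.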

Clinical trials are the types of studies that we want our healthcare system to be based on. Trials are able to differentiate between an intervention causing an outcome, and an intervention being correlated with an outcome. In simple terms, they can answer questions like ‘does taking a paracetamol get rid of my headache, or would my headache have disappeared without it?’

Understanding the strengths and limitations of trials, and being able to unravel what features differentiate a reliable trial from an unreliable one, would empower the public.

Take the example of the Alzheimer’s drug LMTX that caused these headlines in July 2016:

With those headlines in mind, take a look at these articles that are about that exact same drug, LMTX:

In this case, newspapers with high readership figures and easy access to the public told of a drug that would halt Alzheimer’s disease – and the public could be forgiven for thinking that the problem of Alzheimer’s was now solved. Scientific media, and news outlets with smaller readerships provided a more balanced view of the trial that tested LMTX.

Surely this means newspapers should be reporting better, rather than putting the onus on the public?

News outlets like The Sun, The Daily Mail and The Times are not scientific experts; their reporting on health research could be discussed in another article entirely! What I do think is important, is that the public feel equipped to critique these sensationalised pieces in order to get to the root of the story – the facts.

All of the articles state that 891 people were enrolled in the trial; the majority were also taking treatments that have already been approved to help relieve Alzheimer’s symptoms. Only 15% (144) of the 891 people were taking the trial drug (LMTX) or a placebo alone, and it was in this group that the researchers noticed a difference. All of the articles provide that information – it’s the headline that is swaying the public’s thoughts on the results.

Given what I mentioned earlier about the importance of recruiting the correct number of participants, the results of this work are immediately put in doubt. If the trial’s statisticians calculated that 891 people were needed to find a clinical difference between patients taking the experimental drug and those taking other drugs, then why does it matter that a difference was found in a group of 144 patients? Put bluntly, it doesn’t. These trial results do not offer a definitive answer to the question of whether LMTX could prevent cognitive decline in Alzheimer’s patients.
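To put a rough number on ‘underpowered’, here is a back-of-the-envelope sketch. It assumes a hypothetical planned power of 90% at the full sample (the trial’s actual design figures aren’t given in the articles) and uses the standard normal approximation, in which the detectable signal scales with the square root of the sample size:

```python
from statistics import NormalDist

nd = NormalDist()

def power_at_smaller_n(n_planned, n_actual, alpha=0.05, planned_power=0.9):
    """Approximate power retained when only n_actual of the n_planned
    participants contribute to a comparison, for the same effect size."""
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    z_total = z_alpha + nd.inv_cdf(planned_power)      # signal at the full sample
    z_small = z_total * (n_actual / n_planned) ** 0.5  # signal shrinks with sqrt(n)
    return nd.cdf(z_small - z_alpha)

print(round(power_at_smaller_n(891, 144), 2))  # → 0.26
```

Under these assumptions, the 144-person subgroup retains only about a quarter of the planned power – in other words, even a genuinely effective drug would most likely fail to show a statistically convincing difference in a group that small.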

As we can’t control what headlines are plastered over the front page, it’s important that we empower, educate, and answer questions from the public about trials so that they can make these judgements themselves.

So, what’s the solution? Whilst the science communication world advances, I feel like we are focussing too much on the discoveries themselves, over the methods we use to discover. The addition of a level of transparency and openness about the flaws in scientific methods would go further to empower the public. It would begin to break down barriers years of science has built between scientists and the public – science may have the answers, but we need to be open and honest about the methods we use to get those answers.

If you’re a science communicator, why not challenge yourself to explain the limitations of your work rather than simply strengths?

Are Clinical Trials a Waste of Time?

I wrote this article for the 23rd issue of Lateral Magazine. The piece was originally published at the beginning of the month, and I’ve republished it here under Creative Commons licence 4.0. Hope you enjoy!


Changing how clinical trials are designed and reported could save billions of dollars.

Every year, we spend $200 billion globally on health and medical research, more than the annual GDP of New Zealand. Yet up to 85% of this money is wasted on research that asks the wrong questions, is badly designed, not published or poorly reported. In addition, a 2005 study by John Ioannidis showed that claimed research findings are more likely to be false than true — that is, they will be proven incorrect when better quality research is conducted later down the line.

So is clinical research a waste of time, and therefore money? As a researcher myself, I’m inclined, as you might expect, to say no. Let me explain why clinical trials are so expensive, and how we can make these expenses count.

Clinical trials are a necessary step for approving new medical treatments. Hal Gatewood/Unsplash (CC0 1.0)

Clinical trials are affectionately termed the ‘gold standard’ method of evaluation in a healthcare setting, and were necessary for marketing approval for everything from the paracetamol you take to ease your hangover to treatments for cancer and Alzheimer’s disease. But they also require a huge amount of resources. Trials can take years to complete and often involve thousands of people from various countries to ensure that research questions are answered satisfactorily.

At the core of high-quality medical research are randomised controlled trials. In these trials, participants are randomly allocated to one of two or more treatment groups (referred to as arms). Most people think of trials involving drugs, but interventions might also include surgical procedures, medical devices, and lifestyle interventions such as exercise or diet modification. Randomisation ensures that outside influences, such as sex, age, or educational status, are distributed evenly across the treatment groups, effectively negating the bias these influences might otherwise introduce.

Randomised trials must also be ‘controlled’; that is, one of the treatment arms acts as a control group to which the treatments are compared. In most cases, this control group will be given the standard treatment option for their condition or disease. This allows us to see if the new treatment we’re testing is better than what is already available to patients.
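A toy sketch of how such an allocation might be generated is below. This is illustrative only – real trials use dedicated, concealed randomisation systems, and the function name, arm labels, block size and seed here are all invented for the example. Block randomisation is shown because it keeps arm sizes balanced as recruitment proceeds:

```python
import random

def block_randomise(n_participants, arms=("new treatment", "control"),
                    block_size=4, seed=2024):
    """Allocate participants to arms in shuffled blocks, so that group
    sizes stay balanced throughout recruitment."""
    rng = random.Random(seed)  # fixed seed only to make the example reproducible
    allocation = []
    while len(allocation) < n_participants:
        block = list(arms) * (block_size // len(arms))  # equal arms per block
        rng.shuffle(block)                              # random order within the block
        allocation.extend(block)
    return allocation[:n_participants]

groups = block_randomise(8)
print(groups.count("new treatment"), groups.count("control"))  # → 4 4
```

Because every block contains each arm equally often, the two groups can never drift far apart in size, while the order within each block remains unpredictable.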

Clinical trials are the ‘gold standard’ of evaluating healthcare outcomes. Sanofi Pasteur/Flickr (CC BY-NC-ND 2.0)

In a recent study, researchers looked at trials funded by Australia’s National Health and Medical Research Council between 2008 and 2010. These 77 studies required a total of A$59 million in public funding. Most people would consider this an acceptable price to pay for improved survival rates, but what if most of that $59 million was wasted due to correctable problems?

The estimate that 85% of all health research is being avoidably ‘wasted’ is shocking. As an optimist, I’m drawn to the ‘avoidably’ part of that sentence; we have a lot of work to do, but it’s all work ready to be done, rather than issues we hope to solve at some point in the distant future.

The problem of research waste has been a central focus of the health services research and evidence-based medicine communities since the publication of Ioannidis’ paper “Why most published research findings are false”, and there is a clear push to prevent research being wasted.

As a PhD student in the Health Services Research Unit at the University of Aberdeen, I am working to improve the efficiency of trials. There is a bizarre contradiction in the trials world; we do trials to generate good quality evidence, but the way we carry out certain aspects of trials is not remotely evidence-based.

Here’s an example. Recruiting participants for trials is a notoriously difficult process that wastes time, effort and money, but there is limited evidence that the methods we currently use to improve recruitment are particularly efficient. For example, many trial teams approach patients via existing healthcare infrastructure, but these systems are already overstretched without the addition of research tasks, and it may be that there’s a better way to find patients without the need to involve physicians. If recruitment fails to successfully reach the trial’s target, the results of the trial as a whole can be at risk.

Many countries have introduced publicly accessible websites that allow people to search for trials currently in the process of recruiting. Patients can find trials that are relevant to their disease state, meaning the healthcare system does not need to be directly involved with recruitment. As yet we don’t have evidence to support or refute the effectiveness of these websites, so they are often used in conjunction with other recruitment strategies.

Finding suitable subjects for clinical trials is an inefficient process, but there are avenues for improvement. Queen’s University/Flickr (CC BY-NC-ND 2.0)

Other research groups are working to alleviate research waste by tackling poor reporting of experimental methods. “Most of us have probably tried to recreate a meal we enjoyed in a restaurant,” wrote epidemiologist Tammy Hoffmann in a recent article. “But would you attempt it without a recipe? And if you have to guess most of the ingredients, how confident would you be about the end result?”

It makes sense; for health research to be picked up and implemented in a clinical setting, we need to give clinicians the full recipe. Interventions used in trials might involve drugs or non-drug treatments like exercise, psychosocial or dietary advice, and giving partial details is a sure-fire way to ensure research doesn’t make its way to patients. Crucial details, such as the materials needed to carry out interventions, are lacking in up to 60% of trials of non-drug interventions, and the problem occurs in drug studies, too. These articles focus on published trial reports, and don’t discriminate between public- and industry-funded trials; full recipes are lacking across both of these research areas.

Research is an imperfect process, and with research funds getting increasingly scarce worldwide, it’s important that we make a concerted effort to reduce the intrinsic inefficiency of trials. At the very minimum, we must work to ensure trial results are published in a timely manner.

On a wider, and perhaps more optimistic scale, it’s clear that researchers need to take responsibility for disseminating the results of the projects they are involved in. It’s no longer acceptable for results to be presented only at specialist conferences that few clinicians are privy to. Trials are conducted with the explicit aim of improving human health, and it’s down to researchers to ensure results are circulated, and to the public to hold researchers accountable.

Edited by Andrew Katsis and Sara Paradowski