Session Nine: Randomisation

This episode is a live recording of the ninth session of #UnblindingResearch held in DREEAM 21st November 2018. The group work has been removed for the sake of brevity.

This session of #UnblindingResearch looks at randomisation and the different techniques for overcoming selection bias.  Here are the slides for this session (p cubed of course); you can move between slides by clicking on the side of the picture or using the arrow keys.

Here is the #TakeVisually for this session:

Which trials use randomisation and why?

Randomised controlled trials (RCTs) are used to test a new treatment against the current standard, or to compare two or more existing treatments to see which works best. They consist of at least two groups: one that receives the new treatment and a control group that receives the current treatment or a placebo. Randomisation is used to overcome selection bias; essentially it means that a participant’s allocation is not prejudiced. Biased allocation can affect the outcomes of a trial. Say you only recruited fit, younger participants to receive your new treatment while older, unwell patients received placebo: it is then much more likely your trial will find that your drug works. A trial with compromised randomisation is actually worse than an explicitly unrandomised study, as at least the latter has to be open about its lack of randomisation and the potential biases that brings. Randomisation must make sure that the groups of participants in the trial are as similar as possible apart from their treatment allocation. This means that whatever differences in outcomes are seen are due to the treatment received.

Simple randomisation

Simple randomisation is allocation based on a single sequence of random assignments, such as tossing a coin. Other random events such as picking a card or rolling a die can be used, and random number generators are another form of simple randomisation. Simple randomisation works well in large groups of subjects and is easy to use. However, in smaller groups it is more likely to produce unequal group sizes and so can be problematic.
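As a rough illustration (not from the session itself), simple randomisation with a random number generator might look something like this in Python; the group labels and participant IDs are made up:

import random

def simple_randomise(participant_ids, seed=None):
    # Allocate each participant to "intervention" or "control" by a virtual coin toss.
    rng = random.Random(seed)
    return {pid: rng.choice(["intervention", "control"]) for pid in participant_ids}

# With only five participants the split can easily end up 4:1 rather than roughly even.
print(simple_randomise(["P01", "P02", "P03", "P04", "P05"]))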

Stratified randomisation

Stratified randomisation is used if you’ve identified baseline characteristics (covariates) which might affect your trial outcome. For example, you may be studying a new intervention to shorten post-operative rehabilitation. The age of your participants is going to influence rehabilitation anyway. You might therefore perform stratified randomisation: first sorting the patients into age blocks and then performing simple randomisation within each block into either intervention or placebo. This becomes more difficult for larger sample sizes and ideally should be performed right at the beginning of the trial, with all participants signed up and their characteristics known. In practice, however, subjects are usually recruited and randomised one at a time, which makes stratified randomisation very difficult to carry out.
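A minimal sketch of the idea, assuming a single made-up age cut-off as the stratifying covariate (the cut-off, labels and IDs are illustrative, not from the session):

import random
from collections import defaultdict

def stratified_randomise(participants, seed=None):
    # participants: list of (id, age) pairs.
    # Sort into age strata first, then simple-randomise within each stratum.
    rng = random.Random(seed)
    strata = defaultdict(list)
    for pid, age in participants:
        band = "under 65" if age < 65 else "65 and over"  # illustrative age blocks
        strata[band].append(pid)
    allocation = {}
    for band, ids in strata.items():
        for pid in ids:
            allocation[pid] = (band, rng.choice(["intervention", "placebo"]))
    return allocation

print(stratified_randomise([("P01", 42), ("P02", 71), ("P03", 67), ("P04", 30)]))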


Cluster randomisation

Cluster randomisation involves randomising groups (clusters) of participants to receive either control or intervention. This technique isn’t used for drug interventions but instead for interventions delivered to a large group, such as an education programme given to the intervention clusters but not the control clusters. You can imagine it wouldn’t be feasible to deliver an educational programme to a hundred people individually, but it would be to one group at a time. Each cluster should be representative of the overall population. As clusters get bigger the power and precision of the study go down. The intracluster correlation coefficient (ICC) measures the degree to which observations from the participants in a cluster are correlated. It ranges from 0 to 1. The higher the ICC, the closer the values from a cluster are to each other; the lower the ICC, the more difference there is between values from the same cluster.
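For reference, the ICC is usually defined in terms of variance components, where σ²_between is the variance between clusters and σ²_within the variance within them:

ICC = σ²_between / (σ²_between + σ²_within)

So if most of the variation sits between clusters rather than within them the ICC heads towards 1, and if clusters are internally very varied it heads towards 0.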

1:1, 1:2, 1:3 randomisation

Classic randomisation is 1:1 (i.e. one participant to one group, one to the other) but sometimes randomisation is unequal and can be 1:2 or 1:3. There are a number of reasons for this. One is cost: if one arm of the trial is cheaper than the other it can make sense to recruit more to the cheaper arm, though this is rare. Much more common is when you are trying to assess the safety or efficacy of different dosing regimes. Some trials may involve a new technique with a learning curve for the practitioner (say new equipment for a doctor to use); if you recruit more to the intervention arm you can overcome the effect of this learning curve on your trial. In some trials you may anticipate a high drop-out rate and so aim to mitigate this with unequal randomisation; this doesn’t affect intention to treat (ITT) analysis. You may also be worried about recruitment and believe that participants are more likely to sign up if they have a three times greater chance of being in the intervention arm than placebo, and so want a 1:3 randomisation. However, if you’re that concerned about how the new treatment’s benefit compares with your control, you should really look at changing your trial; this is the principle of equipoise. Unequal randomisation affects sample size: for the same power as a 1:1 trial, a 1:2 trial needs 12% more participants and a 1:3 trial needs 33% more.
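A common way to see where those figures come from is the inflation factor for a k:1 allocation, which (for the same power) multiplies the total 1:1 sample size by roughly (1 + k)² / 4k. For 1:2 that is 9/8 = 1.125, about 12% more participants; for 1:3 it is 16/12 ≈ 1.33, about 33% more.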

Block randomisation

Block randomisation, on the other hand, aims to ensure equal numbers of participants in each group. Say you have two trial arms, A and B. If you used blocks of 4 to recruit, it might look like this:

Block 1: ABAB

Block 2: BAAB

Block 3: ABBA and so on


Notice that after every block of four, two participants have gone to A and two to B. This keeps recruitment to the two arms equal.
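As a rough sketch (not part of the session), permuted blocks like the ones above could be generated in Python; the block size of 4 and the A/B labels mirror the example:

import random

def permuted_blocks(n_blocks, block_size=4, seed=None):
    # Each block contains equal numbers of A and B in a random order.
    rng = random.Random(seed)
    blocks = []
    for _ in range(n_blocks):
        block = ["A", "B"] * (block_size // 2)
        rng.shuffle(block)
        blocks.append("".join(block))
    return blocks

print(permuted_blocks(3))  # e.g. ['ABAB', 'BAAB', 'ABBA'] - always two As and two Bs per block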

Next box, sealed envelope, telephone/web randomisation

With sealed envelope randomisation each research team is given a selection of envelopes containing the allocations. After recruitment an envelope is opened and that allocation offered to the patient. This is open to compromise, however: the envelope could be tampered with, or even become transparent if held up against the light!

Another option is distance randomisation, either over the telephone or via a website. This uses a third-party service, of which there are many, which logs the patient details and then allocates the participant.

The research team may use 'next box on the shelf' recruitment. You’re provided with a selection of boxes and you literally pick the next box on the shelf each time you recruit. Each box has a separate code which you log, but otherwise you won’t know whether you are giving the placebo or the intervention.

Our next session is on 19th December and is on Blinding.


Session Four: Types of trial

This episode is a live recording of the fourth session of #UnblindingResearch held in DREEAM 16th May 2018.  The group work has been removed for the sake of brevity.  

This session of #UnblindingResearch looks at the different types of research trial and how our outcome decides the type of trial used.  Here are the slides for this session (p cubed of course); you can move between slides by clicking on the side of the picture or using the arrow keys.

The session began by looking back on our previous sessions: first the introduction to the research method, then how we formulate a research question using PICO (Research, THE search, WE search) and get funding, and then the last session covering the 'tale of two cities' of good clinical practice and ethics (the only way is ethics).

This session looked to prove that research doesn't have to be a trial; it can be sweet!  There was lots of group work based on scenarios with research nurses and sweets* to explore the different types of trial:

Research nurses like sweets. You want to see the impact of eating sweets on the dental health of new research nurses.  (LONGITUDINAL)

Longitudinal trials are an observational method with data collected over time.  They can be retrospective or prospective.  This obviously takes time, and you can imagine nurses leaving the department or taking time off for illness or pregnancy and so being lost to follow-up.  This is a major problem for longitudinal studies.

You want to investigate the dental health of nurses across different departments within the trust and the factors behind it.  (COHORT)

Cohort studies are a particular type of longitudinal study.  A cohort is a group of people who share a particular characteristic.  These studies observe large groups of individuals, recording their exposure to certain risk factors to find clues as to possible causes of disease.  They can be retrospective or prospective.

You believe that changing research nurses’ snacks to fruit rather than their usual sweets will improve their dental health. (INTERVENTIONAL)

Observational studies have no control over variables; they simply observe.  Interventional studies change one variable and compare groups or use a control.

You want to know about the snack choices of research nurses in a department and what influences them. (CASE STUDY)

Case studies are very deep in analysis but narrow in breadth.  They involve a very close and detailed analysis of a particular concept such as the decision making of a small number of individuals.  It is not the same as a case report.

Professor Haribo has published a new diagnostic test to tell nurses how likely they are to become obese from eating sweets.  It is believed it could predict whether a nurse should eat sweets by scoring whether or not they will become obese.  Traditionally we have used diet plans to predict obesity from eating sweets.  You want to investigate the new test. (DIAGNOSTIC ACCURACY TEST)

Diagnostic accuracy studies are all about how well a test correctly identifies or rules out a particular disease and how this can inform subsequent decisions.  A test needs to correctly identify a disease in those who have it (true positives), which is its sensitivity, and correctly rule it out in those who don't have it (true negatives), which is its specificity.  If we are evaluating a new test it is known as the index test and it is compared against the reference standard.  The D-dimer in PE is a classic example of a test with high sensitivity but low specificity, and of why we need to know this during clinical decision making.  Here is a good article from the BMJ on diagnostic accuracy studies.  We will look more at sensitivity and specificity in future sessions.
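For reference, the two measures come straight from the familiar 2x2 table: sensitivity = true positives / (true positives + false negatives) and specificity = true negatives / (true negatives + false positives).  As an illustrative (made-up) example, a test that picks up 95 of 100 patients who truly have the disease has a sensitivity of 95%; if it also wrongly flags 40 of 100 patients who don't have it, its specificity is only 60%.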

You have a theory that Tesco own brand wine gums will improve research nurse productivity compared to Maynard wine gums.  You know that research nurses like Maynard’s wine gums** and so want to design a study to get over this bias.  (RANDOMISED CONTROL TRIAL)

RCTs are the gold standard of research trials.  Participants are randomly allocated to receive either the new treatment or the standard treatment/placebo in order to overcome inherent biases.  We discussed ways of randomising and blinding, and the limitations of these.

For instance, it would be easy to stay blinded as to which of these two syringes contains the real medicine and which contains the placebo, as they look the same and can be given without the doctor or nurse knowing which is which:

[Image: two identical-looking syringes]

However, as with our sweets, if there is a difference in appearance, smell or taste then true blinding is much more difficult and we may need unblinded research staff.

*These scenarios were written by Lucy Ryan, the DREEAM Research team manager, who openly admits to loving sweets

** We at DREEAM have nothing against non-Maynard wine gums; we just prefer those from Maynard.  We have no financial involvement in Maynard but would be willing to listen to any offer of free sweets

Session Two: Formulating research questions and designing a project

This episode is a live recording of the second session of #UnblindingResearch held in DREEAM 21st March 2018.  The group work has been removed for the sake of brevity.  Here are the slides for this session (p cubed of course); you can move between slides by clicking on the side of the picture or using the arrow keys.

Research. THE search.  WE search.

This talk focuses on the 'PICO' model:

Population and Problem

Intervention

Control/Comparison

Outcome

This model is useful for your literature search (P+I), and all together (P+I+C+O) it makes up your research question.  It can also be used to interpret a paper you're reading.  Also mentioned are sources of funding and support, including:

Research Design Service

National Institute for Health Research

Outcomes and methodology are touched on; these will be further explored in later sessions.

Remember the next session:

GCP and Ethics 18th April. 

Session One: Introduction to audit, quality improvement and research

This episode is a live recording of the first session of #UnblindingResearch held in DREEAM 21st February 2018.  The group work has been removed for the sake of brevity.  Here are the slides for this session (p cubed of course); you can move between slides by clicking on the side of the picture or using the arrow keys.

The group work involved sorting a few terms under the headings of Audit, Quality Improvement and Research.

The clinical research approach is briefly covered, discussing literature searches, how the primary outcome might affect the methodology and what secondary outcomes are.  Audit is discussed with emphasis on its cyclical approach.  Finally Quality Improvement Projects are covered: how they are linked to audit, the PDSA format (Plan, Do, Study, Act) and how it can be embedded across healthcare.

Here is the link mentioned to more information on QIPs from the NHS.

Here is the BMJ article mentioned covering how to set up an audit.  

Don't forget the next session: 'Formulating research questions and designing a project' is in DREEAM on 21st March 2018 with the podcast being released shortly after.