

HealthWatch promotes the use of clinical trials to provide evidence that a treatment works, and to identify the risks as well as the benefits of the treatment.

Some people say that if a patient feels better after taking a treatment, nothing else matters. But that does not mean the treatment is effective. The patient might have improved without any treatment, or there might have been more improvement with a simpler, cheaper or safer treatment. Another patient might have got worse with the treatment.

One way of assessing whether treatments work is retrospective - in other words, comparing the outcomes for large numbers of people who have had different treatments or no treatment at all.

But more usually the only way to find out if a treatment really works is prospective - to try it on lots of people and compare their outcomes with those of lots of other people who are not given the treatment. Proper clinical trials need to be done systematically, so as to minimise the effects of chance and bias.

Others say that trials of truly holistic treatment are impossible: there can never be a control patient with whom the result can be compared because each patient is different, and the treatment is tailored to the individual. But the best clinical trials are big enough to minimise individual differences between patients. In any case every health worker should practise holistic medicine: treatments should always be chosen to suit the whole patient in their particular circumstances and environment.

Some people argue that clinical trials are just too expensive, especially for alternative medicine. But the onus of providing evidence of efficacy - and of safety - is on those who promote a treatment. In any case complementary medicine is a multi-billion-dollar global industry. It does sometimes do trials, but rarely ones which have scientific integrity. It is no coincidence that while almost all proper pharmacological research results in compounds being abandoned because the results are negative, almost all research done by complementary practitioners comes up with the results they want.

We believe that due to a lack of proper testing, patients are offered treatments that are less effective, less safe and more expensive than they need to be, both in conventional and alternative medicine. Treatment benefits and harms need to be established with valid clinical trials.

Clinical trials should be designed after a systematic review of the evidence base, to avoid research waste and ensure that the research question has not already been answered by existing evidence.

Valid clinical trials are fair trials. There are many things to think about when designing or reading the results of a clinical trial. If these are not dealt with properly, the trial will not be a fair comparison, and will provide misleading information. Here are 10 points that HealthWatch is particularly interested in:

Ten design issues


Ethics Committee Approval

It is illegal to conduct any research involving human subjects without the prior approval of a Research Ethics Committee. It is the duty of these committees to check that the research is properly designed, and that previous research on the topic to be studied has been adequately considered.

Any conflict of interest (e.g., the research is sponsored by an organisation with a commercial interest in the treatment being tested, or the researchers have a financial interest such as company shares) must be declared. Sources of funding must be made clear.

Any possible harm to trial participants must be minimised.

Patients must not be pressured into taking part. They must never be told they will get no treatment, or substandard treatment, unless they take part in a clinical trial. They must not be offered the trial treatment for free but told they would have to pay for standard treatment. They must be told about all the risks involved in taking the trial treatment and properly informed of other treatments that they could have.


Informed Consent

Volunteers taking part in clinical trials must be given a clear description of what the trial involves and what the alternative treatments might be, so they are able to give fully informed consent that they are willing to participate. They need to be told that they may be allocated to different forms of treatment, sham treatment, or no treatment. They must be aware that they are free to opt out of the trial at any time, without giving any reason and without suffering any detriment. This consent to participate must be recorded.

Inclusion / Exclusion Criteria

Since clinical trials need to make fair comparisons, not everyone will be eligible to take part. Some may be too ill or have other conditions which might confound the results. Others with a mild version of the condition might be excluded because the risk of side effects might outweigh the benefits. Some treatments are sex-specific, age-specific or ethnicity-specific. Women under 45 might be included in a trial of a contraceptive, but not women who have hormone imbalances which might be made worse by the new treatment, and not girls under 16 as their hormonal systems have not yet matured.

All clinical trials should clearly state the inclusion and exclusion criteria – and of course the results of the clinical trial can only properly apply to the types of patient included.

Control Group

Taking part in a clinical trial is different from ordinary everyday treatment in some respects. Participants tend to get extra attention from trial investigators, extra forms to fill in, special tablets or other treatments to take and more visits for the trial assessments. These special arrangements may themselves influence trial participants in a way which has nothing to do with the actual treatment in the trial. We need to allow for these effects. This is why there needs to be a ‘control group’ of participants who go through all the same procedures but do not take the trial treatment.

Most medical problems go through good phases and bad phases as symptoms vary. Patients tend to join trials when their symptoms are at their worst, so because of these natural ups and downs some patients will improve whether they take part in the trial or not: an extreme measurement tends to be followed by a more average one. This is called regression to the mean.

Without a control group it would be impossible to say whether the result was just from taking part in the trial, was due to regression to the mean, or was really due to the treatment received in the trial.
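Regression to the mean can be demonstrated with a small simulation. The sketch below uses purely illustrative assumptions (every patient shares the same stable baseline score of 50, with random day-to-day fluctuation): patients "enrolled" only when their score is unusually bad look much better at follow-up even though nothing was done to them.

```python
# Illustrative simulation of regression to the mean (assumed numbers,
# not from the article). Each patient's symptom score is a fixed
# baseline of 50 plus random day-to-day noise.
import random
import statistics

random.seed(42)

def symptom_score():
    # One measurement: shared baseline 50 plus day-to-day variation.
    return 50 + random.gauss(0, 10)

# "Enrol" only people whose score today exceeds 60 - i.e. caught in a
# bad phase, as patients joining a trial often are.
enrolled_first = [s for s in (symptom_score() for _ in range(10000)) if s > 60]

# Re-measure the same people later: the noise is drawn afresh.
second = [symptom_score() for _ in enrolled_first]

print(f"mean score at enrolment: {statistics.mean(enrolled_first):.1f}")
print(f"mean score at follow-up: {statistics.mean(second):.1f}")
# The follow-up mean falls back towards 50 with no treatment at all -
# which is why a control group is needed to interpret any improvement.
```

Without a control group going through the same selection and re-measurement, this apparent "improvement" could easily be credited to whatever treatment was given in between.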


Randomisation

The only way to make sure the control group and the treatment group are made up of the same mixture of participants is to randomly assign each participant to one group or the other. This could be done by the toss of a coin or by a computer-generated random list. Without randomisation there is a serious risk of conscious or unconscious bias in the selection, and of contamination by other factors such as chance.

It is essential that participants are not randomly assigned until after they have agreed to participate; otherwise knowledge of the allocation might influence who participates and who does not.
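A computer-generated allocation list of the kind mentioned above can be sketched in a few lines. This example uses block randomisation - a common refinement, not described in the article, that keeps the two groups the same size - and the function name and seed are illustrative assumptions:

```python
# Sketch of computer-generated random allocation using block
# randomisation (a common technique for keeping arms balanced).
# Function name, seed and IDs are hypothetical.
import random

def block_allocate(participant_ids, block_size=4, seed=2021):
    """Assign each consented participant to 'treatment' or 'control'.

    Within every block of `block_size` consecutive participants,
    exactly half go to each arm, so group sizes stay balanced.
    """
    rng = random.Random(seed)  # fixed seed so the list is auditable
    arms = []
    while len(arms) < len(participant_ids):
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)  # random order within the block
        arms.extend(block)
    return dict(zip(participant_ids, arms))

allocation = block_allocate([f"P{i:03d}" for i in range(1, 9)])
print(allocation)
```

In practice the allocation list is prepared before the trial starts and concealed from recruiting staff, so that - as the article stresses - no one can let the upcoming assignment influence who is invited to take part.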

Investigator Blinding

Investigators who design and take part in a clinical trial do so because they hope that the trial treatment will be better for patients than the alternative treatment. Thus they are inevitably biased, more likely to notice evidence in favour of their ideas than evidence against. Many studies have shown that this can change the way investigators assess the response to a treatment. What’s more, the enthusiasm or hesitancy of investigators can be noticed by patients, and it has been shown that this can influence the patient’s own assessment of their treatment response. The best way to avoid this is to design the trial so that researchers do not know which patients receive which treatments – this is known as being ‘blind’ to treatment allocation. Where possible, those who conduct the trial and those who assess the outcomes are all unaware of who had which treatment.

Participant Blinding

Volunteers who take part in clinical trials do so because they hope the trial treatment will turn out to be better than the alternative treatment. Many studies have shown that this can change the way participants assess treatment response, and so introduce bias. The best way to avoid this is to design the trial so that the participants are also ‘blind’ to which treatment they are receiving. A trial in which the investigators and the participants are ‘blind’ to treatment allocation is called a ‘double-blind’ trial.

For blinding to be effective, it must be difficult to tell the difference between the treatments during the trial. If the trial is comparing one treatment - say a tablet - with no treatment, then the no-treatment participants can be given an identical looking (and tasting and feeling) tablet which does not contain any of the test treatment. This is called a placebo, and the trial is called a placebo-controlled trial.

Sometimes two different treatments are compared, and the two cannot be made to look and feel the same. For example, an injection in the shoulder might be compared with ultrasound treatment of the shoulder. Here two placebos might be used – an injection of water and an ultrasound machine which has been ‘detuned’ so that it does not deliver the treatment properly but gives the impression that it has. Each participant receives both interventions – one a true treatment and the other a placebo. Even sham surgery is undertaken, sometimes leaving small scars. This emphasises why participants need to be properly informed about what they will experience. (In fact surgical trials have proved particularly productive, showing some procedures, such as routine arthroscopies, are unwarranted.)

Double-blind placebo-controlled trials have been shown to have the least likelihood of providing biased results.

Sample Size

The sample size is the number of people included in the trial. Because different participants may respond slightly differently to the trial treatment (or the control treatment) there will be a natural spread of results in each group and comparing them will require some statistical tests. There must be enough patients in the trial to make these tests valid.

It does not take many participants to show a result if the difference between two treatment groups turns out to be very large. But if the difference is small, it will take many more patients for robust differences to show up against the natural spread of treatment responses. And if the difference is very small indeed, it will take many, many more patients for it to show.

If too few patients are included to show the difference, the trial will be inconclusive - but it will look as if it has shown no difference. It is possible to work out the sample size needed to be confident of detecting a particular benefit of treatment. All trials should say in advance the size of benefit they are seeking, and so be able to include a valid statistical justification as to why the sample size has been chosen.
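The relationship between effect size and sample size can be made concrete with the standard normal-approximation formula for comparing two group means. All the numbers below are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope sample-size calculation for comparing two means,
# using the standard normal approximation. Illustrative numbers only.
import math
from statistics import NormalDist

def sample_size_per_group(delta, sd, alpha=0.05, power=0.80):
    """Participants needed per group to detect a mean difference `delta`,
    given outcome standard deviation `sd`, two-sided significance level
    `alpha` and the stated statistical power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    n = 2 * ((z_alpha + z_beta) * sd / delta) ** 2
    return math.ceil(n)

# The smaller the difference sought, the larger the trial must be:
print(sample_size_per_group(delta=10, sd=20))  # large effect: modest trial
print(sample_size_per_group(delta=2, sd=20))   # small effect: far larger trial
```

Halving the difference to be detected roughly quadruples the required sample size, which is why trials chasing small benefits must recruit so many participants - and why this calculation should be stated, with its assumptions, before the trial begins.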


Endpoints

Investigators know if the treatment being tested has worked by measuring the ‘endpoint’ or outcome of the clinical trial. A trial of a treatment to lower blood pressure might have blood pressure at the end of 3 months as an endpoint. However, another trial of the same treatment might have deaths from heart attacks over the next 5 years as the endpoint. A third trial might have death from all causes over the next 5 years as the endpoint.

In another type of trial, the amount of pain in the shoulder on average over the treatment period might be the endpoint. But the investigators might also measure the amount of movement in the shoulder at the end of the trial, and how satisfied the patient is, and whether they can lift heavy weights, and whether they can do their housework more easily.

It is essential to state in advance which endpoints are being measured and why. This is to prevent cherry picking from the natural spread of results to show an outcome favoured by the researchers. There have been notorious cases where pharmaceutical companies have flattered results by publishing premature results, thereby hiding poor outcomes which emerged later.

Clinical trials should therefore be pre-registered, stating in advance which endpoint(s) they will measure. The endpoints should also be sufficiently reliable and accurate to measure the effect the investigators hope to achieve.


Adherence

Not every patient complies with the treatment protocol as prescribed. Studies have shown that some people stop taking their treatment (sometimes without telling their health worker), while others miss some treatments for all sorts of reasons. If this happens more in one treatment group – for example in the real treatment group (perhaps because it causes side effects which the patient notices) but not in the placebo group – then the outcome of the trial might be biased.

Some way of measuring adherence (and thus monitoring the overall dose of treatment received) should be included in the trial design.


Publication

One remaining challenge is making sure that all the results from all clinical trials are published and shared among the health care community. Investigators who find that their new treatment does not work out as they had hoped may be tempted to keep the results secret and not have them published, resulting in publication bias. This is particularly so if there are commercial interests involved. HealthWatch has strongly supported the current move to insist that all clinical trial results are published. See the AllTrials campaign.

Designing clinical trials that will provide unbiased results requires very careful consideration, and carrying them out requires constant vigilance to make sure best practice is always observed. Nevertheless, such clinical trials form the backbone of evidence-based healthcare. Many thousands have been undertaken and their results published, leading to huge improvements in outcomes for patients. The so-called ‘miracle’ of modern medicine is not a miracle at all but the outcome of fair and thorough research, rather than reliance on received ideas and anecdote as in the past.

Claiming a health care benefit without valid clinical trial evidence is misleading, dishonest and unfair to patients. It is also dangerous. For example, large outcome studies have found that patients who use untested remedies for cancer are twice as likely to die within seven years as those who only follow evidence-based treatments.

Prof John Kirwan

October 2019