
Sir Iain Chalmers: for his critical contribution to EBM

Presenting the 2009 HealthWatch Award, Nick Ross said, “Iain Chalmers has saved more people’s lives than anyone else I can think of.” Iain Chalmers, editor of the James Lind Library, has for the last 30 years championed the need for health professionals and patients to have access to unbiased evidence on which to base clinical decisions. His talk on the development of fair tests of treatments in health care was illustrated with examples going as far back as 1550 BCE, all taken from the James Lind Library’s extensive and publicly searchable online archives.

Iain Chalmers has devoted the last 30 years to helping ensure that health professionals and patients have access to unbiased evidence on which to base their treatment decisions, most famously through his work as one of the co-founders of the Cochrane Collaboration. He was at the first ever HealthWatch meeting back in 1992, and since then he has been a friend and valued critic, prepared to make a pointed comment if he ever believes HealthWatch has failed to apply to itself the standards it expects of others. “It’s important to be even-handed, for us all to be judged by the same rules. If we depart from this, then we’ll be open to the accusation of double standards,” he said by way of introduction to his talk at the 2009 HealthWatch AGM. Chalmers, who received a knighthood in 2000 for services to healthcare, now applies his passion for fairness as editor of the James Lind Library [1], created to help people understand fair tests of treatments in health care.

The article below is based on the talk given by Iain Chalmers at the HealthWatch AGM 2009. It is fuller than the version that appeared in the printed newsletter, which was edited for reasons of space.

The subject of my talk is explaining fair tests of treatments in health care, and in this we have much unfinished business. I’d like to begin by introducing some of my special heroes in the field. Margaret McCartney, the Glasgow GP who writes a weekly health column for the Financial Times [2], was a worthy recipient of last year’s HealthWatch Award. I’d also include the authors of some of my favourite books. Smart Health Choices [3] is by Judy and Les Irwig: Judy Irwig is a mother, Les Irwig a professor of clinical epidemiology, and their book explains clearly and authoritatively how not to be bamboozled by what you read in the media about health. Know Your Chances [4], in which American doctors Steve Woloshin and Lisa Schwartz explain how to interpret health statistics, is special because an early draft was itself subjected to a randomised controlled trial to see whether it actually increased its readers’ knowledge of the subject. Another young British doctor, Ben Goldacre (HealthWatch’s 2006 Award winner), has shaken things up for science knowledge in this country with his Guardian column “Bad Science” [5]. He writes, “Evidence-based medicine, the ultimate applied science… has saved millions of lives, but there has never once been a single exhibit on the subject in London’s Science Museum.”

However, his fellow Guardian writer and HealthWatch Award winner, Polly Toynbee, did not make my hero list. As I pointed out in the April 2004 HealthWatch Newsletter [6], she once wrote that randomised clinical trials should be abandoned. “It may be a little less accurate scientifically,” she had written, “but if patients are allowed to choose which treatment they want and every detail of their condition, lifestyle, character and circumstances is fed into the trial data, I doubt if the results would be seriously distorted,” making clear her reluctance to agree to be a “guinea pig”. She completely failed to confront the fact that you often get very different results depending on the design of the trial.

A few journalists—among them Nick Ross—understand evidence-based medicine, and are prepared to battle against the stereotypes. On 2 April 2001, Nick was amongst fifty people who met to consider how to get the public to appreciate randomised controlled trials. There’s a problem with the name: it has so many negatives associated with it. “Randomised” suggests haphazard. “Controlled” implies controlling. “Trials” has legal connotations. It was Nick Ross who suggested, “Why not call them fair tests?”

James Lind, a pioneer of fair tests, was a naval surgeon in the 18th Century and a member of the Society of Naval Surgeons (whose members went on to found the Medical Society of London). Like many who favour quantifying outcomes, he was something of an outsider. It’s harder to ask the question, “Is the Emperor wearing clothes?” when you’re a member of the Emperor’s establishment. It’s a problem that remains with us today. No matter how fair the test itself, the interpretation of science continues to be distorted by those who have a vested interest in the results, other than the well-being of patients.

The James Lind Library was launched by the Library of the Royal College of Physicians of Edinburgh in 2003. Its online archive of records, from 1550 BCE to the present, illustrates how fair tests developed [7]. These records make clear that many of the principles of fair tests that we still use today go back hundreds, even thousands of years.

Conceptualising fair tests of treatments

In an extract from a letter written in 1364 [8], the Italian poet Francesco Petrarca wrote, “I solemnly affirm and believe, if a hundred or a thousand men of the same age, same temperament and habits, together with the same surroundings, were attacked at the same time by the same disease, that if one half followed the prescriptions of the doctors of the variety of those practising at the present day, and that the other half took no medicine but relied on Nature’s instincts, I have no doubt as to which half would escape.”

Treatments with dramatic effects

Even earlier, we have a surgical papyrus dating from around 1550 BCE which has been translated to reveal an explanation of how to reduce a dislocated mandible. It describes exactly what we do today, yet it was written more than 3,000 years ago. You don’t need carefully controlled trials to prove the worth of a treatment which is so clearly effective.

Recognising the need for controls

In the 10th Century CE, the Baghdad doctor Abu Bakr Muhammad ibn Zakariyya al-Razi (Rhazes) wrote on his experience of treating meningeal inflammation, noting the characteristic symptoms of photophobia, neck stiffness and headache. He wrote, “So when you see these symptoms, then proceed with bloodletting. For I once saved one group [of patients] by it, while I intentionally neglected [to bleed] another group. By doing that, I wished to reach a conclusion.” If this sounds rather barbaric, remember it’s the way of thinking that’s important—he realised that he needed an untreated group in order to make an inference about the effects of his treatments.

Prospective experiments

The James Lind Library records a 16th Century example of a within-patient prospective controlled trial. “A kitchen boy fell into a cauldron of almost boiling oil…” wrote the French royal surgeon Ambroise Paré in 1575. “I went to ask an apothecary for the refrigerant medicines that one was accustomed to apply to burns. A good old village woman, hearing that I was speaking of this burn, advised me to apply for the first dressing raw onions crushed with a little salt… I was agreeable to trying the experiment and, truly, the next day, the places where the onions had been had no blisters or pustules, and where they had not been, all was blistered.”

During 18th Century naval campaigns more sailors were being killed by scurvy than by the fighting. One of several recommended treatments at the time was vitriol (sulphuric acid), which was favoured by the Royal College of Physicians of London. In one of the earliest known reports of a clinical trial, the naval surgeon James Lind wrote in 1753, “…I took twelve patients in the scurvy… Their cases were as similar as I could have them. They all in general had putrid gums, the spots and lassitude, with weakness of their knees. They lay together in one place, being a proper apartment for the sick in the fore-hold; and had one diet common to all.” Lind allocated two sailors with scurvy to each of: “a quart of cider a day; twenty-five gutts of elixir vitriol three times a day; two spoonfuls of vinegar three times a day; a course of sea water… half a pint each day; two oranges and one lemon every day; the bigness of a nutmeg three times a day.” The most sudden and visible effects were seen amongst the seamen taking the fruit.

“Blinding” assessment of outcomes

The report of the homeopathic salt trials in Nuremberg in 1835 contains a detailed description of a randomised double-blind experiment in which participants were given either a homeopathic salt solution or pure distilled snow water. The details of which numbered bottles had contained which liquid were kept sealed until the end of the experiment. The experiences of the participants in the two groups were indistinguishable. One should bear in mind that homeopathic care in the late 18th and early 19th century was almost certainly safer than the bleeding, purging and use of heavy metals favoured by orthodox practitioners.

Recognising the “law of large numbers” and the “limits of oscillation”

The idea of using numerical data to justify conclusions about treatments goes back at least three centuries. Pioneering work on how to apply inferential statistics to therapeutic data in order to make critical judgments on the value of therapies was published in Paris in 1840 by Louis-Dominique-Jules Gavarret. According to his beautifully written Principes Généraux de Statistique Médicale, “Average mortality, as provided by statistics, is never the exact and strict translation of the influence of the test medication but approaches it all the more as the number of observations increases. To be able to decide in favour of one treatment method over another, it is not enough for the method to yield better results: the difference found must also exceed a certain limit, the extent of which is a function of the number of observations.” Hence the need to estimate what he calls “the limits of oscillation” (confidence intervals).
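In modern terms, Gavarret’s “limits of oscillation” correspond to what we now call a confidence interval. Below is a minimal rendering, assuming two groups of n1 and n2 patients with observed recovery proportions p1 and p2; the notation and the conventional 95% multiplier are modern choices, not Gavarret’s own:

```latex
% Approximate confidence interval ("limits of oscillation") for the
% difference between two observed proportions; z = 1.96 gives roughly
% 95% coverage (the multiplier is a modern convention, not Gavarret's).
\[
  (\hat{p}_1 - \hat{p}_2) \;\pm\;
  z \sqrt{ \frac{\hat{p}_1 (1 - \hat{p}_1)}{n_1}
         + \frac{\hat{p}_2 (1 - \hat{p}_2)}{n_2} }
\]
```

The n1 and n2 in the denominators make Gavarret’s point explicit: the limit that a difference must exceed shrinks only as the number of observations grows.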

Confidence in results can be increased by examining the results of multiple trials. A key paper in the history of meta-analysis is Karl Pearson’s 1904 report in the British Medical Journal on “certain enteric fever inoculation statistics”, which examined the correlations between typhoid incidence, typhoid mortality, and the inoculation status of soldiers serving in various parts of the British Empire.

In the early 20th Century, important advances in study design were implemented in the USA in a programme of research to assess serum treatments for pneumonia. The trials in the programme demonstrated many of the important features of fair tests: large numbers of patients, allocation to treatment or control groups using an unbiased process (alternation), an assessment of the likelihood that observed differences could be explained by chance, and meta-analysis of the results of similar studies.
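For modern readers, the logic of two of these features is easy to demonstrate. Here is a minimal sketch in Python, using invented numbers rather than anything from the historical serum trials, of alternation as an allocation rule and of a conventional test of whether an observed difference in deaths could be explained by chance:

```python
# A minimal sketch with invented numbers (not data from the pneumonia
# serum trials): alternation as an unbiased allocation rule, plus a
# conventional assessment of whether an observed difference in deaths
# could plausibly be explained by chance.
from scipy.stats import chi2_contingency

# Alternation: every other admission receives serum; the rest are controls.
admissions = list(range(1, 201))   # 200 consecutive admissions
serum_group = admissions[0::2]     # 1st, 3rd, 5th, ... admissions
control_group = admissions[1::2]   # 2nd, 4th, 6th, ... admissions

# Hypothetical outcomes: 15/100 deaths on serum vs 30/100 among controls.
table = [[15, 85],   # serum:   [deaths, survivors]
         [30, 70]]   # control: [deaths, survivors]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
```

Alternation is not concealed randomisation, but it is an unbiased process so long as clinicians cannot foresee and subvert the sequence.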

Recognising reporting bias

The English philosopher and statesman Francis Bacon, in his 1620 Novum Organum (“New Instrument for the Sciences”), commented, “It is a proper and perpetual error in Human Understanding, to be rather moved and stirred up by affirmatives than by negatives…” This is still as true today, and it can kill. Dr Cowley and his colleagues wrote in 1993 how, in an unpublished study done 13 years before, nine patients had died among the 49 assigned to an anti-arrhythmic drug (lorcainide) compared with only one patient among a similar number given placebos. “We thought that the increased death rate that occurred in the drug group was an effect of chance… The development of the drug was abandoned for commercial reasons, and this study was therefore never published; it is now a good example of ‘publication bias’. The results described here… might have provided an early warning of trouble ahead.” [9] In his 1995 book Deadly Medicine [10], the American author Thomas J Moore estimated that at the peak of their use in the late 1980s, these widely-used anti-arrhythmic drugs killed as many Americans every year as were killed during the whole of the Vietnam War.
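The force of the lorcainide example is easy to check against the figures quoted above. A rough sketch in Python; note that the article gives the size of the placebo group only as “a similar number”, so the 49 used below is an assumption for illustration, not a figure from the source:

```python
# Rough re-analysis of the lorcainide figures quoted in the text:
# 9 deaths among 49 patients given the drug versus 1 death among the
# controls. The control group size is stated only as "a similar
# number"; n = 49 below is an illustrative assumption.
from scipy.stats import fisher_exact

table = [[9, 40],   # lorcainide: [deaths, survivors]
         [1, 48]]   # placebo (assumed n = 49): [deaths, survivors]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, two-sided p = {p_value:.3f}")
```

On these figures the excess of deaths in the drug group is hard to dismiss as chance, which is exactly why the non-publication of such results can cost lives.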

Recognising the need for a cumulative science

In 1884 Lord Rayleigh, professor of physics at Cambridge and President of the British Association for the Advancement of Science, said, “If, as is sometimes supposed, science consisted in nothing but the laborious accumulation of facts, it would soon come to a standstill, crushed, as it were, under its own weight… The work which deserves, but I am afraid does not always receive, the most credit is that in which discovery and explanation go hand in hand, in which not only are new facts presented, but their relation to old ones is pointed out.”

In 1965 the English epidemiologist and statistician Austin Bradford Hill framed the four questions to which readers want answers when reading reports of research: Why did you start? What did you do? What answer did you get? And, what does it mean anyway?

An example that lives up to Bradford Hill’s expectations is the CRASH research into the effects of systemic corticosteroids in acute traumatic brain injury. The research was started because practice varied and a systematic review of existing studies (some of which had never been published) revealed important uncertainty about whether systemic steroids did more good than harm. To address this important uncertainty a large, publicly funded, multi-centre randomised trial—called the CRASH trial—was organised. The results, which were published in the Lancet in 2004 [11], revealed that this treatment had been killing people since it was first used nearly 40 years previously.

The report of the CRASH trial is exemplary because it referred to current uncertainty about the effects of a treatment, manifested in a systematic review of all the existing evidence, and in variations in clinical practice; it noted that the trial was registered and the protocol published prospectively; it set the new results in the context of an updated systematic review of all the existing evidence; and it provided readers with all the evidence needed for action to prevent thousands of iatrogenic deaths.

In summary, science is cumulative, so researchers must cumulate scientifically, using methods and materials to reduce biases and the play of chance. Because researchers still do not do this routinely, people continue to suffer and die unnecessarily.

Iain Chalmers

Editor, James Lind Library

References

  1. The James Lind Library. http://www.jameslindlibrary.org/
  2. Margaret McCartney writing for the Financial Times: http://blogs.ft.com/healthblog/
  3. Smart Health Choices by Les and Judy Irwig was published November 2007 by Hammersmith Press Ltd, paperback £12.99.
  4. Know Your Chances by Steve Woloshin and Lisa Schwartz was published November 2008 by University of California Press, paperback £11.95.
  5. Bad Science by Ben Goldacre was published in paperback edition April 2009 by HarperPerennial at £8.99.
  6. Chalmers I, HealthWatch Newsletter, issue 53, April 2004.
  7. The James Lind Library’s archives can be browsed on http://www.jameslindlibrary.org/trial_records/published.html
  8. This and the following texts can be accessed on the James Lind Library website by browsing the records, listed in chronological order.
  9. Cowley AJ, Skene A, Stainer K, Hampton JR (1993). The effect of lorcainide on arrhythmias and survival in patients with acute myocardial infarction. International Journal of Cardiology 40:161-166.
  10. Moore TJ (1995). Deadly Medicine. New York: Simon and Schuster
  11. The Lancet, 9 October 2004; 364(9442): 1321-1328.
