Posted on July 7, 2017
This is the sixth in a series of 34 blogs based on a list of ‘Key Concepts’. Each blog will explain one Key Concept that we need to understand to be able to assess treatment claims.
A healthy amount of scepticism is important whenever anyone makes a claim about a treatment. This is true even when the person making the claim is considered to be an expert.
We need to distinguish between individuals who are experts because they take account of the best available research evidence and individuals who claim – or are considered – to be experts but who do not take account of the evidence. Crucially, ‘expert’ opinion can only be relied upon if it’s based on the best available evidence.
Healthcare should not be about mindless ‘master-apprentice’ learning – where the opinions of experts are taken as gospel truths. Instead, proactive, critical thinking is necessary. You should question the claims made by experts, evaluating the reasoning and evidence behind their claims.
Now let’s look at some examples of why we can only trust expert opinions when those opinions have been informed by the best available evidence…
A classic story demonstrating how experts can be wrong – and how only rigorous research can provide the basis for claims about treatments – comes from 1747. At the time, there was uncertainty about how best to treat scurvy. ‘Expert’ opinion varied: some authorities recommended using vinegar, others recommended using sulphuric acid as a treatment. But it was James Lind, a surgeon, who resolved this uncertainty using a fair test. By carefully comparing six treatments, he demonstrated that citrus fruits (lemons and oranges) were more effective than the alternative treatments supported by the authorities of the time.
Arthroscopic partial meniscectomy (a particular type of knee operation) has been one of the most common orthopaedic treatments in use. The theory behind this treatment (i.e. the explanation of how the intervention should work) and its perceived efficacy have been widely accepted and adopted by practitioners. Recently, however, teams of surgeons have come to doubt its efficacy enough to compare the treatment with sham surgery.
Following a comprehensive analysis of all of the available evidence, robust results are in, and the doubters were right. The relevant average effects are small at best – a finding with extremely important implications for practitioners. Adopting a more conservative approach to treatment (by not rushing in to carry out surgery) could not only save money, but could also spare patients surgery which may not be necessary.
The fundamental idea of evidence-based practice is that one can do the best job in healthcare by seeking and integrating the best clinical evidence, clinical expertise, and patient/consumer preferences. Expert opinions alone are not a sufficiently reliable basis to practise healthcare, nor to know the effects of treatments.
The source of a claim is irrelevant to the truthfulness of the claim. Only the process of generating knowledge – through rigorous research – has any ‘authority’. It’s about understanding the evidence base behind a claim, not just accepting the claim at face value.
For example, why should a layman have any confidence in the claim that the universe is around 14 billion years old, rather than thousands or trillions of years old? You should not accept this claim just because it’s claimed to be true, but rather because rigorous calculations have been carried out to address this question.
The more you educate yourself about how a particular conclusion has been reached, or why a given claim is being made, the less you simply have to ‘bet on’ the claims of others.
Doctors, researchers, patient organisations and other authorities often disagree about treatment effects. Some difference of opinion and controversy in treatment policies is difficult to avoid (for instance, where there is a lack of research on a particular treatment or health problem). However, other differences in opinion may simply arise because researchers, doctors and others are not taking account of systematic reviews of fair comparisons of treatments.
Without taking into account all of the available evidence on a particular treatment, differences in opinion about the safest, most effective treatments are inevitable. And just like everyone else, experts are prone to bias and errors of reasoning. So if clinicians simply do what they think is best, rather than basing their decisions on all the available evidence, they may not be adopting the best treatment policy. And variation from the current best treatment policy can be harmful.
There are clues that can help in predicting whether experts (or anyone, for that matter) might be making a reliable claim. For instance, you may be more inclined to trust a claim if it has been made by an individual who does not have any conflicting interests. (Conversely, you may be more sceptical of a claim about a new ‘miracle’ drug if you see that the person making the claim has financial links to the pharmaceutical industry!)
Nonetheless, there is no real substitute for a claim that is based on a systematic review of the evidence. So don’t judge a book by its cover and assume that an ‘expert’ is correct just because they are considered an ‘expert’. Don’t rely on opinions of experts about the effects of treatments unless they’ve clearly based their claims on fair, systematic evaluations of treatments.