Preaching to the converted: What is evidence-based medicine?
Posted on May 15, 2013 by Jamie Loan
Many people have a rough idea of what evidence-based medicine (EBM) is. Everyone reading this, I imagine, will have a better idea than most of what it is and may need little convincing of its value. However, it is something that I personally have some difficulty with, so bear with me.
A quick flick through the Cochrane Collaboration website or PubMed will quickly throw up the well-known, and somewhat circular, assertion that “Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” (Sackett, et al. 1996). This gives us five adjectives: “conscientious”, “explicit”, “judicious”, “current”, and “best”. At face value these appear quite simple and positive; however, when applied to clinical practice, things can become more difficult. Each deserves an essay in itself but, seeing as this is the “Students 4 Best Evidence” network, I shall start with the last: “best”.
What is the “best evidence”?
Many tools have been developed to compare different types of evidence, and they do so at different levels: appraisal of individual studies, appraisal of multiple studies of the same thing, and appraisal of all studies pertaining to a certain field (a disease, a clinical question, or even an entire clinical specialty). First, individual studies have to be appraised to determine whether they have internal validity. That is, are the methods used sufficient to meet the study’s aims, and reported in such a manner that the study’s conclusions can be relied upon? Sadly, this is often not the case. This is especially so in preclinical medicine, where methods of reducing bias – e.g. double blinding and randomisation – may not be widely practised (van der Worp, et al. 2010), but it is also true of randomised clinical trials (RCTs), one of the (some may say “the”) most rigorous forms of comparative clinical intervention methodology available (Sjogren and Halling. 2002). Before moving on to more esoteric methods of comparing different items or levels of evidence, one must therefore ensure that a published study can hold its own weight and is not simply a very accurate measure of the biases arising from its design (Ioannidis. 2005). Ultimately, this rarely comes down to a simple yes or no. Studies carry different risks that their results are simply due to chance or to biases inherent in their methodology; these risks arise for different reasons, and the assessment of them is frequently subjective. If a number of studies exist that address your question using a relevant sample, it may be possible to pool their results. But when should this be done?
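The risk that a result is “simply due to chance” is easy to underestimate. The short simulation below is a hypothetical illustration, not from the article: it runs thousands of two-arm trials in which the treatment truly does nothing, and shows that roughly 5% of them still cross the conventional p < 0.05 threshold. All numbers (sample sizes, trial counts) are invented for demonstration.

```python
# Hypothetical illustration: even when a treatment has zero true effect,
# ~5% of trials will look "significant" at p < 0.05 by chance alone.
import numpy as np

rng = np.random.default_rng(42)

def null_trial(n_per_arm=50):
    """Simulate one two-arm trial where the drug has no real effect."""
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(0.0, 1.0, n_per_arm)  # same distribution: no effect
    # Welch-style t statistic; |t| > 1.96 approximates p < 0.05 at this n
    se = np.sqrt(control.var(ddof=1) / n_per_arm + treated.var(ddof=1) / n_per_arm)
    return abs(treated.mean() - control.mean()) / se

n_trials = 5000
false_positives = sum(null_trial() > 1.96 for _ in range(n_trials))
print(f"'Significant' null trials: {false_positives / n_trials:.1%}")
```

A single “positive” study, then, is weak evidence on its own; this is part of why the appraisal described above has to happen before any result is taken at face value.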
And how should it be judiciously selected?
The pooling of studies after a rigorous and transparent search process may be done in a qualitative or a quantitative fashion. These techniques, known as systematic review and meta-analysis, respectively, are seen to underpin modern evidence-based medicine – as evidenced by the logo of the Cochrane Collaboration, which depicts a forest plot, a key output of meta-analysis. However, they are not immune to the effects of bias and they do not always constitute the best evidence. This is because they can only be as good as the research that is included. It is therefore imperative that the authors of these meta-studies are (to use another adjective from the list above) judicious in their inclusion of studies: inclusion of one poor study can poison the outcome of an entire meta-analysis (Wise. 2013). If no studies meet the bar for inclusion, or if, even after pooling, there is insufficient statistical power to address the question of interest, then the review must conclude that the question is currently unanswerable.
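To make the point concrete, here is a minimal sketch of fixed-effect (inverse-variance) pooling, the arithmetic behind a forest plot. The effect sizes and variances are invented for illustration; the sketch shows how one large, biased study can dominate a pooled estimate and “poison” the result, exactly as described above.

```python
# A minimal sketch of fixed-effect (inverse-variance) meta-analysis.
# All effect sizes and variances below are hypothetical.
import math

def pool_fixed_effect(effects, variances):
    """Inverse-variance weighted mean effect and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Three small, honest trials finding essentially no effect...
effects = [0.02, -0.05, 0.04]
variances = [0.04, 0.05, 0.04]
print(pool_fixed_effect(effects, variances))   # pooled estimate near zero

# ...then include one large, biased trial reporting a strong benefit.
effects.append(0.60)
variances.append(0.005)  # large trial -> tiny variance -> enormous weight
print(pool_fixed_effect(effects, variances))   # estimate dragged towards 0.6
```

Because weights are inversely proportional to variance, the biggest study carries the most influence, which is why a single flawed but large trial can overturn the conclusion of several sound smaller ones.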
But what if such a review has not included a study that demonstrated a treatment effect, or what if, following pooling, this effect has vanished? In these cases, the clinician must be able to critically appraise both the primary research and the meta-research, and decide whether the meta-researchers were correct in excluding the primary data, or in including other (possibly biased) data that could have swamped a real treatment effect. In such a scenario, the best evidence is no longer the meta-analysis or systematic review, but the primary research. The clinician must therefore apply more critical appraisal than the authors of the meta-study did; they cannot simply rely on peer review, just as one should not rely solely on peer review of any primary data.
The conscientious use of current evidence
So one must be judicious in selecting which evidence is the best to apply in a clinical setting. One must also be conscientious and use current evidence. I will lump these two together as I believe they make similar points. In being conscientious it is important that one does all one can to ensure that the evidence you have – however methodologically sound it is – applies to the patient in front of you: you must read it fully, including those tables describing study populations that one is always tempted to rush over. This is to answer the question “does it have external validity?” Many experts (notably Dr. Ben Goldacre in his book “Bad Pharma”) have discussed the problem of studies using patient groups that are not comparable to those encountered in clinical practice. Ben Goldacre claims that these unusual groups are used for the purpose of making a drug look more effective. Whilst these practices are a problem, such studies are not entirely useless, so long as you see them for what they are (and so long as the result is real – i.e. not the outcome of a suspect subgroup analysis or primary outcome switching): if a drug is demonstrated to be useful in an unusual population then, if your patient could have been included in that population, the results of the study can be applied to your patient. When assessing this, one must also satisfy oneself that the proposed intervention is the same as (or sufficiently similar to) the one in the research paper in your hand. Another important way in which a study’s outcome may not generalise to your patient is when the study is old. Do the results of a ground-breaking study conducted in 1980 apply to your patient? If supportive care has changed, or other definitive interventions have been developed, then they may not.
Finally, one must be explicit. At a clinical level, this means that, where a patient wants it, or when you are writing a paper to be used by anyone involved in patient care, you must clearly state what evidence you have based your conclusions upon, as well as how and for what reasons. Further, a conscientious medical practitioner would explicitly state the weaknesses in the data that they are using. Obviously, this must be tailored to individual patient needs, and the last thing that many patients will want in times of illness is a discussion of research flaws, but the clinician should be aware of them – and of the gaps in their own knowledge – and be ready to share this. This ensures that the patient is able to make informed choices about their treatment, and about supporting research.
Outside of normal clinical practice, studies should be reported in a systematic manner that describes exactly what has been done and the important things that were not done. In this way, a well conducted and reported study that reveals no effect is better, in terms of its ability to inform clinical practice, than a positive study that attempts to hide possible biases in the write-up.
So, what is evidence-based medicine? Whilst it is, as Sackett et al (1996) state, an ideal for clinical practice, it is also an epistemological challenge: we must attempt to quantify not only how certain we are that the results of a study are real (and not statistical flukes), but also how certain we are that these quantifications, and the methods of arriving at them, are correct. In essence then, EBM is, for me, the process of critical review of all things and their application – including the continual critical review of EBM itself.
Ioannidis JPA. (2005) Why most published research findings are false. PLoS Med; 2(8): e124.
Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. (1996) Evidence based medicine: what it is and what it isn’t. BMJ; 312(7023):71-2
Sjogren P, Halling A. (2002) Quality of reporting randomised clinical trials in dental and medical research. British Dental Journal; 192:100-103
van der Worp HB, Howells DW, Sena ES, Porritt MJ, Rewell S, et al. (2010) Can Animal Models of Disease Reliably Inform Human Studies? PLoS Med 7(3): e1000245. doi:10.1371/journal.pmed.1000245
Wise J. (2013) Boldt: the great pretender. BMJ; 346:f1738