Evidence-based health practice: a fairytale or reality?
Posted on 1st November 2017 by Leonard Goh
This blog, written by Leonard Goh, was the winner of Cochrane Malaysia and Penang Medical College’s recent evidence-based medicine blog writing competition. Leonard has written an insightful and informative piece to answer the question: ‘Evidence-based health practice: a fairytale or reality’.
Details of the other winners are here – many congratulations to you all.
The philosophy of EBM has its origins in the 19th century, though it was only relatively recently, in 1991, that Gordon Guyatt first coined the term in his editorial, espousing the importance of a clinician’s ability to critically review the literature and synthesize new findings into their practice. His comments sparked off the EBM movement in a medical fraternity that was increasingly dissatisfied with basing clinical practice on anecdotal testimonials, leading to the worldwide incorporation of EBM classes into both undergraduate and postgraduate programmes, as well as workshops for already-practising clinicians.
It has to date percolated into the public arena, to the point where it is not uncommon to hear patients asking, “so what is the evidence for this treatment you are proposing?”
EBM was formally defined in a 1996 article as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients”, and subsequently revised in 2000 to mean “a systematic approach to clinical problem solving which allows the integration of the best available research evidence with clinical expertise and patient values”. The revision attempted to place equal emphasis on the clinician’s individual clinical competency and cultural competency, addressing the common criticism of the earlier definition that it amounted to “cookbook medicine” which negated individual clinical expertise and patient choice.
There is a constant struggle to balance clinical decisions across these three ideals, especially when the current best evidence contradicts individual experience or patient choice. But the main issue with EBM is, in my opinion, the validity and applicability of research results to the real world. Research conclusions are drawn almost entirely from statistical analysis, yet the vast majority of clinicians lack the in-depth understanding of statistics needed to interpret them correctly.
Take the p-value, for instance. It is commonly understood that results are statistically significant if the corresponding p-value is below 0.05, and not significant if it is above 0.05. This dichotomy, however, is simply not true.
The p-value threshold of 0.05 is an arbitrarily defined convention, first proposed by RA Fisher in 1925 and used ever since for convenience’s sake. Furthermore, statistical significance should in fact be seen as a continuum; a result with a p-value of 0.049 is not markedly more significant than one with a p-value of 0.051, yet we proclaim the former a significant finding and disregard the latter. This invariably leads to faulty reporting of study conclusions.
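To make the continuum point concrete, here is a small illustrative sketch in Python (the test statistics are hypothetical, not taken from any study) showing how two near-identical results can land on opposite sides of the 0.05 threshold:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

# Two hypothetical trials with nearly identical test statistics:
p_a = two_sided_p(1.97)  # ~0.049 -- declared "significant"
p_b = two_sided_p(1.95)  # ~0.051 -- declared "not significant"
print(round(p_a, 3), round(p_b, 3))
```

The evidence in the two hypothetical trials is practically identical, yet the convention labels one a positive finding and dismisses the other as a null result.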
Digging deeper into the principles of the p-value, it becomes apparent that there is a subtle difference between our commonly conceived notion of it and its exact meaning, one that can elude even the most astute of clinicians: the p-value does not comment on the truth of the null hypothesis, but rather on the compatibility of the research data with that hypothesis. To remedy this, the American Statistical Association published a statement on statistical significance and p-values, but we can be sure it will take a significant (pun unintended) amount of time before these deeply entrenched misunderstandings are rectified.
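One way to see what the p-value does and does not tell us is a simulation (an illustrative sketch, not from the article above): even when the null hypothesis is true by construction, roughly 5% of studies will still produce p < 0.05, so a small p-value on its own cannot tell us that the null hypothesis is false.

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

# 100,000 simulated studies in which the null hypothesis is TRUE
# (the test statistic is pure standard-normal noise):
n = 100_000
false_alarms = sum(two_sided_p(random.gauss(0.0, 1.0)) < 0.05 for _ in range(n))
print(false_alarms / n)  # close to 0.05: "significance" despite a true null
```

About one study in twenty reaches “significance” here even though there is, by construction, no effect at all; the p-value measures compatibility of the data with the hypothesis, not the hypothesis’s truth.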
This insufficient understanding of statistics means that research studies are vulnerable to statistical manipulation, whether unintentional or otherwise. Indeed, Stephen Ziliak and Deirdre McCloskey, authors of the book The Cult of Statistical Significance, estimate that between 80% and 90% of journal articles have serious flaws in their use of statistics; even papers published in the New England Journal of Medicine were not spared.
On top of that, huge numbers of papers are published every day, making it a Herculean task to determine what constitutes “current best evidence”: it takes time to do a literature search, identify papers, and evaluate them, time which our clinicians simply do not have. Even when correct methodologies are in place, there is concern that claimed research findings may be merely accurate measurements of the prevailing bias.
Adding insult to injury is the presence of predatory open-access journals in the publishing industry. Neuroskeptic, an anonymous neuroscientist-blogger, illustrated this by submitting a Star Wars-themed spoof manuscript, absolutely devoid of scientific rigour, to nine journals: the American Journal of Medical and Biological Research accepted it, and the International Journal of Molecular Biology: Open Access, the Austin Journal of Pharmacology and Therapeutics, and the American Research Journal of Biosciences published it.
While Neuroskeptic’s intent was not to make a statement about the brokenness of scientific publishing, but rather to remind us that some journals falsely claim to be peer-reviewed, the sting also highlights the very real possibility of better-concealed, intellectually dishonest papers masquerading as legitimate science, impeding efforts to uphold an evidence-based health practice.
Is evidence-based medicine then an unrealisable fairytale?
To conclude as such would perhaps be exceedingly harsh. Yes, our practice of evidence-based medicine is flawed, as the preceding paragraphs point out. That, however, is not to suggest that we should cease striving to improve upon it. Evidence-based medicine represents our best hope of ensuring that we provide our patients with the best available treatment options, and we should persevere in our endeavours to further actualise this fairytale into reality.
But should this be our utmost priority?
In today’s world, where our medical profession is increasingly governed by statistics and algorithms, it is easy to mistake evidence-based medicine for a panacea. We would however do well to remember that as much as it is a science, medicine is also an art. It is crucial that we do not lose sight of our raison d’être – to cure sometimes, relieve often, and comfort always.
- Guyatt GH. Evidence-based medicine. ACP J Club [Internet]. 1991 [cited 2017 Sept 25]:A-16. Available from: http://www.acpjc.org/Content/114/2/issue/ACPJC-1991-114-2-A16.htm
- Komatsu RS. Evidence based medicine is the conscientious, explicit, and judicious use of current evidence in making decisions about the care of individual patients [Abstract]. Sao Paulo Med J [Internet]. 1996 [cited 2017 Sept 25];114(3):1190-1. Available from: https://www.ncbi.nlm.nih.gov/pubmed/9181752
- Sackett DL, Straus SE, Richardson WS, Rosenberg W, Haynes RB. Evidence-based medicine: how to practice and teach EBM. 2nd ed. London: Churchill Livingstone; 2000.
- Hackshaw A, Kirkwood A. Interpreting and reporting clinical trials with results of borderline significance. BMJ [Internet]. 2011 [cited 2017 Sept 25];343:d3340. Available from: http://www.bmj.com/content/343/bmj.d3340
- Wasserstein RL, Lazar NA. The ASA’s statement on p-values: context, process, and purpose. Am Stat [Internet]. 2016 [cited 2017 Sept 25];70(2):129-33. Available from: http://amstat.tandfonline.com/doi/pdf/10.1080/00031305.2016.1154108
- Ioannidis JPA. Why most published research findings are false. PLoS Med [Internet]. 2005 [cited 2017 Sept 25];2(8):e124. Available from: https://doi.org/10.1371/journal.pmed.0020124
- Neuroskeptic. Predatory journals hit by ‘Star Wars’ sting [Internet]. 2017 [cited 2017 Sept 25]. Available from: http://blogs.discovermagazine.com/neuroskeptic/2017/07/22/predatory-journals-star-wars-sting/#.WcfXB8gjE2z