The Evidence Based Medicine problem and what to do about it
Posted on 18th September 2014 by Danny Minkow
Key message: Evidence Based Medicine is useful for informing healthcare professionals about what works and what doesn't, and for helping to determine whether the benefits outweigh the harms, but it's far from perfect. There are valuable lessons about research that we can share across disciplines.
What is the Evidence Based Medicine problem?
In 2005, Dr. John Ioannidis, a well-known meta-researcher, published an article in PLoS Medicine called Why Most Published Research Findings Are False. This article caused a splash and has been making waves in the medical research community ever since. His paper is a bit technical (go for it if you can), but I recommend everyone at least read the less technical, narrative write-up about his research here in The Atlantic. He raises a number of very serious issues that have plagued the medical research community. I'll try to summarize some of his concerns.
- Publication bias, which means that if researchers find that an intervention had little or no effect, those studies are less likely to be published. In other words, negative findings go unpublished.
- Overconfidence in studies that rely on a statistical p-value threshold of 0.05, which can produce false positives as well as false negatives.
- Lack of replication: many important studies are never repeated, or even 'repeatable', by independent researchers to verify the results. Some major studies that have been replicated produced surprisingly different, even contradictory, results.
- Small study sizes and underpowered methodologies. The fewer the subjects in a study, the less likely its findings are to be true.
- Selective outcome reporting, manipulation of data/fraud, and financial conflicts of interest. The greater the financial interest, the less likely a study's findings are to be true.
- The greater the flexibility in designing studies and in definitions, the less likely the research findings are to be true.
- The reward systems within medical research, particularly in academia, incentivize quantity of publications over quality of research.
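The concerns about p-values and statistical power can be made concrete. In his 2005 paper, Ioannidis models the positive predictive value (PPV) of a claimed finding: the share of "significant" results that are actually true, given the significance threshold α, the study's power (1 − β), and the pre-study odds R that a tested relationship is true. A rough numerical sketch of that formula (the function name and example numbers are illustrative, not taken from the article):

```python
def ppv(alpha, power, r):
    """Positive predictive value of a 'statistically significant' finding.

    alpha: significance threshold (false-positive rate), e.g. 0.05
    power: probability of detecting a true effect (1 - beta)
    r:     pre-study odds that a tested relationship is true
    """
    beta = 1 - power
    # Ioannidis (2005): PPV = (1 - beta) * R / (R - beta * R + alpha)
    return (power * r) / (r - beta * r + alpha)

# A well-powered study in a field where 1 in 4 tested hypotheses is true:
print(round(ppv(alpha=0.05, power=0.80, r=0.25), 2))  # 0.8

# An underpowered study chasing longer-odds hypotheses:
print(round(ppv(alpha=0.05, power=0.20, r=0.10), 2))  # 0.29
```

In the second scenario, even though every reported result clears p < 0.05, fewer than a third of those "positive" findings would actually be true. That arithmetic, before adding bias or flexible study designs, is the core of Ioannidis's headline claim.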
These factors collectively led Dr. Ioannidis to conclude that a large part of the evidence that doctors and healthcare providers have come to rely on, including major foundational studies used to treat patients, is frequently misleading, exaggerated, and often flat-out wrong.
When I first read his article, I was a bit shocked and in disbelief that Evidence Based Medicine could possibly be this wrong. The article left me asking: what now?
What can we do about this Evidence Based Medicine problem? How do we produce better research?
Thankfully, since his original article was published, many research and academic groups have made progress in several of these areas. In a 2014 JAMA article, Dr. Ioannidis returned with suggestions for additional solutions, particularly with regard to changing the reward system from prioritizing quantity of research over quality of design. He recommends a list of reward criteria, or principles, to help appraise and identify desirable research methods. He calls it "PQRST", which stands for Productive, Quality, Reproducible, Shareable and Translatable.
Let's break this down.
Productive: This basically means setting a fixed definition of what it means to be 'productive' in research, for example, the number of publications in top-tier journals, or the percentage of citations within each scientific field per year, rather than simply publishing something somewhere for the sake of getting published.
Quality: This means setting high publication standards as appropriate in each field for research methods and study designs. This is important to ensure increased reliability and credibility of results. These standards should also be easily verifiable.
Reproducible: This means making sure the raw data and methods are clear, so other independent researchers can (and should) reproduce the study.
Shareable: This means registering and sharing the data, materials and protocols of all trials.
Translatable: This means ensuring the research is relevant and can be applied in real-life settings.
The challenge in reproducing or reanalyzing previous studies was featured in the latest issue of JAMA and discussed in Richard Lehman's BMJ blog. In the article, called "Reanalyses of Randomized Clinical Trial Data", Dr. Ebrahim, who works as part of Dr. Ioannidis' team of researchers, found that:
"A small number of reanalyses of RCTs have been published to date. Only a few were conducted by entirely independent authors. Thirty five percent of published reanalyses led to changes in findings that implied conclusions different from those of the original article about the types and number of patients who should be treated"
So we still have a long way to go. The good news is that research and efforts on improving Evidence Based Medicine are ongoing. It seems as though we are making progress by identifying weaknesses and addressing them.
Conclusions & comment
I'd like to point out that Alice Buchan, an S4BE Pioneer, wrote a wonderful piece earlier this year, based on a series of Lancet articles, about increasing value and reducing waste in research here. Waste in research is, of course, a related topic, and there have been some great ideas on how to improve priorities as well as reduce waste in research.
As Students 4 Best Evidence, we represent a variety of different medical disciplines. We all value research and evidence as part of the clinical decision-making process. Perhaps these articles should give us pause about the state of our evidence and help us think about possible solutions for the so-called "Evidence Based Medicine problem".
What do these challenges mean for the state of research and evidence in our own respective disciplines? Have you seen any of these issues raised or solutions implemented in your field? How can we as students implement these 'lessons learned' into our respective fields and influence others to do so?
Ebrahim S, Sohani ZN, Montoya L, et al. "Reanalyses of Randomized Clinical Trial Data." JAMA 312, no. 10 (September 10, 2014): 1024-32. doi:10.1001/jama.2014.9646.
Ioannidis JPA. "Why Most Published Research Findings Are False." PLoS Med 2, no. 8 (August 30, 2005): e124. doi:10.1371/journal.pmed.0020124.
Ioannidis JPA, Khoury MJ. "Assessing Value in Biomedical Research: The PQRST of Appraisal and Reward." JAMA 312, no. 5 (August 6, 2014): 483-84. doi:10.1001/jama.2014.6932.
Young NS, Ioannidis JPA, Al-Ubaydli O. "Why Current Publication Practices May Distort Science." PLoS Med 5, no. 10 (October 7, 2008): e201. doi:10.1371/journal.pmed.0050201.