Posted on September 18, 2014
Key message: Evidence Based Medicine is useful for telling healthcare professionals what works and what doesn’t, and for helping to determine whether the benefits of a treatment outweigh the harms, but it’s far from perfect. There are valuable lessons learned about research that we can share across disciplines.
In 2005, Dr. John Ioannidis, a well-known meta-researcher, published an article in PLoS Medicine called Why Most Published Research Findings Are False. This article caused a splash and has been making waves in the medical research community ever since. His paper is a bit technical (go for it if you can), but I recommend everyone at least read the less technical, narrative write-up about his research here in The Atlantic. He raises a number of very serious issues that have plagued the medical research community. I’ll try to summarize some of his concerns.
These factors collectively have led Dr. Ioannidis to conclude that a large part of the evidence that doctors and healthcare providers have come to rely on, including major foundational studies used to treat patients, is frequently misleading, exaggerated, and often flat-out wrong.
When I first read his article, I was a bit shocked and in disbelief that Evidence Based Medicine could possibly be this wrong. The article left me asking: what now?
Thankfully, since his original article was published, many research and academic groups have made progress in several of these areas. In a 2014 JAMA article, Dr. Ioannidis returns to suggest additional solutions, particularly with regard to changing the reward system so that it prioritizes quality of design over quantity of research. He recommends a list of reward criteria, or principles, to help appraise and identify desirable research methods. He calls it the “PQRST”, which stands for Productive, Quality, Reproducible, Shareable and Translatable.
Let’s break this down.
Productive: This means setting a clear definition of what it means to be ‘productive’ in research, for example the number of publications in top-tier journals, or the share of citations within each scientific field per year, rather than simply publishing something somewhere for the sake of getting published.
Quality: This means setting high publication standards as appropriate in each field for research methods and study designs. This is important to ensure increased reliability and credibility of results. These standards should also be easily verifiable.
Reproducible: This means making sure the raw data and methods are clear, so other independent researchers can (and should) reproduce the study.
Shareable: This means registering and sharing the data, materials and protocols of all trials.
Translatable: This means ensuring the research is relevant and can be applied in real-life settings.
The challenge of reproducing or reanalyzing previous studies was featured in the latest issue of JAMA and discussed in Richard Lehman’s BMJ blog. In the article, called “Reanalyses of Randomized Clinical Trial Data”, Dr. Ebrahim, who works as part of Dr. Ioannidis’s team of researchers, found that:
“A small number of reanalyses of RCTs have been published to date. Only a few were conducted by entirely independent authors. Thirty-five percent of published reanalyses led to changes in findings that implied conclusions different from those of the original article about the types and number of patients who should be treated.”
So we still have a long way to go. The good news is that research and efforts on improving Evidence Based Medicine are ongoing. It seems as though we are making progress by identifying weaknesses and addressing them.
I’d like to point out that Alice Buchan, an S4BE Pioneer, wrote a wonderful piece here earlier this year, based on a series of Lancet articles about increasing value and reducing waste in research. Waste in research is, of course, a related topic, and there have been some great ideas on how to improve priorities as well as reduce waste in research.
As Students 4 Best Evidence, we represent a variety of different medical disciplines. We all value research and evidence as part of the clinical decision-making process. Perhaps these articles should give us pause about the state of our evidence and help us think about possible solutions for the so-called “Evidence Based Medicine problem”.
What do these challenges mean for the state of research and evidence in our own respective disciplines? Have you seen any of these issues raised or solutions implemented in your field? How can we as students implement these ‘lessons learned’ into our respective fields and influence others to do so?
Ebrahim S, Sohani ZN, Montoya L, et al. “Reanalyses of Randomized Clinical Trial Data.” JAMA 312, no. 10 (September 10, 2014): 1024–32. doi:10.1001/jama.2014.9646.
Ioannidis JPA. “Why Most Published Research Findings Are False.” PLoS Med 2, no. 8 (August 30, 2005): e124. doi:10.1371/journal.pmed.0020124.
Ioannidis JPA, Khoury MJ. “Assessing Value in Biomedical Research: The PQRST of Appraisal and Reward.” JAMA 312, no. 5 (August 6, 2014): 483–84. doi:10.1001/jama.2014.6932.
Young NS, Ioannidis JPA, Al-Ubaydli O. “Why Current Publication Practices May Distort Science.” PLoS Med 5, no. 10 (October 7, 2008): e201. doi:10.1371/journal.pmed.0050201.