Peer review and publication do not guarantee reliable information
Posted on 16th January 2018 by Dennis Neuen
This is the twenty-second blog in a series of 36 blogs based on a list of 'Key Concepts'. Each blog will explain one Key Concept that we need to understand to be able to assess treatment claims.
What is peer-review?
A peer-reviewed journal article is one that has been independently assessed by others working in the same field.
Peer review can occur both before and after publication, but pre-publication review is considered standard practice in academia. The concept of peer review dates back to 1731, when the Royal Society of Edinburgh distributed a notice stating:
"Memoirs sent by correspondence are distributed according to the subject matter to those members who are most versed in these matters. The report of their identity is not known to the author."
Fast forward to today, and peer review is still valued as quintessential for quality control and the functioning of the scientific community. The Royal Society, Britain's national academy of sciences, proudly declares:
"Peer review is to the running of the scientific enterprise what democracy is to the running of the country."
How does peer review work?
Peer review varies widely between journals; there is no universal process. A summary of the standard process is shown in Figure 1. There are three main types of peer review, which The Royal Society of Edinburgh outlines:
- Single-blind review
- The most common type in medical journals: the author and institution are known to the reviewer, but the reviewer's name remains anonymous to the authors
- Example: New England Journal of Medicine (NEJM)
- Double-blind review
- Neither author nor reviewer is known to the other; only the Editor knows their identities
- Example: Medical Journal of Australia (MJA)
- Open review
- Authors and reviewers are known to each other throughout the process
- Example: British Medical Journal (BMJ)
Is there bias associated with peer review?
Peer review is by no means perfect. It is itself subject to bias, as most things in research are. The fact that evidence comes from a peer-reviewed article does not, by itself, make that evidence reliable.
For example, there is evidence suggesting poor interrater agreement among peer reviewers, with a strong bias against manuscripts that report results contradicting reviewers' theoretical perspectives. Although a study reported in the Journal of General Internal Medicine showed that reviewers agreed barely beyond chance on recommendations to accept/revise vs. reject, editors nevertheless placed considerable weight on reviewer recommendations. In addition, it has been shown that large numbers of public reviewers can review academic articles more thoroughly than a small group of experts.
There is also ongoing debate about reviewer bias in the single-blind peer review process. Some suggest that if reviewers know the identity of authors, there may be implicit bias against women, against those with foreign last names, and against those from less prestigious institutions. Therefore, some researchers argue that double-blind peer review is preferable.
In addition, some argue that, for multidisciplinary articles, it is difficult to recruit reviewers who are well-versed in all the relevant methodologies, since such articles tend to cover multiple topics in a single study; this works against the authors of such papers.
Examples from the past
No matter what review system is used, and whatever bias it may create, there is always the potential for major and minor errors to be missed:
- Vaccination and Autism
This is arguably the most famous retracted journal article in history. Andrew Wakefield reported a small study in The Lancet which he claimed suggested that measles, mumps and rubella (MMR) vaccination might cause autism. Wakefield selected participants and manipulated diagnoses and clinical histories to promote undisclosed financial interests. This paper resulted in a rise in measles and mumps, serious illness, and some deaths.
- Deliberate errors inserted
A BMJ study deliberately inserted eight errors into a 600-word report of a study about to be published and then sent it to 300 reviewers. The median number of errors spotted was two, and twenty per cent of reviewers did not spot any. Reviewers overlooked major errors, such as methodological weaknesses, inaccurate reporting of data and unjustified conclusions, as well as minor errors such as omissions.
- COOPERATE study
The COOPERATE study investigated therapy with an angiotensin-converting-enzyme inhibitor and an angiotensin-II receptor blocker, finding that the combination was better than monotherapy in non-diabetic renal disease. The study was published in The Lancet in 2003 and was retracted after the discovery of major flaws. Contrary to what had been reported, the trial was never approved by an ethics committee; the lead author had lied about obtaining informed consent; the involvement of a statistician could not be verified; the treatment was not double-blind, since the lead author was aware of the allocation schedule; and the investigating committee was unable to establish the authenticity of the sample of data produced by the lead author.
So what can we do?
While peer review cannot fairly be blamed for missing some of these errors (e.g. Wakefield's manipulation of data, or COOPERATE's lead author lying about ethics approval), these cases remind us that peer review does not guarantee reliability. Some things are beyond our control; however, here are some things we can do:
- Critically appraise the article yourself, especially the Methods section
Don't just read only the abstract or the main results. Read the paper from start to finish, paying particular attention to the Methods section. Critically appraise the paper yourself, with help from some of the other 'Key Concept' blogs in our series. Ask yourself: which features could lead to bias? And, just as importantly, what is missing that should have been included, and could its absence cause bias?
Critical appraisal and assessing the risk of bias are not skills that can be picked up overnight. One way of making critical appraisal easier and more structured is to use Critical Appraisal Tools (CATs) or checklists, such as those offered by the Critical Appraisal Skills Programme (CASP) UK, the Scottish Intercollegiate Guidelines Network (SIGN) or the Centre for Evidence-Based Medicine (CEBM). Other resources that might be helpful are the EQUATOR network guidelines, which have handy checklists for each study design to promote accurate and transparent reporting. Students 4 Best Evidence have collated a list of these CATs and others used all over the world, which you can find here. Be aware that these tools can also introduce bias, but they are a good starting point for those learning how to appraise evidence.
- Maintain a healthy dose of scepticism
We don't believe everything on the internet or everything shown on TV. Similarly, we need to critically appraise research, regardless of whether it was published in a high-impact journal such as the NEJM or The Lancet. It is not the journal's impact factor that matters; it is the quality of the article itself, which you can assess yourself. Isn't it better to have a working second-hand Hyundai than a Lamborghini without wheels? Perhaps the phrase should be: don't judge an academic article by its journal.
Editorial peer review remains a cornerstone of academic medical scholarship, and it is widely regarded as promoting high-quality reports of research. However, surveys of the quality of reports of medical research make clear that peer review does not guarantee adequate reporting. Furthermore, Cochrane reviews of research assessing the effects of peer review make clear that the process does not deliver what it is widely assumed to achieve. We must critically appraise articles ourselves to maximise the chance of catching mistakes missed during the peer-review process.