Posted on April 10, 2017
In this blog, Saul discusses problems with peer review in modern research, and possible solutions to these.
Peer review is the process by which research is scrutinised fairly but comprehensively to ensure it is robust and of sufficient quality for dissemination. The origins of peer review are not fully known, but many believe its modern practice in journal publishing was founded during the 18th century. The concept then found its way into the scientific community across the 19th and 20th centuries, as a way to handle the rapidly increasing number of research articles for print publication.
The peer review process begins with the researcher completing a manuscript and deciding to send their precious work to a relevant journal. Once the article lands on the editor’s desk, they will assess it for appropriateness. If the research fits the journal’s aims, they will invite various reviewers to examine the manuscript. After a nervous wait for the author(s), the journal then decides whether to publish it, influenced heavily by the expert reviews. Well, what’s wrong with that? In theory, this appears wonderful. However, in practice, peer review is not without its problems…
Time and effort
Peer review is a long process. From submitting a paper to finding reviewers, substantial time and effort is invested. No one enjoys this lengthy process; it is a large task for journals and a dent in efficiency for researchers.
Not only is identifying available yet qualified reviewers a major difficulty in modern peer review, but there are sometimes issues with those selected. Occasionally, reviewers are assigned a paper that they don't entirely understand, resulting in potentially bad research slipping through to publication. Fortunately, the opposite problem of excellent research being rejected through misunderstanding is far less common.
Peer review is not only a long process, it’s a costly one and it’s difficult to quantify the full range of expenses. Administrative costs make up a significant burden here, but journals are also a business. Publishing poor quality research can result in a lower number of average reads, stunting growth and potentially causing losses in revenue. Additionally, a reviewer’s time and work is worth something, but this is often not rewarded financially.
Most peer review processes are single-blinded: the authors are unaware of who the reviewers are, but the reviewers often know who the authors are and what their affiliations are. As a result, there is the potential for bias in the reviews based on seniority, gender, personal relations and rivalries. This is not favourable when the aim is to objectively assess the quality of the research.
The value of modern peer review is somewhat controversial. Its robustness is often called into question; even Robbie Fox, a former editor of The Lancet, used to joke that its peer review method was "throwing a pile of papers down the stairs and then publishing the ones that reached the bottom". It appears that as science continues to grow, peer review must begin to adapt along with it.
Double-blinding has been a major step forward for randomized controlled trials, ensuring that bias is minimised. It seems odd that this technique has not found its way to peer review, the very gatekeeper for scientific dissemination and progression. In theory, blinding both authors and reviewers allows the research content and quality to be evaluated more fairly.
Despite this promising idea, initial trials show that blinding reviewers to authors’ identities does not significantly improve review quality [3,4]. It has been argued, however, that the ability to identify authors through internal manuscript and referencing clues may have been responsible for this lack of improvement.
Conversely, if reviewers are meticulously scanning papers in an effort to discover who wrote them, then they are likely to have read the research thoroughly, something which can be severely lacking with current peer review. It may also encourage authors to reference a wider range of sources and limit self-citation, in an effort to prevent the blinded reviewers from detecting who they are.
In March 2015, Nature added the option for authors to choose whether to remain anonymous. Although this step towards improving peer review should be commended, the fact that the process is optional arguably defeats the purpose. It could lead well-established, high-profile researchers to keep their names and affiliations, whilst reviewers may well assume that all of the anonymous manuscripts are from unknown or low-profile researchers.
A 2009 survey showed double-blind peer review to be the most popular type of peer review process. However, there are concerns that implementing such a system will further reduce the number of willing and able reviewers.
Double-blinding has the potential to improve upon the peer review process, but previous attempts at implementation have proven underwhelming. Various challenges with its use must be tackled before it can be suitably evaluated for widespread adoption.
Post-publication review can act as a forum: an online platform whereby researchers can critique published research and the authors can respond. These types of open reviews for online articles are becoming increasingly common. Post-publication review could supplement existing review processes as a way to further evaluate research quality while also increasing reader engagement, interactivity and debate.
Incentive and accountability
These go hand-in-hand. Put simply, reward those who produce high quality, fair reviews and establish responsibility for unfair reviews. Accountability is important to reduce the number of low quality peer reviews as well as the number of poor articles being published. Making reviewers publicly responsible could discourage rushed or ill-informed work, improving the standards across research.
The BMJ rewards reviewers with certificates on request, annual publication of recognition, as well as discounts. I think this is fantastic, but more should be done to show appreciation for review work across the board. Relevant reviewers are always scarce and difficult to recruit, and providing tangible incentives could help to combat this issue. Not only could this make journal editors' lives easier, it could speed up the peer review process. With the rise of social media and online activity, perhaps peer review should also hop on board. Much like on Amazon or eBay, reviewers would be encouraged to build a reputable online 'reviewer image', ensuring they are recognised for their work and allowing editors to pick the most suitable reviewers.
Peer review has been a foundation of scientific progression for years. Nevertheless, it has failed to evolve with the fast-moving world of research. The task of updating the review process is an important, yet complex one, and the debate surrounding this adaptation is very much alive.