A quick note: This article is written for the general public, not for researchers, who are likely already familiar with the system’s nuances.
In the middle of debates, you’ll often see people throwing around the term “peer-reviewed” as if it’s some sort of magical stamp of approval. “The study I cited was peer-reviewed,” they say. But what does that really mean?
Contrary to popular belief, peer review is not a guarantee of truth. In fact, it’s a process that is often flawed and biased. That is not to say peer review isn’t useful; far from it. However, it’s much less reliable than many people think. If your image of “peer-reviewed” involves a team of experts meticulously replicating a study before it’s published, you might be surprised.
At its core, peer review simply means that a scientific manuscript has been read and critiqued by other experts in the field. Ideally, this process should be conducted anonymously to prevent any bias. During this process, reviewers assess the study’s methodology, the consistency of its arguments, its clarity, and its overall significance. They’re looking for obvious flaws, glaring inconsistencies, or ethical breaches. Think of them as gatekeepers, whose main job is to ensure a paper isn’t complete nonsense.
“Wait, you mean they don’t replicate the study?”
Well, no. It can be done, but in practice it almost never is. Replicating a study is a time-consuming and expensive process. It requires a lot of resources, and even then, success isn’t guaranteed. For example, if your replication study fails, is it because the original study was flawed, or because your replication was flawed? It opens up a whole new can of worms. Hence, most studies are not replicated, at least not as part of peer review. This means a paper can be peer-reviewed and published, yet still contain errors that only come to light much later, when other scientists attempt to build on the work.
The system is also constrained by the sheer volume of papers submitted to journals. Reviewers typically don’t get paid for their work, and they have limited time to spend on each manuscript. They may be juggling several reviews amidst an already chaotic schedule of teaching, research, and administrative duties. Since this work is voluntary, it’s no surprise the quality of review can be inconsistent. Reviewers often have little direct incentive to dedicate hours of unpaid effort to scrutinizing a paper.
To illustrate, in 2017 a neuroscientist writing under the pseudonym Neuroskeptic managed to dupe four journals by submitting a paper that was complete nonsense. It was titled “Mitochondria: Structure, Function, and Clinical Relevance”. While the title looks plausible, the content was filled with gibberish, such as: “Midi-chlorians are microscopic life-forms that reside in all living cells—without the midi-chlorians, life couldn’t exist, and we’d have no knowledge of the force. Midichlorial disorders often erupt as brain diseases, such as autism.” You can read more about it here. It’s important to note, however, that not all peer review is created equal. The journals duped in that hoax are known for lower standards, often operating on a model where authors pay a fee for publication. In contrast, top-tier journals have more rigorous, multi-stage review processes, and their rejection rates are significantly higher. While even these prestigious journals can make mistakes, their process is designed to be far more thorough.
Nonetheless, despite these very real issues, peer review is still the best system we have for ensuring the quality of scientific research. It acts as a sanity check, preventing vast amounts of junk or even nonsensical research from being published. For every flawed paper that slips through, many more are rejected or significantly improved by diligent reviewers. It’s not perfect, but it’s better than nothing. Of course, there are ways to improve the system. For example, we could better recognise and incentivise the work of reviewers, whether through formal acknowledgement, certificates, or, even better, monetary rewards. There is also a growing trend of journals adopting open peer review, in which the reviews are published alongside the paper, holding the process accountable to the public. One such example is a paper titled “From tumors to species: a SCANDAL hypothesis”.
So, what’s the takeaway? Don’t blindly trust a study just because it’s “peer-reviewed,” even if it comes from a reputable journal. As a reader, maintain a healthy skepticism. This is also why it’s wise to avoid citing studies far outside your own field of expertise: spotting subtle but critical flaws is a skill that takes years of immersion in a field to develop.