Academic researchers have grown accustomed to a world in which small cliques dominate certain journals and establish back channels that benefit themselves, their friends, and their students. Hence the (empirically verified) adage that science proceeds one funeral at a time.
In some fields, political posturing often seems indistinguishable from scholarship, unpopular ideas are made to disappear, and researchers who dare to challenge orthodoxy are mobbed. These ill effects have their greatest impact on junior researchers who, instead of being encouraged to challenge conventions, are pressured to seek acceptance — much like a college freshman rushing a fraternity or sorority. The result of the academic hazing process is watered-down research, stifled creativity, and groupthink. Its sociological impact is negative, but its scholarly implications are far worse. Indeed, findings published in the most prestigious scientific journals, including Science and Nature, have been shown to replicate at a rate only slightly better than 50%.
We attribute much of the blame for these issues to the conflicting functions of the current peer review system. On one hand, peer review serves an obvious scholarly function: providing an essential feedback mechanism to sharpen ideas and improve research output. On the other hand, it plays a central administrative function through which researchers garner prestige that helps in their pursuit of grants, promotions, awards, and other forms of recognition.
But just as one cannot serve two masters, peer review cannot serve both roles in equal measure. By the nature of these conflicting functions, what benefits one necessarily obstructs the other. At present, it is peer review’s administrative arm that dominates, twisting an otherwise high-minded pursuit into “a priesthood or a guild” or even a “cult.” This feeds academia’s publish-or-perish culture, generates massive profits for publishers, and reinforces hierarchical structures via a sociological process of “inclusion-exclusion.”
To be clear, some of the purposes these administrative functions serve are important. However, such tasks, no matter how necessary, should not take precedence over the scholarly principles on which academic institutions are founded. A successful peer review model must separate the scholarly functions from the administrative functions.
One common objection to a gatekeeper-free peer review model is that it will cause the published literature to degrade. Without a filter, what will stop the literature from being inundated with “veritable sewage”? To this, we have a few replies.
To begin with, frankly, many fields are already inundated with garbage. Recent analyses have found that a majority of efforts in fields like biomedical research may be wasted due to poor experimental design and avoidable errors. In economics, which is perceived by some as the crown jewel of social research, as many as 90% of published studies may be underpowered, and 80% or more of the reported effects may be exaggerated. P-hacking, conflicts of interest, blatant errors, even outright fraud – all make it through peer review with disturbing regularity. It is no wonder that the literature in many fields is so rife with replication failures and false positives. It may even be the case that most published research findings are false.
Astonishingly, when confronted with this record, many simply double down on calls for more effective gatekeeping. But the gatekeepers themselves are a big part of the problem.
After all, on its own an unreliable finding causes little harm and can even be instructive. The real danger lies in giving readers a false impression that low-quality work has been filtered out when it hasn’t – for instance, by erroneously assigning such studies the “peer-reviewed” seal of quality from a reputable journal.
As we argued in our essay, “In peer review we (don’t) trust,” there is reason to expect that removing the filter will actually improve the quality of published science: by heightening readers’ alertness and the skepticism with which they approach each article, errors in one article are less likely to propagate throughout the literature. Authors can also be expected to raise their own quality standards in anticipation of increased reader skepticism.
Indeed, the concern that literature quality will degrade without stern gatekeeping reflects a cynical attitude — one that regards researchers as self-interested ne’er-do-wells instead of truth-interested scholars. But in fact, it is the current “publish or perish” system that drives many towards scholarly malfeasance and creates incentives for scholars to conceal weaknesses in their studies or exaggerate their findings.
The gatekeeping role of peer review allows the simple act of publishing to be regarded as an accomplishment. Because such “accomplishments” — or the lack thereof — are used by administrators for their administrative tasks (e.g., hiring committees ranking applicants), career-oriented researchers are encouraged to game the system for their own benefit. Absent the conferral of credibility associated with placing one’s research in a journal, there would be little to gain by manipulating, exaggerating or otherwise misrepresenting one’s findings.
In a gatekeeper-free model, bad research is no longer amplified in the same way, and subversive activities are no longer rewarded — we can do away with these ill effects by eliminating their common cause. This means removing all barriers to publication, including editorial board gatekeepers and accept/reject decisions. It is toward this objective that we have launched Researchers.One, a peer-review publishing platform that emphasizes discourse, critique, and quality without the administrative conflicts plaguing traditional outlets.
Centering Scholarship and Scholarly Exchange
The defining feature of Researchers.One is its autonomous peer review model, which vests authors with full control of the peer review process, and in doing so eliminates many of the social and political forces that plague the current model. Researchers can put forth whatever new ideas or results they have in an open-access platform where readers can, and are encouraged to, provide direct, non-anonymous, constructive feedback.
In this alternative system, authors enjoy authority and autonomy over their work and are free to publish whatever they wish. But without the protection of an editorial board or the cachet of journal prestige, authors also bear the responsibility of making sure that their ideas stand up to scrutiny. This dual role of researcher autonomy on Researchers.One, providing freedom but also imposing responsibility, strikes a delicate balance which promotes quality, fairness, and transparency — attributes noticeably absent from the current system. Such autonomy also provides an organic enforcement of viewpoint diversity: without cliques, mobs, or cartels deciding what’s published and what’s not, all viewpoints are offered the opportunity to fend for themselves in a (truly free) marketplace of ideas.
Researchers.One is also perfectly compatible with other concurrent reform initiatives, such as the Registered Reports paradigm. As one of us recently demonstrated (pre-registering an approach for testing the accuracy of election predictions prior to the 2018 midterm elections), researchers can pre-register their analysis by publishing their study design to the platform just as they would submit to a Registered Reports journal. The design can be critiqued and reviewed publicly, providing feedback for improved methods before the replication attempt is carried out. And the results of the replication effort can be published separately, on Researchers.One or elsewhere, once the experiment has been completed. All of this can be done out in the open, with transparent peer review and immediate publication of results.
Science as a Dialogue
A common theme among theories of knowledge put forth by Carnap, Popper, and other philosophers of science is that “truth” can be ascertained only in a limiting or aggregate sense.
If we think of a scientific article as a mini-theory, then rigorous confirmation of that theory can be achieved only by surviving the scrutiny of many readers over a period of time. In this light, the way in which scientific journals evaluate research — a one-time assessment by a small number of reviewers — is unscientific, and has proven unreliable for the purpose of charting scientific progress.
Rather than relying on one-time assessments, which are bound to be affected by bias and short-sightedness, the ideal should be to foster communities in which research is disseminated, discussed, and evaluated organically, as part of an ongoing process of continual refinement. Scientific progress is made in labs and research meetings where questions are asked and ideas are challenged, not in the distilled pages of academic journals. At Researchers.One, our hope is to bring these discussions to the forefront by making the scientific process — with its perpetual tinkering and rethinking — the core of the scholarly discourse.
Harry Crane (@HarryDCrane) is Associate Professor of Statistics at Rutgers University and Co-Founder of Researchers.One
Ryan Martin (@statsmartin) is Associate Professor of Statistics at North Carolina State University and Co-Founder of Researchers.One
As an organization that prizes pluralism and disagreement — with more than 2,500 members holding diverse views on most issues — Heterodox Academy almost never takes positions as an organization on current events and controversies. Opinions expressed here are those of the author(s). Publication does not imply endorsement by Heterodox Academy or any of its members. We encourage readers to follow us on Facebook, Twitter and LinkedIn — and to join in the conversation on those forums — to weigh in on this or other posts.