Managing the Metrics of Academic Publishing
A new paper in Social Science Information argues that the metrics of academic publication, intended to measure — and reward — academic performance, do nothing of the sort. Such universal measures as the citation index and the journal impact factor no longer measure performance, but rather the ability to game the system. Even university administrators and government policy makers are attached to measures they can use without any substantive knowledge of what is being measured. All actors in academic publication — individual academics, editors, publishers, universities — game to create the metrics that will bring them the most individual benefit. Gaming is not optional in higher education: manipulating publication metrics is key to remaining competitive, and the practice is universal.
While metrics were originally intended to measure academic performance, they have now become the primary purpose of academic publication, an end in themselves. The content of journal papers is rapidly becoming irrelevant, not least because content is hard to measure and harder still to game, while metrics have proved extraordinarily easy to game. For too long, this has been seen as a problem of measurement failing to capture the scholarship in academic papers. The real problem is that scholarship makes little contribution to measures of academic performance and is even likely to be an obstacle to achieving high scores.
A Growing Problem
Many forms of gaming are — like white lies — considered mild and are scarcely noticed: authors self-citing more than ever, editors pleased when authors cite more of the journal’s own papers, universities congratulating employees for publishing in top journals, authors making the most of their research by spinning one paper into three or four, social and other media coverage emphasizing the impact of a group’s research, new research collaborations bringing a proliferation of coauthors, each likely to self-cite, and so on. But this is amateur stuff, now overtaken by a more sophisticated and more extreme variety.
It has become normal for editors to coerce authors into citing their journal’s papers on pain of rejection; for universities to pay publication bounties, sometimes well in excess of salaries; and for authorship to be extended to any name that will contribute to a paper’s metrics. The average social science paper today has more than five times as many references as its equivalent in the 1960s. This is modern gaming, and it has nothing to do with the creation of knowledge. Roughly a quarter of citations in top journals turn out to be wrong. Even when citations aren’t clearly incorrect, authors are unfamiliar with about half of the works they cite.
Concerted efforts to maximize the metrics have had a number of deleterious effects on the quality and character of published research. Journal papers have become formulaic, designed to fit citation requirements rather than to say anything new, or perhaps anything at all. Many are never intended to be read, and some are quite unreadable. Reading has come to be seen as a rival to writing, an activity to be minimized by reliance on abstracts. As Academia.edu cheerily observed in a personal email last December, “You could have saved 19,402 minutes by using Summaries.”
Many academic papers are fulsome in their praise of established thinking. Research that appears to be novel or risky or niche or — God help us — critical of prevailing views is eschewed because it does not fit within the mainstream and is therefore unlikely to be cited. Fundamental to gaming is rehashing the old hat and avoiding anything resembling genuinely radical research, or even research at all. Vacuous “water is wet” papers that can be cited just about anywhere in support of just about anything are highly valued.
The Lesson from Medicine
No discipline provides a better example of extreme gaming than medicine. Authorship of papers published in medical journals is now quite divorced from the writing of these papers. Bylines have instead become a matter of entitlement based on customary rights, social priority, and commercial requirements. By manipulating authorship, stakeholders can enjoy the benefits of publishing a paper detached from any responsibility for actually producing it. Author lists in medical journals can be dozens of names in length, occasionally thousands, and much longer than the paper itself. Even then, an author list may not contain the name of anyone who actually wrote the paper.
Pharmaceutical companies typically organize research all the way from the laboratory to the market, including the resulting publications. Papers are often ghost-written by medical information companies (MICs) to the specifications of drug companies and carefully tailored to the citation requirements of top journals. Prominent academics are paid to put their names on papers they may never even have seen in order to lend academic credibility to corporate-sponsored research. Strategic positioning by the MIC of a whole raft of papers in a careful selection of leading journals can rapidly establish a dominant disciplinary view. Papers promoting this view attract citations, which makes them much sought after by editors and publishers. Inexorably, orthodoxy is established and then maintained by a peer-review system resistant to any change not supported by orthodoxy.
This is extreme gaming by any standards. In 2013, Brian Nosek concluded in the Economist that “there is no cost to getting things wrong. The cost is not getting them published.” True enough — from the point of view of those who game the system. But there is another and much greater cost, and this is to scholarship. There is a high public cost to getting bad work published — especially in fields like medicine. Those who have edited the BMJ and the Lancet are as one in declaring that a large proportion of papers published in medical journals are disgraceful, and that a major task now facing the discipline is erasing the rubbish from the record.
Changing the System
Medicine warns of a fate that could be awaiting other academic disciplines. Extreme gaming in medical journals is driven by the self-interest of just about all participants and their neglect of the public interest. This perversion of scholarship would be made more difficult by distinguishing between the ethical considerations that guide the production of scholarship for public benefit, and the incentives that offer so much private profit from academic publication.
A clear example of the confusion between public and private costs of academic publication is evident in the approach to plagiarism in all disciplines in higher education. Plagiarism is considered wrong, not primarily because it is an affront to scholarship but because it makes student assessment difficult. Thus, Turnitin.com, a company that claims copyright over every essay it checks, is seen as a solution to plagiarism because the problem itself is denied an ethical dimension. Likewise, the notion of scholars standing on the shoulders of giants has become unfamiliar: Students and academics alike are driven to seek personal benefit from their efforts, not to contribute to knowledge. In other industries, regulation is often required to limit antisocial behavior and reduce costs to the public. Self-regulation of the academic publishing industry has simply bent the rules, and independent regulation now seems warranted.
A start would be an admission that the metrics used to measure academic performance do no such thing: in small part because the metrics have become the end rather than a measure of performance, and in large part because they are universally gamed. Amateur gaming is inexorably being replaced by the sophisticated gaming that has reached such extreme levels in medicine. When no one knows who has written a paper, except that it is unlikely to have been any of the paper’s listed authors, this is extreme gaming — as it is when authorship slots are openly bought and sold, or when publishers sell authors such MIC functions as tailoring papers to fit the requirements of their own journals and of the market beyond.

The public good is no longer served by castigating essay mills and predatory publishers when very similar services are provided by “legitimate” academic publishers. The same legitimate publishers are now relieving journal editors of responsibility for peer review, ensuring that the stamp of respectability extends even to the extreme gaming that makes their business so indecently profitable. The more a paper is cited, the higher the impact factor of the journal in which it appears; the higher the impact factor, the more the publisher can charge to publish a paper — which costs the publisher almost nothing to produce. The Devil himself could hardly have devised a more self-serving business strategy.
Peer review — its performance for too long excused by the democracy defense (that everything else is worse) — is in tatters. It venerates the orthodox and does little more than sift the most citable wheat from the less citable chaff. But even in its decrepitude, peer review is paraded by the establishment of academic publishing — its publishers, its editors, even its nonwriting authors — as the safeguard of scholarship. The camouflage of scholarship serves to hide the corruption of those who never fail to refer to one another as scholars.
A morass of high-scoring dross resulting from gaming is unlikely to be unique to medicine and probably threatens scholarship in other disciplines. Purging much more ruthless than a few retractions is required: purging of the excrescence of the publish-or-perish age and its performance metrics, purging that tears aside the cloak of scholarship that hides the diversion of public property to private purpose. An independent regulator might just succeed where academic self-regulation has patently failed.
For much more on the extent and consequences of gaming in academic research, the full paper is available here (though behind the publisher’s paywall).