November 16, 2021
Steven Zhou
Campus Policy, Research & Publishing, Academic Careers

Failure Isn’t Really Failure: What Academia Can Learn From Start-Up Culture

Last month, Heterodox Academy explored various barriers to knowledge production, open inquiry, and viewpoint diversity. One perspective, offered by research scientist Dr. Nicole Barbaro, argued that the academic infrastructure itself and the publishing incentives built into the system prevent rapid knowledge production and discourage unpopular findings. Her piece aptly pointed out the many problems with the academic system and how it creates a “publish or perish” culture that discourages knowledge production and instead encourages unethical research practices.

How do we change this deeply ingrained traditional system of contingent research funding, tenure based on publications, and overreliance on top-tier journals when evaluating quality? Barbaro suggested that it can start from the bottom up, with individuals changing their own behavior to “cultivate new systems.”

But at the end of the day, the people at the top who make policy-level decisions on how tenure or research grants are awarded may still primarily value the number of publications in top-tier journals. Until this changes, early-career faculty will find it very difficult to obtain tenure and reach the level of influence required to make widespread changes in the system. Those who refuse to “play the game” are more likely to end up in nonacademic career tracks. This is not to say that nonacademic career tracks are less valuable — in some ways, they might contribute even more to knowledge production — but it acknowledges the inherent difficulty of changing how tenure is awarded when one is not in a position to effect changes in hiring and promotion policies.

As Heterodox Academy moves into its new theme on overcoming these barriers to knowledge production, I’d like to propose one way academia can address the barrier of publishing incentives built into the academic system. Drawing from my experience working in a start-up, I believe that we in academia — from graduate students to early-career scholars to late-career tenured faculty — can learn to adopt an entrepreneurial, start-up culture and attitude toward research.

Embracing a start-up and entrepreneurial culture

I realize that this is a bit counterintuitive. Academia is, in many ways, the opposite of a start-up. Higher education is generally slower than industry in responding to events or crises, more bureaucratic, and more risk-averse. Most colleges and universities are nonprofits, and they stand in stark contrast to the for-profit, high-risk, constantly changing world of businesses and start-ups. The goal in academia is to answer important research questions, not to invent a new product or service and make a profit from its sales. Some people in academia might even balk at the idea of treating higher ed like a start-up, fearing it as a step toward privatization.

But as Dr. Ilana Horwitz suggested last summer, successful academics already behave more like start-up entrepreneurs than one might think. The knowledge being produced in a research lab is the product or service, and it’s “sold” to customers made up of other faculty, staff, students, and practitioners. We cannot let it sit idle: we share it, promote it, and put it out into the world in the hope that others will trade something of value for it.

Of course, academics aren’t paid directly for their research articles — many actually have to front the money to submit or publish in the first place — but the currency in academia is citations. A successful research article brings in citations from future researchers, and those citations are what academics like me compete for when marketing, advocating for, and networking to share our research. The number of citations, just like the number of publications, goes directly toward a professor’s evaluation for tenure and promotion.

Unfortunately, the skills to market our ideas are not emphasized or taught enough in academia, especially in graduate student training. The focus is usually on advanced writing and analytics, not on social media, networking, and sales. Students aren’t getting the training needed to engage with a small but active crowd on “academic Twitter” and similar spheres, where research can gain a wider audience and generate more collaborators, more citations, and more future research opportunities.

But the disconnects between how we’re trained to succeed and how the system works run even deeper: Just like the rest of the academic bureaucracy, researchers are often trained to work slowly and steadily on large-scale, long-term research projects that can take years to complete (let alone publish). Especially given the rapid advances in technology and methodology, by the time a research project reaches completion, it’s most likely already outdated. Start-ups can’t afford such delays. Even a few months’ wait can mean the difference between capturing sufficient market share and collapsing.

So this is my first pitch: Academics should train, and be trained, to think more like entrepreneurs and drive an entrepreneurial culture within higher education, characterized by speed and efficiency, a willingness to take risks, and a focus on attracting more “customers” (i.e., consumers of the research).

Research as a “pebble in the pond”

At this point, academics might respond by saying, “Yes, but doesn’t that careful and lengthy process ensure the quality of our results?”

To answer that question, I’ll pose a different one: Why can start-ups afford to prioritize speed and take risks? After all, 90% of start-ups fail, so why take such a huge risk, and why not slow down to make sure you will succeed before launching?

There’s a mantra in start-ups that failure isn’t really failure — it’s just a mistake that you learn from before trying again. A start-up that collapses, a new product that doesn’t take off, or a service that ends up being unhelpful isn’t a failure — each one exposes an opportunity for yet another new start-up, product, or service.

But in academia, we have a years-long publishing process, with reviewers picking apart every little detail of a study, all designed to ensure the utmost quality. I argue that this process betrays an underlying belief that a published paper should never “fail”. Every new article feels as if it must deliver the final truth about a topic, proof of a groundbreaking new theory, or a life-changing discovery. It doesn’t help that popular media often cites scientific research as if it were the unquestionable truth on a subject, sometimes even ignoring or misstating the flaws and limitations of the study.

But as quantitative methods researchers, my peers and I are all too aware of the abundance of “researcher degrees of freedom” in the process of conducting and analyzing a study. Some are innocent and some more malicious, but all of these subtle subjective decisions that researchers can make eventually lead to potential errors and flaws in any research study, even the top ones. In other words, no study is perfect, yet we’re trained to make them appear as flawless as possible.
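To make that concrete, here is a rough simulation sketch of my own (the specific analytic choices are hypothetical stand-ins, not drawn from any real study): even when the true effect is exactly zero, running a few defensible-looking analysis variants and reporting whichever one “works” pushes the false-positive rate noticeably above the nominal 5%.

```python
# Hypothetical illustration of "researcher degrees of freedom" (not from any real study):
# analyze null data several plausible ways and report whichever result is significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group = 2000, 40
hits_fixed, hits_flexible = 0, 0

for _ in range(n_sims):
    # Two groups drawn from the same distribution: the true effect is zero.
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)

    # One pre-specified analysis versus several "defensible" variants:
    # trim extreme values, drop the last ten cases, or transform the outcome.
    variants = [
        stats.ttest_ind(a, b).pvalue,
        stats.ttest_ind(a[np.abs(a) < 2], b[np.abs(b) < 2]).pvalue,
        stats.ttest_ind(a[:-10], b[:-10]).pvalue,
        stats.ttest_ind(np.sign(a) * np.sqrt(np.abs(a)),
                        np.sign(b) * np.sqrt(np.abs(b))).pvalue,
    ]
    hits_fixed += variants[0] < 0.05       # stick with the first analysis
    hits_flexible += min(variants) < 0.05  # report whichever looks "best"

print(f"False-positive rate, one fixed analysis: {hits_fixed / n_sims:.3f}")
print(f"False-positive rate, flexible analysis:  {hits_flexible / n_sims:.3f}")
```

None of the variants is obviously wrong on its own; the trouble is choosing among them after seeing the data. That is the sense in which researcher degrees of freedom can quietly undermine even a polished-looking study.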

Instead of putting so much weight on perfecting a single study, why can’t we treat each study more lightly, the way a start-up treats a new product? The study doesn’t need to be perfect, and “failure” of the study (for example, if it fails to hold up to future scrutiny) doesn’t need to be career-ending. Each study is an attempt at figuring out the answer to a particular research question. Every attempt is flawed, but the collection of flawed studies over time will slowly point toward where the answer truly is. Even in non-STEM disciplines, single essays too often bear the weight of redefining an entire corpus, instead of each publication being seen as a flawed attempt to add a new interpretation to a particular text.
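As a back-of-the-envelope illustration of that claim (again my own sketch, with invented numbers rather than results from any real literature), imagine dozens of small studies, each noisy and each carrying its own idiosyncratic bias. No single one lands on the true effect, but a simple precision-weighted average across them comes close:

```python
# Hypothetical "pebble in the pond" sketch: many small, imperfect studies,
# pooled with a simple precision-weighted (fixed-effect) average.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.30                      # the answer no single study will pin down
n_studies, n_per_study = 60, 50

estimates, variances = [], []
for _ in range(n_studies):
    # Each study is small and carries its own quirky bias (design flaws, sampling luck).
    study_bias = rng.normal(0.0, 0.10)
    data = rng.normal(true_effect + study_bias, 1.0, size=n_per_study)
    estimates.append(data.mean())
    variances.append(data.var(ddof=1) / n_per_study)

estimates = np.array(estimates)
weights = 1.0 / np.array(variances)     # weight each study by its precision
pooled = np.sum(weights * estimates) / np.sum(weights)

print(f"Single studies range from {estimates.min():.2f} to {estimates.max():.2f}")
print(f"Pooled estimate across {n_studies} studies: {pooled:.2f} (truth: {true_effect})")
```

Fixed-effect pooling is the crudest possible aggregator (a real meta-analysis would model between-study heterogeneity more carefully), but even this sketch shows the point: the answer emerges from an accumulation of imperfect pebbles, not from any single perfect one.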

To be clear, I am not advocating for abandoning quality controls. And some fields of study are certainly riskier than others: I would not want a research study on a new medical drug to be published without a high degree of certainty in its results.

But especially in my field, the social sciences, we shouldn’t expect a single research study, even if it sampled hundreds of thousands of people across dozens of countries, to definitively prove how interpersonal conflict emerges and can be resolved. And in that sense, the researchers (and journal editors!) shouldn’t feel that this one study must be so airtight that it invites few, if any, criticisms.

In some ways, it’s like the difference between writing an op-ed and a research study. An op-ed offers an opinion, and even though it should be thoroughly researched and defended, it is ultimately still an opinion that is open to debate. Most research studies should be thought of in a similar fashion: They are ultimately still evidence-based “opinions” of a research team, incrementally contributing evidence toward a potential solution or answer, but still open to ample debate and potential rebuttal studies.

Moreover, by viewing research as incremental, we might be able to turn the tide against the perverse publication incentives discussed earlier. The pressure to publish as much as possible, and the high barriers set by many journals and editors, incentivize researchers to do whatever it takes — sometimes even going so far as to fabricate data — to present their research as close to perfect and irrefutable. Another side effect is the “file drawer problem”: The majority of research ends up unpublished because it is just not quite “good enough” or doesn’t have interesting results.

In fact, ironically, these same publishing incentives are creating corners of the system with no quality controls at all. Catering to researchers who will do anything to increase their publication count, some journals now let authors pay to publish without any formal review process — and most readers, even academics who aren’t specialists in that area, don’t know the difference. Even worse, I’ve seen websites selling authorship: Researchers can pay to have their name added to an article submitted to a good journal.

We need a good in-between: Maintain quality control, but do not expect a single research study to be perfect and irrefutable. We can lower the bar to publish — responsibly, where feasible across the disciplines — by embracing a culture that sees each piece of research as incremental, a “pebble in the pond” (to use a term from the Quantitude podcast) that adds cumulative evidence to a research topic. Doing so could disincentivize perverse research practices driven by the need to publish more, while increasing the opportunity for knowledge production.

The answer to a research question or philosophical query is never found in a single study. It’s found only after dozens or hundreds of studies and essays, some even contradicting one another, add up to produce a holistic picture. There are some attempts to take this approach: metaBUS, for example, collects and synthesizes over 1 million (and growing) research studies in the social sciences so that users can draw conclusions from a collection of studies on a given topic, as opposed to single studies. We need to give up the insane incentive structure that drives every scholar to try to change the world in 10,000 words.

Perfectionism is the enemy

When I was working in a start-up, one of the most important lessons I learned was an 80% rule: Once a system or plan is 80% ready, go ahead and launch it; you can fix the remaining 20% later.

Such a philosophy doesn’t exist in academia. Current publishing practice sends a paper through half a dozen revisions and nearly five years of work, uncovering every little flaw to get it to “100%” and ready for publication.

Of course, the 80% rule has its flaws — some things need to be thoroughly vetted before launching — but the principle that perfectionism is the enemy holds true. As academics, we need to learn from the anti-perfectionist culture of start-ups and entrepreneurialism. With this culture shift, articles wouldn’t be put through a highly inefficient and difficult review process only to be rejected over a minor issue that might limit the study’s generalizability.

Doing so would enable all of us, early-career scholars as well as late-career editors and administrators, to pursue incremental research swiftly and with a willingness to take some risks.

Moreover, I think this focus on a culture shift — as opposed to refusing to “play the academic game” — might be better suited to early-career researchers working toward tenure. Changing the rules of the game (i.e., the publication requirements for tenure) doesn’t work unless the people in charge of making tenure decisions allow the rules to change. A culture shift, by contrast, can start with those of us just beginning our academic careers, because I’m not advocating for less focus on research and publishing. If anything, taking an entrepreneurial approach to research will probably lead to more output and knowledge production (isn’t that the goal?).

Roughly 4 million new start-ups are founded in the U.S. each year. Meanwhile, about 2 million new research articles are published worldwide each year across some 30,000 journals, but fewer than 30% of those journals are considered top tier or high quality.

To drive forward knowledge production, we need more research published at the top-tier level and less in lower-quality journals. And to remove barriers to publishing in the top journals, perhaps we need to adopt more of an entrepreneurial mentality: put ideas out into the wild, generate interest in them, and see if they survive the test of time.
