How Multiverse Analysis Adds to the Benefits of Viewpoint Diversity
For decades, academics have lived with a quiet paradox. On one hand, modern datasets are rich and complex, comprising variables that can be measured, transformed, combined, or excluded in many reasonable ways. On the other hand, the typical academic paper presents a single, polished set of results, as if the analysis followed one best (if not inevitable) path from data to conclusion, with alternative specifications consigned to seldom-read appendices. The gap between those two realities has always existed, but only recently have we gained the tools to confront it directly.
Thanks to dramatic advances in computing power, researchers can now run not just one model, but thousands (or even millions!) of models on the same dataset. Each model reflects a plausible analytical choice: different operationalizations, different control variables, different inclusion criteria, different statistical specifications, etc.
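To make the idea concrete, here is a minimal, purely illustrative sketch in Python (assuming pandas and statsmodels are available): it simulates a hypothetical dataset, enumerates a small grid of defensible analytic choices, and fits one regression per combination. Every variable name and choice below is invented for illustration; this is not any particular published pipeline.

```python
# A minimal multiverse sketch on hypothetical data: enumerate a grid of
# analytic choices and fit one model per combination of choices.
import itertools

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "outcome": rng.normal(size=n),
    "exposure_raw": rng.normal(size=n),
    "age": rng.integers(18, 80, size=n),
    "income": rng.lognormal(mean=10, sigma=1, size=n),
})

# Analytic choices a researcher might defensibly make (all hypothetical).
operationalizations = {
    "raw": "exposure_raw",
    "binary": "I(exposure_raw > 0)",   # dichotomize the exposure at zero
}
control_sets = {
    "none": [],
    "age": ["age"],
    "age_income": ["age", "np.log(income)"],
}
inclusion_rules = {
    "all": lambda d: d,
    "under_65": lambda d: d[d["age"] < 65],
}

results = []
for (op_name, op), (ctrl_name, ctrls), (incl_name, incl) in itertools.product(
    operationalizations.items(), control_sets.items(), inclusion_rules.items()
):
    sample = incl(df)
    formula = "outcome ~ " + " + ".join([op] + ctrls)
    fit = smf.ols(formula, data=sample).fit()
    # Pick out the exposure coefficient, whichever operationalization was used.
    term = [t for t in fit.params.index if "exposure_raw" in t][0]
    results.append({
        "operationalization": op_name,
        "controls": ctrl_name,
        "inclusion": incl_name,
        "estimate": fit.params[term],
        "p_value": fit.pvalues[term],
    })

multiverse = pd.DataFrame(results)
print(multiverse.sort_values("estimate"))
```

Even this toy grid yields 12 specifications; with realistic sets of operationalizations, controls, exclusions, and estimators, the count quickly reaches thousands.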
Many pathologies of modern academic publishing stem from the structural incentives of publish-or-perish culture, which rewards novelty and statistical significance and thereby encourages p-hacking, selective reporting, and underpowered study designs that inflate the rate of false positives in the literature. At the same time, confirmation bias operates at the individual level: researchers who have built careers around particular theories or frameworks tend to design studies, interpret evidence, and select publication outlets in ways that reinforce their preferred ideologies and outcomes.
Instead of hiding the forks in the road that researchers chose, multiverse analysis lays them out in full view and discourages researchers from presenting only their preferred results. This methodological shift has important implications for how we think about disagreement, consensus, and viewpoint diversity in academia.
A recent book by Cristobal Young and Erin Cumberworth, Multiverse Analysis, makes the case persuasively. Their central message is simple but unsettling: most research findings are conditional on a web of choices made by scholars that are rarely visible to readers. Multiverse analyses shed light on these varying pathways. The goal of multiverse analysis is not necessarily to crown a single “true” model or result, but to understand the entire spectrum of potential outcomes across the full landscape of reasonable analyses.
This idea connects directly to Andrew Gelman’s famous metaphor of the “garden of forking paths.” Every empirical project involves many small decisions: how to code variables, which observations to exclude, how to handle missing data, where to draw category boundaries. None of these decisions is necessarily wrong or deceptive. But taken together, they create a vast decision tree. Walk down one path and you may find a strong effect; take another and the effect weakens or disappears. If researchers only report the final path they chose, readers never see “the roads not taken.” Multiverse analysis allows us to map the entire garden of alternatives and possibilities.
One of the most vivid demonstrations of this approach comes from a highly cited article, “Increasing Transparency Through a Multiverse Analysis.” The authors examined the effect of fertility on a range of religious and political attitudes, but systematically varied analytic decisions that researchers often treat as trivial, such as the operational definition of high versus low fertility. These variations yielded a broad distribution of outcomes. Sometimes the estimates of the fertility effect clustered together, suggesting that the analytical decisions mattered little. Other times the estimates diverged substantially, with only a small percentage of analytical choices producing a significant result. In a conventional academic article, reported results are usually confined to a limited set of models and specifications. A multiverse analysis instead reveals the full breadth of potential outcomes and perspectives.
Seen through a multiverse lens, academic disagreement starts to look less mysterious. If researchers can arrive at different conclusions by making different, defensible choices, then sharp debates are not necessarily signs of incompetence or conscious bias. Such variability in findings, even when researchers use the same dataset, can be a natural and even healthy result of starting from different analytical or theoretical assumptions and navigating different regions of the same analytical multiverse.
Historically, academia had little room to acknowledge this. Journal page limits, slow computation, and professional norms all encouraged researchers to present a single “best” model, a habit that has also contributed to an unreasonably high proportion of “positive” results in the academic literature. Alternative specifications were relegated to footnotes, or never mentioned at all. Readers were expected to trust that reported results were representative of the broader analytical space. Deferring to famous articles or scholars who established a well-worn path is still often a wise strategy for scholars hoping to preempt criticism from readers and peer reviewers, but such dynamics narrow the breadth of inquiry.
Instead of asking readers to trust conclusions based on limited evidence, multiverse perspectives invite them to reflect. Which findings hold across hundreds of models? Which appear only under narrow conditions? Where is there overwhelming agreement, and where is the evidence genuinely mixed? In a multiverse framework, outlier views are not automatically dismissed. Holding a minority position comes with the opportunity to explain why one subset of models should be privileged over the many others that point elsewhere.
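Continuing the hypothetical sketch above, those questions translate into a few lines of summary code over the table of estimates, assuming the illustrative multiverse DataFrame built earlier.

```python
# Summarize the toy multiverse: how robust is the "finding" across choices?
share_sig = (multiverse["p_value"] < 0.05).mean()
print(f"Specifications with p < 0.05: {share_sig:.0%}")
print(f"Estimates range from {multiverse['estimate'].min():.3f} "
      f"to {multiverse['estimate'].max():.3f}")

# Does "significance" hinge on one particular analytic choice?
for choice in ["operationalization", "controls", "inclusion"]:
    print(multiverse.groupby(choice)["p_value"].apply(lambda p: (p < 0.05).mean()))
```

Because the toy data are pure noise, few specifications should reach significance; with real data, the same summaries show whether a finding holds across the multiverse or depends on one narrow choice. In practice, estimates from different operationalizations would also be standardized before being compared on a single scale.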
Crucially, multiverse analysis highlights how ideological and academic disagreements may sometimes trace back to surprisingly mundane philosophical or methodological choices. Differences in database construction, variable definitions, or sample restrictions can quietly steer results in diverging directions. Making those choices transparent clarifies what we are actually disagreeing about and the analytical decisions that underpin the disagreement.
Multiverse analysis also underscores the importance of intellectual humility, nudging us away from treating scientific findings as single point estimates. Instead, results can be seen as ranges: spectra of possible answers conditioned on varying but reasonable assumptions. That view is more complicated than a clean headline result, but it can be far more honest and accurate. Disagreement is not a failure of science. When made visible, structured, and transparent, it is one of science’s greatest strengths.