University faculty and administrators make considerable noise about “interdisciplinary” work. I have never met anyone who said that she opposes interdisciplinary scholarship in principle, and plaudits generally await those who cross-pollinate their research with work in other fields. In practice, however, crossing disciplinary boundaries too often amounts to invasion rather than collaboration: we use or critique other disciplines to our own ends. Seldom do we meet practitioners in other fields as equals. The god-term “interdisciplinarity” serves as a cloak for provincialism. Indeed, for all the gestures toward cross-disciplinary projects, universities continue to fetishize specialization.
In his most recent book, Range: Why Generalists Triumph in a Specialized World, David Epstein explains lucidly and with voluminous research why generalists thrive in an era of increasing specialization. Epstein, a former Sports Illustrated writer, starts his book with deceptively simple stories about golf and tennis. Tiger Woods played golf on national television at age two, impressing (among others) Bob Hope; he won a ten-and-under tournament later the same year. For his part, Roger Federer “dabbled” in a dozen different sports, cultivating his “athleticism and hand-eye coordination” in diverse arenas before concentrating on tennis, while many of his friends focused on tennis much earlier (3). Despite the different paths they took, Woods and Federer both conquered their fields. It turns out that golf, chess, and even, counterintuitively, firefighting provide “kind learning environments”: patterns emerge in these domains that single-minded dedication helps to master. Most sports, however, and indeed most skills, are nothing like golf and chess. The dogged pursuit of a single enterprise, the 10,000-hour rule, works only in a thin band of the skills spectrum. Learning environments in the real world tend to be “wicked,” so narrow specialization does not lead to mastery and can even lead us astray. It is not encouraging that the best time to be admitted to the ER with a heart condition is while the nation’s cardiologists are away at a convention in another city (12; 266).
Specialization and the Job Market
In a college setting, the rationale for early specialization hinges on jobs. Especially in the wake of the Great Recession of 2008, families have been reluctant to spend tens of thousands of dollars on a university degree unless students attain marketable skills. Parents don’t want their children living in their basement five years after graduation. Yet, as a solution to the problem of employment, specialization is misconceived. As automation threatens jobs, specialist skills will prove particularly vulnerable. Intellectual labor is no more immune to automation than manual labor. Take the case of software engineers. Silicon Valley outsourced many such jobs to India, propelling the expansion of the Indian middle class. Recently, however, IT companies in India have laid off software engineers in droves, as these firms increasingly need those who specialize in more advanced technologies such as “artificial intelligence, cloud, big data analytics, [and] robotic process automation.” The twist, however, is that AI can now program software, and it is conceivable that AI will ultimately design software as well. Engineers working on artificial intelligence may be creating their own digital replacements.
Even if the current forecasts of a “robot revolution” are overdrawn, a nightmare vision fueled by science fiction, intelligent machines have improved their performance on specialized tasks at an extraordinary rate. The good news is that while AI has proven adept at relatively narrow skills, it does not (yet) have nearly the range of human intelligence. Epstein provides a telling example. Even in a kind environment like chess, amateur players with ordinary computers at their disposal have soundly beaten grandmasters as well as advanced chess software. Computers outperform humans on a tactical level, but humans outstrip most chess programs in long-term strategy (22-23). Computer science majors will be relieved to learn that programming too benefits when humans and computers join forces. Such “centaur-like” skills will be in demand for the foreseeable future.
To worried parents: in one study that Epstein cites, researchers concluded that while specialists did earn a somewhat better salary than generalists immediately after college, “later specializers made up for the head start by finding work that better fit their skills and personalities” (9). The cult of the now, nurtured by social media, has made us all—young and old alike—impatient and purblind. There are, however, encouraging signs: even a behemoth like the United States military is learning to adapt, replacing a rigid system of early specialization for West Point graduates with a wider range of career choices (137-40). The U.S. Army slowly recognized that college students don’t always know what they want to do for the rest of their lives. Academic institutions should strive to be at least as flexible as the U.S. military.
Innovation, Science, and the Humanities
James Flynn, renowned for his discovery of the “Flynn effect,” in which IQ scores increase from generation to generation, laments the insularity of most academic majors. Flynn designed a test for critical thinking and gave it to students in various disciplines. Most students proved unable to apply critical thinking skills outside their own narrowly delimited field. Economics majors fared the best; students in business, biology, psychology, English, and neuroscience floundered when they encountered questions outside their bailiwick. As most students end up in jobs not directly related to their majors, such limitations pose a serious problem (47-51).
Innovation usually flows from a confluence of disparate knowledge banks. Take Claude Shannon, the electrical engineer who helped to usher in the Information Age. To fulfill an undergraduate area requirement, Shannon took a philosophy course in which he learned Boolean logic; during a summer internship at AT&T Bell Labs, “he recognized that he could combine telephone routing technology with Boole’s logic system to encode and transmit any type of information electronically. It was the fundamental insight on which computers rely. ‘It just happened that no one else was familiar with both those fields at the same time,’ Shannon said” (33-34). If Shannon had not taken a philosophy course in college, the history of communications would have had a different trajectory. As faculty and administrators redesign curricula to equip students with practical skills, they should keep Shannon’s example firmly in mind; as Epstein documents in detail, it is scarcely unique. Unfortunately, much university scholarship now suffers from the same blinkered outlook as vocational pedagogy. As Epstein observes, “[t]here is a growth industry of conferences that invite only scientists who work on a single specific microorganism” (278). Even broader research areas break down into cells.
I recently attended a conference in Edinburgh, Scotland, on the Enlightenment. In a roundtable on systems of knowledge, one half of the room talked past the other even though most in attendance had valuable insights.
When a scholar who defended the idea of objective knowledge systems raised the notion of “statistical insignificance,” others, working within a deconstructive framework, pounced on the term as evidence that systems marginalize minority populations. Systems do marginalize minority groups: algorithms, for instance, frequently discriminate against minorities. However, “statistical insignificance” or, more properly, “statistical non-significance,” has nothing to do with marginalization. That highly intelligent, well-educated academics did not understand even the rudiments of statistical significance is appalling. (Statistical significance is, to put it mildly, not without problems in the scientific literature: p-hacking has indeed contributed to the replication crisis, but that was not the issue here.)
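The point is worth making concrete. As an aside not drawn from Epstein or the conference, here is a minimal Python sketch (with hypothetical data) of a permutation test, one standard way to compute a p-value. A “non-significant” result means only that the observed difference between two groups is consistent with chance; it carries no claim about the social standing of either group.

```python
import random
from statistics import mean

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    Returns a p-value: the fraction of random relabelings of the
    pooled data whose mean difference is at least as large as the
    observed one.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_iter

# Two hypothetical groups measured on the same scale:
group_a = [5.1, 4.9, 5.0, 5.2, 4.8]
group_b = [5.0, 5.3, 4.7, 5.1, 5.0]

p = permutation_test(group_a, group_b)
# A p-value above the conventional 0.05 threshold is "statistically
# non-significant": the data give no reason to reject pure chance.
# It says nothing about whether either group is marginalized.
print(f"p = {p:.3f}")
```

The groups here differ by a trivial 0.02 in their means, so the test (rightly) reports a large p-value: the “insignificance” is a statement about evidence, not about importance.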
On the other side of the coin, the scholar who defended knowledge systems referred to information realism in physics and the computational theory of the mind as if they were capital-T Truths that enjoyed wide scientific acceptance. Information realism, however, is highly controversial, and many neuroscientists, who have finally begun to engage with philosophy, reject the computational model of the mind. I assume that this scholar could blithely make such scientific pronouncements because he did not expect to be contradicted in a room full of humanists. C. P. Snow was half-right: we in the humanities can learn a great deal from scientists, but scientists can also learn from their colleagues in the humanities. This will not happen if bodies of knowledge remain stuck in a “system of parallel trenches” (13).
The latest installment of the Snow–Leavis debate is the fracas that broke out a few years ago between Steven Pinker and Leon Wieseltier in the pages of the New Republic. Predictably but tellingly, Wieseltier cast Pinker in the role of scientific colonialist. From his beleaguered island in the humanities, Wieseltier rebelled against the scientific metropole, insisting, in the final segment of their debate, that he is “ardently supportive of borders and fences.” And yet Wieseltier, despite his rhetorical excesses, was not dueling a straw man: Pinker did at times sound imperialistic. Pinker emphasized that the humanities would benefit from their brush with the sciences; he came to the conversation to teach, not to learn.
I hesitate to propose that academia learn something from the private sector, as it has learned certain market principles only too well, but Epstein supplies illuminating examples of how private industries solve problems by reaching outside their areas of specialization. Central to Epstein’s story is Alph Bingham, who holds a doctorate in organic chemistry. In 2001, Bingham started a company called “InnoCentive,” which serves as a clearinghouse for intractable problems: specialists stumped by wicked puzzles use the company to crowdsource solutions. Bingham discovered that the key to finding fresh answers to persistent questions was attracting “a diverse array of solvers” who could deploy “outside-in thinking” (173). Thus, InnoCentive recruited a chemist based in Illinois, John Davies, to help with the cleanup of the catastrophic Exxon Valdez oil spill on Alaska’s coast. Counterintuitively, Davies abandoned chemistry to solve the problem, relying instead on his earlier career in construction. In 2009, NASA scientists turned to InnoCentive when they ran into difficulties predicting solar particle storms. “Within six months, Bruce Cragin, an engineer retired from Sprint Nextel and living in rural New Hampshire, solved the challenge using radio waves picked up by telescopes” (175). Other organizations have cropped up on the InnoCentive model, including Kaggle, which devotes itself to problems in machine learning. While some academic disciplines exploit the potential of networked information, others remain complacently siloed.
The range of subjects that Epstein takes up is striking: video games, the figlie del coro, the Roma virtuoso Django Reinhardt, the Japanese method of collective problem solving called bansho, “desirable difficulties” in an educational context, Kepler’s gift for analogies, space shuttle disasters, Netflix’s recommendation algorithms, and van Gogh’s multifarious pursuits are all grist for his argument. Epstein convincingly demonstrates the importance of “undiscovered public knowledge,” to use Don Swanson’s term (180). If the book has a flaw, it is a Dianetics-like breathlessness about the virtues of range. For instance, do Nobel laureates really owe their success to a range of skills and hobbies, as the author suggests (32-33), or are such people more broadly gifted to begin with? Epstein notes that surgeons get better with monofocal commitment, but a few more examples of the merits of specialization would have made the portrait that he draws rounder and more compelling. To his credit, he acknowledges in the book’s second half that generalists need specialists to flourish; the choice between the two is thus a false one. Nonetheless, we in academia need to design curricula in ways that foster the development of both. At present, the balance tilts in favor of specialists, creating “intellectual archipelagoes” that hinder progress in all fields of inquiry (278-82).
Arguably, the most important goal of education is to form good citizens. How can we expect people to make informed decisions in the voting booth if they know, at most, one thing well? Beyond the four C’s (critical thinking, communication, collaboration, and creativity), an educated citizen should have some understanding of history, literature, evolution, global warming, basic economics, big data, the political system, the impact of new media on society, and at least one foreign language. In sum, good citizenship requires range. Perhaps “generalist” is the wrong term, as it conjures an old-fashioned philosophy of learning. I propose “academic cosmopolitanism” or “disciplinary cosmopolitanism” in its stead. The merits of such cosmopolitanism, though often dismissed in their generalist guise as rarefied, elusive, or naïve, make a powerful argument for a broad liberal arts education.
Read the book: David Epstein (2019). Range: Why Generalists Triumph in a Specialized World. New York, NY: Riverhead Books.