September 7, 2018
Musa al-Gharbi

Vox’s Consistent Errors on Campus Speech, Continued

Let me start by saying that, in some respects, this is a strange debate between Beauchamp, Yglesias, and me:

In the highly-polarized political environment in which we find ourselves, it seems to be a standard assumption that if someone is criticizing one position, it must be because they personally hold the opposite view themselves. For instance, if I am criticizing Beauchamp and Yglesias’ essays “proving” there is no speech crisis, it must be because I believe there is one, right?

Yet I close my recent essay, “Vox’s Consistent Errors on Campus Speech, Explained,” as follows:

“Beauchamp and Yglesias’ essays don’t provide meaningful evidence (let alone knock down proof) that there is no free speech crisis at universities. But of course, this does not entail that there actually is such a crisis. In order to even have that debate, there has to be clarity on what is meant by ‘crisis’ to begin with: a crisis in what sense? For whom? In virtue of what? It often seems as though people are talking past one-another on this point.”

Beauchamp and Yglesias insist that the burden of proof is on those who declare there is a crisis. I happen to share this conviction.

San Diego State University psychologist Jean Twenge, NYU social psychologist Jonathan Haidt, HxA Research Director Sean Stevens, FIRE President Greg Lukianoff, sociologists Bradley Campbell and Jason Manning, and others have responded to this challenge by offering compelling – albeit preliminary – evidence that there is a significant normative shift underway among contemporary young people with regard to free expression and other issues (here, here, here, here, here, here, here).

However, in my personal view, more (and different) evidence needs to be marshalled by proponents in order for their case to be fully persuasive. And further research is being done — both within Heterodox Academy and beyond. In the meantime, my position, as I stated in the initial piece, is that the jury is still out on the extent of any normative change – but it is probably unhelpful to refer to it as a “crisis” in any case.

In short, I have no issue with Yglesias and Beauchamp’s skepticism regarding the campus free speech “crisis.” The problem I have is with the specific evidence they attempted to deploy to “prove” there is no crisis.

Specifically, I argued that Yglesias failed to control for straightforward confounds in his analysis of the GSS data – and when these are controlled for, it seems like contemporary students may actually be less tolerant of those they disagree with than previous cohorts. But don’t just take my word for it: political scientists April Kelly-Woessner (Elizabethtown College) and John Sides (George Washington University) have also published essays underscoring this point using the same GSS data that Yglesias relied on.

But I also cautioned that, across the board, the General Social Survey provides (at best) weak evidence with regard to this dispute. Why? Because the GSS has such a small sample of college students in any given year that it would be irresponsible to generalize much from it. In 2016, for instance, it included roughly 32 enrolled students who fell within the “iGen” age group (the cohort that Twenge et al. have argued holds different values on free speech — the very position Yglesias takes himself to be refuting).

Obviously, one cannot make sound claims about the millions of iGen students and their values on the basis of surveying a few dozen of them — but this is the best the GSS can muster right now. In other words, even if Yglesias’ analysis did not suffer from important confounds (which it does), the data he relied on could not rebut, or even meaningfully speak to, the claims of those who argue that there is a major cohort change with iGen students on free speech (e.g. Haidt, Lukianoff, Twenge, Campbell, Manning, Stevens).
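
To put the sample-size worry in rough quantitative terms, here is a minimal sketch of the 95% margin of error for a proportion estimated from about 32 respondents. The n = 32 figure and the simple-random-sample assumption are illustrative approximations (the GSS uses a more complex design), not a reanalysis of the survey itself:

```python
import math

# Back-of-the-envelope check on the GSS subsample size cited above.
# With roughly 32 iGen college students, the 95% margin of error for any
# estimated proportion is about 1.96 * sqrt(p * (1 - p) / n), widest at p = 0.5.
# (Simple random sampling is assumed here for illustration; the GSS's actual
# design is more complex, which would typically widen the interval further.)

n = 32
p = 0.5  # worst case for the margin of error
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error with n = {n}: +/- {margin:.1%}")
# -> roughly +/- 17 percentage points, i.e., far too wide to characterize
#    the attitudes of millions of iGen students.
```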

Yet Yglesias drew a very strong conclusion (“Everything we think about the ‘free speech crisis’ is wrong”) on the basis of this very weak data.

For Beauchamp, I argued that he misrepresented data from all the sources he cited in a recent Vox report. I focused on two sources, which occupied the bulk of his essay: the Free Speech Project database, and a database on faculty firings by Acadia University political scientist Jeffrey Sachs.

In his attempted rebuttal, “The myth of a campus free speech crisis,” Beauchamp flagged that there were actually three sources he ostensibly relied upon: in addition to the Free Speech Project and Sachs’ database on faculty firings, he also cited FIRE’s disinvitation database.

Fair enough.

But ironically, this correction only makes Beauchamp’s problem worse. Allow me to briefly walk through the three sources Beauchamp cited, why his description of their findings was problematic, and why his “rebuttal” fails to resolve any of my core criticisms:

Free Speech Project (FSP)

To review, my core criticisms of Beauchamp vis-à-vis the FSP data:

  1. The FSP data is so preliminary and incomplete that it cannot yet effectively speak to the overall prevalence of these incidents – which is what Beauchamp’s story was about. And so, while the FSP project is fantastic on its own merits, it is inappropriate to try to use this data for the kind of case Beauchamp was trying to make. Dr. Ungar’s essay said absolutely nothing about overall prevalence.
  2. Beauchamp’s portrayal of the “free speech crisis” on the basis of the FSP data seemed to be far out of touch with Dr. Ungar’s own view on the matter on the basis of this same data. This divergence, I argued, was a product of the issues in criticism 1: Dr. Ungar came to a different conclusion, not because of some major ideological difference — but because he had a better understanding of his data (and its limits) than Beauchamp seemed to.

In the attempted Vox rebuttal, the second criticism is validated: It is acknowledged that Dr. Ungar sees a more serious problem than one might have gathered from Beauchamp’s initial essay.

“[Dr. Ungar] is certainly not as much of a skeptic about the free speech ‘crisis’ as I am — he believes that there is a real problem, particularly for university administrators who are terrified of a high-profile incident happening at their campus, and that there is ‘evidence’ that speech is ‘being suppressed’ in certain instances.”

With this established, we can occupy ourselves primarily with point 1. But first, it turns out Beauchamp actually made another error in his FSP discussion, one that I originally missed but which is relevant here. The original opening language of his essay ran:

“Entire books and online magazines are premised on the idea that political correctness is sweeping the American university, threatening both higher education and the broader right to free speech. But a brand new data analysis from Georgetown University’s Free Speech Project suggests that this ‘crisis’ is more than a little overblown.”

Extending basic intellectual charity to Beauchamp, I did not really scrutinize the metadata on the original FSP Medium post to make sure that it really was a “brand new analysis.” Turns out, it wasn’t. The FSP essay that Beauchamp relied on was actually published in March 2018, nearly five months before Beauchamp published his piece (and I wrote my initial rejoinder).

In other words, Beauchamp was not relying on current data from the FSP at the time he composed his essay – and he did not disclose this fact in his essay. This raises a couple of possibilities, both unsettling:

  1. Beauchamp was not aware that he was using out-of-date information because he neglected to look at the publication date on the FSP post before dashing off an essay about it (suggested by the “brand new analysis” verbiage) – and also neglected to go to the actual FSP website to explore the data prior to publication (which would have given him current numbers). This would underscore my point about failing to exercise due diligence. OR
  2. Beauchamp was aware that Dr. Ungar’s essay was published in March, but nonetheless framed Dr. Ungar’s analysis as “brand new” and also declined to report the current data in his own essay – either out of negligence (i.e. he didn’t want to bother with basic research), or because the more current information was less convenient for the point he was arguing. Neither is a good look. And there is a secondary problem if Beauchamp knew that Dr. Ungar’s information was actually from March (and hence likely out of date): why did he fail to disclose this fact – and instead present the analysis as “brand new”?

Beauchamp recently issued a correction acknowledging that the “brand new” analysis he described was actually from March 2018. Yet, in his attempted rebuttal, he makes no mention of this error – instead insisting that his original presentation of the data was completely accurate!

Before we dive into that, notice that both of my core critiques of Beauchamp vis-à-vis the FSP have already been vindicated: Beauchamp has issued a correction indicating one way he misrepresented the FSP data. He has also conceded that his presentation of the (non) threat — on the basis of FSP data — was out of step with Dr. Ungar’s own position that there is a serious problem.

Now let’s drill down a little more: Does this newly identified error by Beauchamp relate to the criticisms I offered in the initial essay? Yes. Here’s how:

I did work through the actual FSP database in formulating my initial essay, and found that there were more than 90 recorded incidents from campuses at the time, out of 137 incidents overall.

If Dr. Ungar’s analysis was “brand new,” yet only covered 90 incidents (60 from campuses) – this would mean he was working with only a subset of the total incidents in his database, itself just a small subset of a much larger pool of incidents “in the world.” This was a fair reading, because Dr. Ungar himself describes the set of incidents in question as a “sample” that was not necessarily representative, etc.

Hence, I assumed that Beauchamp’s error was failing to understand that Dr. Ungar cited just a sample out of the 137 total incidents available in the FSP database as of August 2018 (when Vox covered Ungar’s “brand new” analysis… from March). And while Beauchamp did fail to grasp the nature of the FSP data (or he would never have used it to make bold claims about the overall prevalence of incidents) – in addition to this, he was also working with information that was several months out of date, and (intentionally or not) misled his readers (myself included) on this point. This is apparently why he insisted there were 60 incidents in the FSP data, instead of the 90 he could have easily retrieved through basic research prior to his article’s publication.

Now, it is striking that the number of campus incidents in the FSP database grew from sixty to ninety just between Dr. Ungar’s original Medium post and Beauchamp’s original Vox essay – this amounts to a 50% increase. And as Beauchamp himself noted in his attempted rebuttal, the number has grown further still since my rejoinder (published just a couple weeks ago). I explicitly predicted this would happen, and it underscores the problem with trying to use Dr. Ungar’s data to speak to the overall prevalence of these incidents:

For the sake of argument, let’s run with Beauchamp’s presumption that the number of incidents in the FSP database actually is reflective of the total incidents nationwide. Well, if there were “roughly only 60 incidents in the last two years” as of March… and by mid-August there were suddenly 90 incidents – then it seems as though the total number of campus incidents nationwide over the last two years actually increased by 50%, just between March and August. This would truly be astonishing, and a cause for concern – especially given that classes were not even in session for most of this period!

By the end of 2018 it is likely that there will be well over a hundred campus incidents in the FSP database from the last two years. Of course, this would not mean that the actual prevalence of incidents increased by nearly 100% (or more) since Dr. Ungar published his original essay. Why not? Because neither the cases Dr. Ungar analyzed in March, nor the number of events FSP will ultimately highlight by December, meaningfully speak to the general prevalence of these incidents at all.

In an attempt to defend himself against my critique, Beauchamp apparently reached out to Dr. Ungar himself. Vox readers were provided with basically two statements out of what was presumably a conversation of at least several minutes – and even one of these statements was partially redacted through ellipses. Despite the fact that the purpose of this essay (and its predecessor) is to highlight misrepresentations by Beauchamp regarding Dr. Ungar’s Medium essay, let’s take it on faith that Beauchamp is not conveniently neglecting to share statements from Dr. Ungar that undermine his argument, and that he is faithfully representing the little content from Dr. Ungar that actually made it to the page (redactions notwithstanding).

Beauchamp says he asked directly whether it was appropriate to compare the total number of incidents to the total number of universities (apparently bracketing the fact that the data he was relying on were non-exhaustive, non-representative, and out of date at the time of the Vox publication). Dr. Ungar’s indirect, hedged and polite response was, “I’m not sure I would say you were wrong...”

Now, we can’t know everything Dr. Ungar did say in their conversation, given how little Beauchamp actually included – but we can certainly note some things he apparently did not say:

First, let’s hammer home that he did not say there were “roughly only 60 incidents” between 2016 and March 2018. Hence, Beauchamp’s claim to this effect is simply incorrect.

There were 60 incidents in the FSP database as of March 2018. This is emphatically not the same as saying there were actually “only roughly 60 incidents in the last two years.” A correction is warranted here from Vox. A more accurate version of the relevant sentence could read:

“The fact that the FSP database only showed around 60 incidents (as of March 2018) suggests that free speech crises may be somewhat rare events that don’t define…”

But would the FSP data even suggest this, really? To answer that question, let’s continue exploring statements Dr. Ungar apparently declined to make:

He did not say his original Medium post indicated anything at all about the overall prevalence of incidents; he did not say that it actually was appropriate to draw inferences about overall prevalence from the FSP data; nor did he say al-Gharbi was wrong with regard to the specific criticisms leveled at Beauchamp vis-à-vis the FSP. He basically dodged Beauchamp’s question and then shifted to express “delight” that the Free Speech Project received coverage in Vox.

That sort of response speaks for itself… and it does not send the message Beauchamp seems to hope.

Indeed, despite Dr. Ungar’s diplomatic evasion, the Free Speech Project website is quite explicit that the sort of maneuver Beauchamp attempted in his first essay was inappropriate given how preliminary their data are (emphasis mine):

“…the Tracker is a work in progress and should not be considered a complete listing of every instance in which freedom of speech was tested, challenged, or commented upon… As it grows in size and content, it should become a steadily more useful tool for analysis.”

And so, had Beauchamp actually consulted the FSP website when drafting his essay, not only could he have used current data in his original story, as I did in my rejoinder (rather than content from five months prior) — he could have also avoided making inferences about overall incident prevalence on the basis of this data (such as, “there were only roughly 60 incidents in the last two years”), which the FSP explicitly recommends against.

Sachs’ Database on Faculty Firings

Beauchamp represented Sachs’ findings on faculty firings as follows:

“Sachs’s results, published by the left-libertarian Niskanen Center, actually found that left-wing professors were more likely to be dismissed for their speech than conservative ones.”

Sachs’ data do not show this. He himself flagged that his data were misrepresented, taking to social media to share and praise my essay about Vox.

Sachs and I are not too far apart on most of these issues. Hence it is perplexing, in the attempted rebuttal, when Beauchamp makes statements like:

“For his part, Sachs sticks by the conclusion of his original piece, which is at odds with al-Gharbi’s contention. ‘I do stand by the claim that there is no campus free speech crisis,’ he told me.”

Again, I did not claim there is such a crisis – indeed, in the very essay that Beauchamp is responding to, I explicitly said the evidence of a normative shift is not decisive, and that the “crisis” framing is unhelpful and ill-defined. So it isn’t clear how Sachs is “at odds” with me here.

Moreover, I am in full agreement with Sachs that one should avoid making sweeping claims on the basis of his firings database, given that firings are relatively rare (especially relative to other forms of speech sanction). It was Beauchamp who tried to make strong claims about relative likelihoods vis-à-vis firings (“left-wing professors were more likely to be dismissed,” emphasis his) – I merely pointed out that he misrepresented Sachs’ data, and that when better contextualized, his claim that liberal professors are “more likely” to be fired for political speech than conservatives is unsupported. In fact, the opposite seems to be true.

Again, it was Sachs himself, in his Niskanen essay, who noted that, given the base rates of conservative to liberal professors, it may be the case that conservatives are more likely to be fired, despite the fact that most who are terminated are liberals:

“…the professoriate leans significantly to the left as well, so we should expect left-leaning speech to make up the bulk of terminations. As with the skewed findings of FIRE’s Disinvitation Database, we are not talking about a population where political ideology is uniformly distributed. It is possible for liberals to constitute the majority of faculty terminations and also for conservatives to be terminated at an equal or higher rate.” (Sachs’ own emphasis)

I merely demonstrated Sachs’ own point with the available data on faculty base rates… so, again, it is not clear to me how Sachs and I are supposedly “at odds.”
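
To make the base-rate logic concrete, here is a minimal sketch with purely hypothetical numbers (the faculty and termination counts below are invented for illustration; they are not Sachs’ figures or mine):

```python
# Hypothetical counts, chosen only to illustrate the base-rate point above.
liberal_faculty = 6000
conservative_faculty = 1000

liberal_terminations = 30
conservative_terminations = 10

share_liberal = liberal_terminations / (liberal_terminations + conservative_terminations)
rate_liberal = liberal_terminations / liberal_faculty
rate_conservative = conservative_terminations / conservative_faculty

print(f"Share of terminations that are liberal:       {share_liberal:.0%}")      # 75%
print(f"Termination rate among liberal faculty:       {rate_liberal:.2%}")       # 0.50%
print(f"Termination rate among conservative faculty:  {rate_conservative:.2%}")  # 1.00%
# Liberals make up the large majority of those terminated, yet conservatives
# are terminated at twice the rate -- exactly the distinction Sachs draws.
```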

But it is clear that Beauchamp misrepresented Sachs’ data. In fact, following Sachs’ confirmation that his data was misrepresented, Beauchamp had to issue a correction for the second source I focused on in my rejoinder as well.

Let’s recap then. In my initial essay I claimed that Beauchamp misrepresented data from the FSP and Sachs. Beauchamp has actually issued corrections for his treatment of both of those studies:

Clarification: An unclear phrasing in the original piece suggested that liberal professors are at higher risk of being dismissed for political views than conservatives. The intent was to say they are more frequently dismissed for their views. A separate phrasing did not reflect the date of the original publication of Ungar’s data analysis, which was in March 2018. This piece has been updated to clarify both points.

Given this reality, it’s not clear exactly what Beauchamp takes himself to be “proving” in his attempted rebuttal. Yet because Beauchamp seems to put a lot of weight on the fact that he actually analyzed three sources, let’s consider the third as well:

Foundation for Individual Rights in Education (FIRE)

More Essays, More Problems

As the preceding sections showed, Beauchamp’s attempted rebuttal did not dislodge any of the core criticisms I offered of his piece. If anything, he dug the hole deeper by adding FIRE into the mix as yet another institution whose data he misrepresented and underutilized (rather than simply ignoring, as I initially suggested).

But there are still a few more issues we have to flag. First, a problem I alluded to in my original essay, but which needs to be rendered more explicit because Beauchamp doubled down on the error in his attempted rebuttal: he repeatedly claims that the data from Sachs, the FSP and FIRE all seemed to tell “the same story” — “dozens” of incidents. This is a basic statistical error.

In fact, all of these datasets speak to different phenomena — disinvitations vs. faculty firings vs. campus protests, etc. – meaning they each tell a different story. Given that the cases in each of the sets are generally non-redundant, they would actually need to be combined (i.e. added together) in order to tell the “same story” about campus incidents in general. So let’s do that:

Between 2016 and the time Beauchamp wrote his essay, the FSP database had come to include 90 campus incidents. There were also 43 incidents from Sachs’ database on faculty firings within this period, and 88 disinvitations from the FIRE dataset. Summing them up we can see that, just from the narrow range of data Beauchamp himself cited, a more accurate description is that there were “hundreds” of free speech incidents, not “dozens,” in the last two years. Had Beauchamp incorporated the full range of publicly-available data from FIRE and HxA, both sources he cited in his original essay, the number would be much higher still.
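
For transparency, the arithmetic behind “hundreds” rather than “dozens” is just the sum of the three counts cited above; here is a minimal sketch using the figures as quoted in this essay (not a fresh pull from the underlying databases):

```python
# Incident counts as cited above, covering 2016 through the time Beauchamp
# wrote his essay. These are the figures quoted in this piece, not a live
# query of the FSP, Sachs, or FIRE data.
incident_counts = {
    "FSP campus incidents": 90,
    "Sachs faculty terminations": 43,
    "FIRE disinvitations": 88,
}

total = sum(incident_counts.values())
print(f"Combined incidents across the three sources: {total}")  # 221
# Treating the three sets as largely non-overlapping (as argued above),
# the combined total lands in the hundreds, not the "dozens."
```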

And although we actually don’t have to speculate about all the incidents that HxA, FSP, FIRE and Sachs fail to capture in order to clearly see that Beauchamp underrepresented campus incidents in his essay(s) – it is worth noting again that, even collectively, these datasets are nowhere near comprehensive in capturing incidents of suppression of speech or ideas. As I flagged in my initial rejoinder:

“All of us are only looking at situations that involve terminations, make the news, or end up in court… We are not able – even collectively – to capture all publicly-available incidents. We will never be able to capture other, likely far more prevalent, incidents of suppression of speech or ideas that do not end up in major media outlets, in courtrooms, etc. As a result, the default assumption should be that the problem is likely worse than the available data suggest (maybe not by much… but also, maybe by a lot).”

II.

In addition to doubling down on previous mistakes, Beauchamp concludes his essay by committing a few new errors that are worth noting: First, he again attributes to me a belief that there is a free speech crisis, despite my repeated assertions — in the very essay he is responding to — that I do not actually hold that view. For instance:

“I do not have any particular investment in whether or not there is a free speech ‘crisis’ (I try to avoid this kind of language myself, as a rule). Nor do I have any stake in whether or not the contemporary cohort of young people (iGen) are profoundly different from previous generations. My only aim here was to debunk low-quality research / analysis on these topics, not to argue for or against any particular position on them.”

I’m not sure how I could have made my own position any clearer. Yet Beauchamp nonetheless structured his entire “rebuttal” as an attempted debunking of a belief that I explicitly do not hold (even calling his attempted rebuttal the “Myth of the Campus Free Speech Crisis” as though that is any kind of refutation of my own position). See: Straw-man fallacy.

Second, he runs together the issues of an alleged “free speech crisis” and “liberal bias” in academia — in an attempt to dismiss both.

Example:

“So why does this all matter? It matters because claims of a campus free speech crisis (al-Gharbi’s piece included) unintentionally bolster a right-wing narrative that the campus is a haven of out-of-control liberalism — and that something dramatic needs to be done to address that. In a vacuum, the notion of promoting ‘viewpoint diversity’ is laudable. But we aren’t operating in a vacuum: We’re operating in a world where Republican legislators are using allegations of a campus free speech crisis and liberal bias among the academy to further efforts to crack down on individual freedom.”

We should definitely separate the issue of the “free speech crisis” from the “liberal bias” in academia. Here’s why:

As I have repeatedly stated in both this essay and the previous one, it is a live debate whether or not there are significant changes underway in terms of how young people view speech — and whether those changes would constitute a “crisis” even if established. Skepticism here is perfectly fine (notwithstanding Beauchamp and Yglesias’ failed attempts at “proving” there is no crisis).

However, the evidence of deep political bias in social research fields is far clearer: It affects how social problems are defined and studied. It affects how committees of professors make decisions in peer review, hiring, promotion, grad school admissions and beyond. It affects which materials are assigned to the curriculum and how they are engaged. It affects the opinions of religious, conservative, rural, low-income and/or minority students about whether the academy has a place for them, or whether they would be better suited elsewhere. It affects how policymakers and the public evaluate the credibility or utility of social research. And it affects virtually all of these things in a negative way.

Heterodox Academy has built a library with a sampling of peer-reviewed empirical studies highlighting this phenomenon and its impacts in various social research fields – a corpus that merely scratches the surface of the available evidence on this question. More evidence has been accumulating every day since HxA burst onto the scene and inspired greater interest in this issue among academic researchers.

Again, this is the problem Heterodox Academy was created to address: homogeneity and insularity within the humanities and social sciences. It is a serious problem which undermines the quality and impact of research and pedagogy. And it is not just a problem for academics: to the extent that good social research is distrusted, or biased and unreliable social research is utilized, this has negative downstream consequences for the populations scholars study and often wish to empower or assist (typically those of low socio-economic status, or those from historically marginalized or disenfranchised groups).

Conclusion: “Heterodox Academy, and why this debate matters at all”

Let me conclude by returning to the theme I led with: in this highly-polarized political moment, it is generally assumed that if someone is pushing back against a popular left-leaning narrative, or espousing a view inconvenient for the left, then they are de facto aligned with the right, intentionally or not. Beauchamp’s rebuttal attempt provides a great example of this fundamentalist thinking: highlighting systemic political bias or threats to free speech on campus will help the right – regardless of one’s intentions – and so, apparently, we should not talk about these issues (except, perhaps, to deny they are a big deal).

I am deeply familiar with this “logic”: as a Muslim scholar who, until recently, worked exclusively on national security and foreign policy issues, it was regularly *suggested* to me that criticism of the “War on Terror” – especially by “people like me” — provided cover or ammunition for al-Qaeda, ISIS and their sympathizers. In the view of these critics (mostly on the right), I was aiding and abetting “the enemy,” intentionally or not.

There was even an article published in the National Security Law Journal which argued that I, and academics like me (by which the author seemed to mean: Muslim, left-leaning, and politically “radical”) should be viewed as enemies of the state — and could legitimately be targeted by national security and law enforcement agencies. This article was eventually retracted, and its author forced to resign from his position at West Point (as described in the Washington Post here). But suffice it to say, I *get* the kind of narrative Beauchamp is trying to spin here, and I reject it whole-cloth.

I challenge U.S. national security and foreign policy precisely to render it more effective, efficient and beneficent – because I actually have “skin in the game” with regards to how the military is deployed. I relentlessly criticize bad research on Trump and his supporters because it is important for the opposition to be clear-eyed and level-headed about why he won – to help ensure it does not happen again. A similar type of motivation undergirds my critique of Beauchamp and Yglesias:

It does not help the left or academics to respond to distortions and exaggerations on the right by denying that there is any significant problem. It is especially damaging for “wonks” or academics to dress up these kinds of political narratives as social research – even more so if this “research” suffers from major errors or shortcomings like the essays criticized here.

Such a strategy is self-defeating because it is the left, those in humanities and social sciences, those from historically marginalized and disenfranchised groups, and those who seek to give voice to these perspectives or to help these populations, who stand to lose the most if the credibility of social research is further eroded due to perceived partisanship.

One brief example from an essay Jonathan Haidt and I wrote for The Atlantic:

  • Most of the major “free speech” blowups have happened at elite private schools (or “public Ivies” like Berkeley) – which are disproportionately attended by upper-income and white students, and disproportionately staffed by faculty who are white and male.
  • Yet, which schools are paying the cost for public dissatisfaction about the state of higher ed (driven in large part by these incidents at elite, private institutions)? Public land-grant schools like University of Arizona (my alma mater): the very schools that are most likely to educate lower-income and minority students, and the very schools that are most likely to have tenured or tenure-track professors that are women and minorities.
  • Within these schools, which programs are first on the chopping block? Humanities and social sciences – the very fields in which women, blacks and Hispanics are most likely to hold professorships, and in which students of color and women are among the most likely to enroll.

Hence, what Heterodox Academy is trying to achieve isn’t something “laudable” in the abstract, for those “operating in a vacuum” (as Beauchamp describes). The reverse is true: some people have the luxury of ignoring or denying the problem because they are not directly grappling with the fallout. Beauchamp graduated from Brown and the LSE (Yglesias, from Harvard). For lack of a better way to say this: It shows.

Yet, if concerned about social justice, it is absolutely essential for those who are part of elite institutions (including those at my current home, Columbia) to understand these dynamics, and to be cognizant of the way their actions can have ramifications for less privileged students and faculty, especially those at less insulated (i.e. virtually all other) colleges and universities.

This is a tough pill to swallow. I get why many on the left, especially at elite universities and media outlets, would rather just say “nothing to see here,” than to confront these realities. But it will not do, for all of us to simply close ranks and insist “there is no problem, we will make no changes.” Because there is a problem — and change is coming to institutions of higher learning, one way or another.

At Heterodox Academy it has always been our hope and expectation that when professors and administrators come to understand the seriousness of the challenge we face, they will rise to the occasion — out of their own commitment to truth and rigor (or self-preservation!) — and correct course while we still have choices regarding how our institutions and practices are best reformed. In order to facilitate these efforts, HxA produces and consolidates research, tools and strategies to help university stakeholders understand and address the lack of viewpoint diversity, mutual understanding and constructive disagreement within institutions of higher learning. Soon, we hope to foster networks within and between research fields (or institutional roles) to further accelerate the reform process.

However, we do all this with an acute awareness that if we fail in our mission — if social researchers cannot restore sufficient faith in our work, and in our academic institutions — then we are likely to see continued declines in enrollments, and even more ham-fisted and harmful legislation of the sort Beauchamp highlighted. In fact, a segment from the previous essay provides a fine note to close out this whole discussion:

“[Dr. Ungar] also posits that, regarding the myriad laws being passed to help “fix” institutions of higher learning, often these “cures” seem to be worse than the “disease” they are trying to address. Many in Heterodox Academy share these sentiments. In fact, HxA Research Associate and NYU Law student Nick Philips has argued both of these points in recent publications: campus conservatives must check their own trolling; attempting to legislate away the socio-political tensions within universities is probably a bad idea. FYI: Nick is a conservative. These do not have to (and should not) be partisan issues.”

No doubt, there is an active and cynical campaign by some on the right to reduce the complicated challenges facing universities into wedge political issues. But here’s the thing: we actually don’t have to oblige them (by reflexively adopting a simplistic position, diametrically opposed to theirs). We can steer this in a different direction. And I hope that’s what we do.
