heterodox: the blog
The Self-Censorship Crisis in Higher Ed: How Accurate is the Data? (Part 1)
This blog post is part of our “Heterodox Dialogues” series, which models constructive disagreement among authors who hold opposing or conflicting views on topics. Through an exchange of essays, authors refine their perspectives, find constructive compromises, and offer new solutions.
This dialogue is between John Wilson, HxA Writing Fellow and critic of campus self-censorship research, and two members of Heterodox Academy’s research team, Shelly Zhou and Steven Zhou, who produced HxA’s 2021 Campus Expression Survey.
Zhou and Zhou’s Take
Self-Censorship on Campus Exists, and Surveys Can Help Us Understand Why
Learning from different ideas and opinions is vital for quality education, so recent reports indicating that students may be censoring their ideas and opinions raise concerns about the state of education. However, these reports have not been without debate. This essay responds to an article by Heterodox Academy writing fellow John K. Wilson critiquing the use of tools like our Campus Expression Survey to assess self-censorship on college campuses; in it, we explain why we still believe these surveys provide invaluable insight.
Most important, Wilson and our team share the same goals. He argues that colleges ought “to develop a culture where people are both more tolerant and forgiving and more willing to listen and respond to ideas they disagree with.” Wilson continues, “Colleges should educate students about their rights to express their ideas, encourage the expression of controversial beliefs and discourage the idea of ‘reporting’ students with bad beliefs.” These are the values that Heterodox Academy stands for, and our research team hopes these messages are the takeaways from the reports we publish. On this front, we are thankful that Wilson and our team are working toward the same ultimate goal of creating freer, more honest spaces of inquiry on college campuses.
Are Self-Report Surveys Hopeless?
Wilson’s primary disagreement lies in the use of self-report survey data to demonstrate the prevalence of self-censorship. More specifically, he argues that there could be many other explanations for students responding “never” or “frequently” to a question about whether or not they recall self-censoring. In his example, a student might answer “never” but later in an interview setting recall a few examples of self-censoring.
Without getting into the minute details of statistical assumptions surrounding survey-based research (the field of study is called psychometrics, if you’re interested), these “errors” are part of the package when it comes to survey analysis. The reality of survey research is that this type of error always exists: Participants might sometimes answer “never” when they meant to answer “a few,” and they might sometimes answer “frequently” when they meant to answer “sometimes.” All surveys of this kind have this flaw; no survey can claim to perfectly capture a participant’s response without any error.
But the key assumption with survey analysis is that this error is “random.” In other words, sometimes people will answer higher than their “true answer,” and other times people will answer lower than their “true answer.” Across a large enough sample and with enough items, these errors (some positive, some negative) will cancel each other out. Like any other survey-based research, we make this assumption in our analysis, and we ask over a dozen questions about self-censorship from several different perspectives. The findings were consistent across the board: Most students report some form of self-censorship. While Wilson is right that our survey is imperfect, his example does not imply that our conclusions are fundamentally incorrect.
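The cancellation argument above can be made concrete with a small simulation. This is an illustrative sketch only, not part of the CES methodology: it assumes each respondent's recorded answer on a hypothetical 1–5 reluctance scale equals their "true" answer plus symmetric random noise, and shows that the sample mean of the noisy responses still closely tracks the true mean once the sample is large.

```python
import random

random.seed(42)

def observed_mean(true_scores, noise_sd=0.7):
    """Mean of recorded answers, where each answer is the respondent's
    true score plus symmetric Gaussian error, clamped to the 1-5 scale."""
    observed = []
    for t in true_scores:
        resp = round(t + random.gauss(0, noise_sd))
        observed.append(min(5, max(1, resp)))
    return sum(observed) / len(observed)

# Hypothetical "true" reluctance scores for 10,000 simulated students.
true_scores = [random.choice([2, 3, 3, 4]) for _ in range(10_000)]

true_mean = sum(true_scores) / len(true_scores)
est = observed_mean(true_scores)
print(round(true_mean, 2), round(est, 2))  # the two means land close together
```

Because the per-respondent errors are equally likely to push a response up or down, they average out across the sample; the same logic underlies the claim that individual misreports ("never" instead of "a few") need not distort aggregate conclusions, so long as the errors are not systematically biased in one direction.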
Moreover, the example he provided was not a direct quote from our survey. Our survey asked students about their reluctance to discuss various controversial topics. The same assumption applies here: Some students might respond “very reluctant” when they truly are “somewhat reluctant,” while others might respond “very comfortable” when they truly are “somewhat comfortable.” Subjective variables like these are impossible to pinpoint exactly, but with enough items, we can closely estimate them by controlling for the random errors that arise throughout a survey. Wilson does quote a different question on our survey, one regarding the degree to which students agree that the climate on campus causes self-censorship, but his comment on this question appears to actually support our argument that self-censorship exists: “If a majority of students personally self-censor, everyone should agree that ‘some’ students engage in self-censorship.” We invite Wilson to clarify his position on this question in a follow-up response.
Has Self-Censorship Changed Over Time?
Second, Wilson cites external evidence that self-censorship has increased over the years. Interestingly, he argues that this is a positive shift, because society has changed dramatically to become a lot freer and more diverse since the 1950s, when formal censorship was “at its peak” due to McCarthyism. However, the authors who conducted this study admitted in their paper:
The fear of being socially isolated owing to holding minority viewpoints encourages people to keep their mouths shut. Over time, people test whether their views are acceptable to others; when they find they are not, they shut-up. Self-censorship makes it even less likely that positive reinforcement for minority viewpoints will be encountered by others. Without reinforcement, more people remain quiet, with the consequence that orthodox views are established and ascend into domination. (p. 14)
Wilson’s interpretation of this study’s findings was that increasing diversity in a freer society leads to more self-censorship to avoid social isolation, and that this isn’t a bad thing. We would instead interpret the findings to argue that an increasing fear of social isolation shows that people within society have become less tolerant of diverse beliefs regardless of governmental policies, and this points toward a major need for initiatives that reduce such intolerance and fear-driven self-censorship. Interestingly, the same study actually found that people with higher levels of education reported engaging in more self-censorship. Shouldn’t higher education encourage and train students to engage in less self-censorship and more honest, courteous debate and discussion?
Are There Different Kinds of Self-Censorship?
Finally, Wilson makes the point that there may be different kinds of self-censorship. He argues that one possible type of self-censorship “is inevitable and essential to any society”; for example, staying silent in large lecture classes where discussions weren’t encouraged, giving others a chance to speak, or waiting until one is sure of their own facts or opinions before publicly stating them. However, these are not examples of self-censorship based on classic definitions of the construct. Self-censorship is specifically “withholding of one’s opinion around an audience perceived to disagree with that opinion” (Hayes et al., 2018, p. 298). The examples that Wilson provided are more likely examples of social norms, common courtesy, and thoughtful processing, not self-censorship.
Moreover, to further demonstrate that we are examining the “negative kind” of self-censorship that Wilson admits to be problematic, we asked students why they were reluctant to discuss controversial topics. In 2020, 60% of students reported self-censoring because they thought other students would find their views offensive, 31% were worried about others posting critical comments about them on social media, and 23% were even worried that others would file a harassment charge against them. Wilson briefly comments that there’s no evidence that such repercussions would actually happen if a student shares a controversial viewpoint—and he’s partially right; HxA’s upcoming 2021 CES report addresses this point, finding that controversial viewpoints might indeed elicit some negative social consequences but to a much lesser extent than people think. That being said, the fact that students perceive such negative consequences is sufficient cause for concern that the campus climate in higher education has created and sustained such perceptions, even if the consequences are imagined.
To be clear, we do not argue that our survey is perfect or that it tells the whole story of expression on campus, and we agree with many of Wilson’s concerns regarding the use of self-report surveys to measure a subjective and complicated concept like self-censorship. We certainly advocate for, and are working toward, better assessments of self-censorship via more objective measures, but self-report plays an important role in such assessments because self-censorship occurs internally. After all, how could you know that someone had a thought and withheld it for fear of backlash unless they told you about it? Thus, we have a large amount of data across many items that coalesces to support our main argument: Self-censorship on college campuses exists, occurs due to perceived negative interpersonal consequences of speaking one’s mind, and cannot simply be explained away as social norms or common courtesy.
The Problems with Surveying Self-Censorship
I want to thank the Heterodox Academy research team of Shelly Zhou and Steven Zhou for their thoughtful discussion about my critique of campus self-censorship surveys. Although I appreciate their perspectives, I am still concerned that top-line results from surveys about self-censorship are often used to spin a distorted narrative about political correctness. Pano Kanelos, president of the University of Austin, even cited such surveys as evidence of a terrible crisis that requires a new university. I am much more skeptical than Zhou and Zhou of a survey’s ability to measure self-censorship. I also question whether surveys can help colleges develop useful reforms to address the problem of campus self-censorship.
In their critique, Zhou and Zhou argue that “the key assumption with survey analysis is that this error is ‘random’” in participant responses. I disagree. Beyond any random error, there is an uncertainty inherent in responses to questions about self-censorship because self-censorship can mean many things. On the one hand, people censor themselves for fear of being punished. On the other, we all censor (or edit) our thoughts to be more persuasive, to avoid conflict, and for myriad other reasons. Because self-censorship conflates many different meanings, from the trivial to the serious, we don’t know what students are actually thinking about when they answer these kinds of surveys.
The key findings from the 2021 CES survey were that 60% of students self-censor (reporting a reluctance to discuss at least one of five controversial issues in class) and 63% of students agreed that “the climate on my campus prevents some people from saying things they believe because others might find them offensive.” Students who personally self-censor should realize that “some students” on their campus self-censor. If 60% of students self-censor, then the factual answer about whether some students self-censor on campus should always be yes. And even students who don’t self-censor should be aware of such a pervasive problem among their peers; I suspect the vast majority of students would report that some students suffer sexual harassment on their campus, even if only a small percentage have a personal experience of being harassed. The fact that only 63% see self-censorship on their campus raises the possibility that we don’t understand what students mean in their responses.
This uncertainty is also indicated by another interesting finding from the 2021 CES report: The apolitical students (who answered “haven’t thought much about this” or “none of the above” about their political party) were the most likely to report their own self-censorship about politics and the least likely to perceive student self-censorship on their campus.
Reluctance to speak about politics in class was highest for the apolitical students — “haven’t thought” (60.8%) or “none of the above” (47.4%) — compared with the Republican (39.4%), Independent (43.6%), and Democrat (33.8%) students. Yet agreement with the “climate on my campus” question was lowest for apolitical students — “haven’t thought” (48.4%) and “none of the above” (56.6%) — compared with the Republicans (71.0%), Independents (64.5%), and even the Democrats (62.8%). Apolitical students self-censored the most but reported the lowest levels of self-censorship on their campus.
That’s a very odd result because the students who self-censor should be the ones most aware of self-censorship on campus. This suggests that we don’t really know what students mean by their answers to questions about self-censorship. It’s quite possible that the apolitical students are not actually self-censoring but instead are simply uninterested in controversial issues, and their “reluctance” to speak about them comes more from indifference than repression.
Zhou and Zhou understand the nuances of these issues and the problems involved in surveying self-censorship. However, the problem is that nuance doesn’t make headlines, and few people talk about the careful caveats. Instead, survey research has become a blunt instrument wielded to claim proof of a terrible problem, such as in University of Virginia student Emma Camp’s op-ed in the New York Times, where the author uses the 80% self-censorship figure from FIRE’s 2021 College Pulse survey (which provides a more alarming number than the 60% response in the CES survey).
How Surveys Can Underestimate Self-Censorship
To be clear, I’m not trying to diminish the problem of self-censorship. In fact, surveys may understate the levels of self-censorship. For example, the Student Press Law Center recently reported, “Many of the students who share their stories of self-censorship with SPLC indicate that they didn’t know they were engaged in self-censorship at the time.” Elizabeth Niehaus’ qualitative research suggests that some left-wing students might be less likely to report self-censorship even when they’ve personally experienced it. The biggest problem with self-censorship can be when people don’t realize that they’re self-censoring because it’s so normalized. That’s why I argue that high levels of reported self-censorship may actually be a sign of healthy discourse and viewpoint diversity, because students may be more sensitive about offending peers when they are more aware of different viewpoints. We don’t know if self-censorship in a polarized environment is a normal way to deal with conflict or a serious impediment to dialogue. Nobody self-censors when everybody agrees. Low levels of reported self-censorship, such as during the repression of McCarthyism, are often more alarming, signaling ideological homogeneity.
What Causes and Cures Self-Censorship?
The 2021 CES survey notes that “the most common reason (56%) for students’ reluctance to discuss controversial topics in class was concern that peers would make critical comments to others after class. However, most students reported that they would not engage in such critical actions.” Likewise, while 19.8% of the students surveyed had self-censored in part out of fear that formal disciplinary complaints might be filed against them, only 9.2% of the students said that they might make such complaints. The survey points out, “Negative consequences for speaking out about controversial topics might be more imagined than real.”
But I think this underestimates the power of a small number of people to cause a chilling effect on a much larger population. If only 9.2% of students routinely filed complaints against peers for expressing controversial views in class, it would cause fear and self-censorship. However, I don’t think that’s happening. I’ve seen no widespread evidence of students being disciplined for their political opinions in class.
Likewise, it is quite possible that critical comments from a minority of students (22.5%) might easily cause a majority of students to self-censor. But even if it were possible to convince all students to refrain from “critical comments,” we should not desire it. The fear of discipline or bad grades is a problem colleges can address by protecting students from formal penalties for their views. But colleges can’t restrict critical comments by peers. Self-censorship cannot be the cure for the problem of self-censorship. We need to teach students how to resist self-censorship in the face of criticism, not seek to have colleges free from criticism.
The Self-Censorship of Conservatives
One key issue in the surveys of self-censorship is the alleged suppression of conservatives, which is why a new Florida law requires colleges to conduct surveys of self-censorship and political beliefs. But do self-censorship surveys actually show discrimination against conservatives? We don’t know. There is nothing surprising about higher levels of self-censorship among groups with minority views. Minorities are less likely to feel that they are part of a group of like-minded peers, and they may engage in self-censorship to avoid conflicts even when no one is trying to censor them. FIRE’s surveys show that at conservative-dominated colleges such as Brigham Young University, liberals self-censor at much higher rates.
Because self-censorship is a subjective reality, it doesn’t require the objective existence of discrimination. You can create a reality that conservatives are more likely to self-censor simply by widely reporting that conservatives are the victims of censorship (even if they are not). This can lead to a self-perpetuating cycle where conservatives will react to that perceived fear by engaging in self-censorship at higher levels, which they will then report on surveys, which in turn will be widely reported and cause more conservatives to self-censor. There is no way for surveys to distinguish between legitimate and imagined causes of self-censorship.
Why Values, Not Data, Must Drive the Solutions to Self-Censorship
If students are regularly being punished for their opinions in classes, then we need to know this and try to prevent it. If students have a baseless fear of being punished, then we need to reassure students this is incorrect and help them develop the courage to express their convictions. But most surveys of self-censorship do not help us understand these issues clearly. They do not tell us exactly what students think self-censorship means or the context in which it happens, let alone whether self-censorship is a reasonable response to the situation.
Now, I do believe that colleges should take actions to reduce self-censorship, regardless of its extent or causes or victims. Colleges should fix badly written speech codes with vague restrictions that might cause students to self-censor, as well as make affirmative statements about the importance of free expression on campus. Colleges should fix flawed procedures if protected speech is being investigated and provide opportunities to appeal unfair grading.
And colleges should work to improve the culture on campus by encouraging faculty and students to welcome dissenting views and subject all ideas (including their own) to reasoned critique, and by supporting extracurricular events that promote discourse and debate.
But colleges should take these steps regardless of the extent of student self-censorship reported in surveys. What’s more important than the overall levels of self-censorship is figuring out the causes of and solutions for the most serious forms of self-censorship. Survey research may provide us with some insights about these causes, but the solutions to self-censorship require colleges to make a deeper commitment to intellectual freedom for everyone. To confront the problems of self-censorship, I think we are better off being guided by values rather than by data.
Enjoyed this article? Read part 2.