The Self-Censorship Crisis in Higher Ed: How Accurate Is the Data? (Part 2)
Zhou and Zhou’s Take: Let’s Not Throw Out the Baby with the Bathwater
We are pleased to have this opportunity to engage with John K. Wilson on such a meaningful level while exploring the strengths and weaknesses of survey-based research on self-censorship. In his most recent response, Wilson raises interesting ideas in reply to our defense of self-censorship surveys, which we are eager to engage with in this piece. We respond to his thoughts in the order of their presentation.
“Error” in Surveys
Wilson disagrees with our presentation of the assumption of randomness in survey data (in other words, the assumption that random error in survey responses cancels out in the aggregate). We want to first clarify that his claim here actually conflates two different topics: random error in survey data and the definition of self-censorship. The assumption of randomness in survey data is foundational to survey-based research, and Wilson's response does not actually dispute it.
Instead, Wilson’s concern is that beyond random error, survey data might show bias because of differing interpretations of the definition of self-censorship. This concern is different from the assumption of randomness that we described in our previous essay. We address this concern below.
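To make this distinction concrete, here is a minimal, purely illustrative simulation (a sketch in Python, using made-up numbers rather than CES figures): random sampling error shrinks as responses accumulate, while systematic bias, such as respondents consistently misreading a question, does not.

```python
import random

random.seed(42)

TRUE_RATE = 0.60   # hypothetical true share of "yes" answers (illustrative)
BIAS = 0.10        # hypothetical systematic bias, e.g., a misread question

def survey(n, bias=0.0):
    """Simulate n yes/no responses: each respondent answers 'yes' with
    probability TRUE_RATE + bias; the rest is random sampling error."""
    return sum(random.random() < TRUE_RATE + bias for _ in range(n)) / n

for n in (50, 500, 5000):
    print(f"n={n:5d}  random error only: {survey(n):.3f}   "
          f"plus systematic bias: {survey(n, BIAS):.3f}")

# As n grows, the unbiased estimate converges toward TRUE_RATE (0.60),
# while the biased estimate converges toward 0.70: collecting more
# responses cancels random error but cannot cancel systematic bias.
```

This is why the two topics must be kept separate: more data answers the first concern but not the second.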
Self-Censorship Definition
Wilson’s main concern appears to be in the definition of self-censorship. Self-censorship is specifically “withholding of one’s opinion around an audience perceived to disagree with that opinion” (Hayes et al., 2018, p. 298). This definition of self-censorship goes beyond a general reticence to speak one’s opinion; the key distinction is the reluctance to do so in front of an audience that seems to disagree with one’s opinion. By specifying this social context, the definition heavily implies that the reason one would self-censor is to avoid disagreement and judgment from others. Therefore, self-censorship involves two parts: (1) the act of withholding one’s opinions and (2) the motivation of avoiding disagreement.
Addressing the first component of self-censorship, the Campus Expression Survey (CES) asks about withholding one’s opinions on a certain topic in a particular setting. It specifically asks, “Think about being at your school in a class that was discussing [INSERT TOPIC]. How comfortable or reluctant would you feel about speaking up and giving your views on this topic?” It does not simply ask students if they “self-censor” and, in fact, avoids using this word. The specificity of this question does not leave much room for interpretation about the first part of self-censorship: the act of withholding one’s opinion.
The second part of self-censorship, the reason for withholding one’s opinion, seems to be Wilson’s primary concern. He argues in his first essay that students might self-censor for different reasons: some good (e.g., to allow others an opportunity to talk or because they realize, upon thinking it through, that their opinion is unsound) and some bad (e.g., to avoid criticism, conflict, or punishment). We responded by arguing that the former are examples of social norms, common courtesy, and thoughtful processing and do not meet the second part of this definition of self-censorship because they are not specific to a disagreeing audience.
The CES addresses the second part of this definition by asking respondents who reported that they withhold their opinions about their reasons for doing so: “If you were to speak your opinions about one or more of these controversial issues during a class discussion, would you be concerned that each of the following would occur?”
Most students (61.4% in 2019 and 60.3% in 2020) indicated they would be concerned that “other students would criticize my views as offensive.” This is not “positive” self-censorship (i.e., self-censorship meant to be more persuasive or to respect others’ diverse viewpoints), as Wilson argues. Moreover, when invited to write in “other” reasons for self-censoring, only 4.5% of students in 2020 offered positive motivations. Given these responses, we argue that people are reporting on the negative form of self-censorship, the kind intended to avoid disagreement (Hayes et al., 2018) and its accompanying negative social consequences. Wilson agrees that this negative form of self-censorship requires interventions to ensure students can honestly express their views and perspectives in the face of criticism, and can do so respectfully and effectively.
Some Specific Numbers
Wilson identifies two specific numbers from the 2021 report that he found confusing:
First, he points out that while 60% of students reported feeling reluctant to discuss a controversial topic, only 63% of students agreed that “the climate on my campus prevents some people from saying things they believe because others might find them offensive.” He argues that this does not make sense because “students who personally self-censor should realize that ‘some students’ on their campus self-censor.”
We submit that this is exactly what is happening. A little over half of students report feeling reluctant to discuss at least one controversial topic (and some on many more), and a similar share agree that the campus climate is a cause of such reluctance. Wilson’s argument that students should “see others self-censoring” and thus agree more strongly with the campus-climate item ignores the fact that self-censorship is not visible to others: by definition, it is the absence of an action, paired with a specific motivation.
Second, Wilson points out that apolitical students showed the highest levels of reluctance to discuss controversial topics, but the lowest levels of agreement with the statement “The climate on my campus prevents some people from saying things they believe because others might find them offensive.”
We remind readers that results from small subgroups, like this apolitical subsample, are unreliable. Specifically, only 97 respondents categorized themselves as apolitical by endorsing the response “Haven’t thought much about [politics],” compared with 671 in the Democrat, 358 in the Independent, and 241 in the Republican groups. We did not include the “apolitical” subgroup in the public report for that very reason: Small sample sizes produce results that are more likely due to random chance. Wilson may be right that apolitical students are reluctant to discuss politics because they are uncertain about their political views rather than because of negative self-censorship, but this small sample is insufficient to empirically support that proposition.
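For readers who want the arithmetic, here is a rough sketch of how much sampling noise each subgroup's estimates carry, using the standard normal-approximation formula for a proportion (the 50% response rate is a worst-case assumption on our part, not a figure from the survey):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion (normal approximation).
    p=0.5 is the worst case and gives the widest margin."""
    return z * math.sqrt(p * (1 - p) / n)

for group, n in [("Democrat", 671), ("Independent", 358),
                 ("Republican", 241), ("Apolitical", 97)]:
    print(f"{group:11s} n={n:3d}  margin of error: +/-{margin_of_error(n):.1%}")

# The apolitical subgroup's estimates carry roughly +/-10 percentage
# points of sampling noise, versus about +/-4 points for the Democrat
# subgroup, so its results can swing widely by chance alone.
```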
Self-Censorship and Healthy Discourse
Wilson also argues that when students realize they are self-censoring, this realization should be understood as a positive outcome because it shows that students are more aware that diverse perspectives exist and are therefore more sensitive when engaging with others who hold diverging views.
We would love for this to be true. If students were reluctant to discuss controversial topics out of respect for others’ different views, that would be a major step toward open inquiry and respectful disagreement. However, the data we collected on why students are reluctant to speak their opinions indicates that they feel this way because they fear punishment, mostly from peers, rather than because they are being sensitive to their classmates’ differing opinions. If other research has found evidence of the latter in a large, representative sample of college students, we would love to hear about it. Without evidence to support it, we must point out that this statement by Wilson does something he has criticized us for doing: presuming that participants are internally defining self-censorship in a particular way.
The Actions of a Few
Finally, Wilson is right to note the “power of a small number of people to cause a chilling effect on a much larger population.” He further notes that, if a small number of people are criticizing others behind their backs and this criticism is the root cause behind self-censorship, then we should not be trying to eliminate such criticism but rather teach students to withstand it.
We wholeheartedly agree. Based on the most recent 2021 report (page 7), it does seem that the critical actions of a few students are creating a campus climate of fear that leads to many students’ reluctance to discuss controversial topics. We do not argue that we should prevent this criticism, though. Instead, we should work toward “establishing a classroom climate that encourages constructive disagreement and viewpoint diversity.”
As Wilson notes several times, we are on the same team here, working toward healthy discourse. Wilson disagrees with the approach that we have taken — using survey data to illustrate the need for healthy discourse — but we are aligned on the end goals.
Communication of Survey Findings
At this point, we turn to an overall thread woven throughout Wilson’s response. Wilson is concerned with how survey data might be misused. He writes, “[The Heterodox Academy research team members] understand the nuances of these issues and the problems involved in surveying self-censorship. However, the problem is that nuance doesn’t make headlines, and few people talk about the careful caveats.” He cites Emma Camp’s New York Times op-ed and a new Florida law as examples of how survey data on self-censorship can be misused.
We are just as concerned as Wilson is over how data is used. We fully agree that survey data has many limitations and nuances that a short public-facing report cannot fully communicate, which can lead to misinterpretation. There is an entire field of study on the ethics of data visualization, with scholars investigating how best to communicate data in an effective, ethical, and accurate manner.
These concerns shouldn’t stop us from trying. Yes, data can be misused, and there’s probably no way to be 100% confident that we can prevent misuse of data — after all, we can’t (and shouldn’t) control what others choose to do with data. Heterodox Academy is committed to the pursuit of truth, and we strongly believe in the importance of carefully reporting as much data as possible to enable others in their own search for truth. That’s why even our short public-facing reports include footnotes to address some of the nuances and caveats that we have been discussing. Moreover, we post fully transparent details of our methodology, raw data, and even analysis code to allow others to replicate our findings and investigate for themselves. It’s also why we’re engaging in this important dialogue, so readers can see the strengths and weaknesses of this data and evaluate it for themselves.
Values and Data
Wilson ends by arguing that we should be “guided by values rather than by data” in our shared end goal of improving campus climate and pursuing open inquiry. He even says, “Colleges should take these steps [toward encouraging diverse views] regardless of the extent of student self-censorship reported in surveys.”
We must strongly disagree with this point. In fact, if our surveys had found that very few students reported “reluctance to discuss controversial topics,” then we would report that developing programs and initiatives to specifically address student self-censorship is not worth the time or money — regardless of how strongly we value it.
Decisions should be made based on values and data. No data set is perfect: Our Campus Expression Survey data certainly isn’t, but it offers a clue toward what is happening at a broad level on campuses across the United States. By the same token, if we were to rely entirely on personal values or the occasional interview or anecdote, we could end up with a highly biased perspective. We must rely on large, representative samples, even if flawed, to get a more accurate view of the bigger picture.
Survey data is indeed flawed, and some people might misuse it, but we cannot throw the baby out with the bathwater, so to speak. The Campus Expression Survey is not meant to be the ground truth on which everyone else must base their decisions. Interviews, personal values, and debates like these are instrumental in adding depth and nuance to survey data.
Despite its limitations, the Campus Expression Survey and other survey research are still an important step forward in our collective pursuit of the “true story” of what is happening in college campus discourse. If we were to ignore all survey data, we could end up sinking thousands of hours of research and programming into something that impacts only a small percentage of the population.
Wilson’s Take: Questioning the Crisis Mentality Toward Campus Self-Censorship
Zhou and Zhou’s discussion about the definition of self-censorship exposes one of the core issues with campus expression surveys: These surveys ask about the respondents’ “reluctance” to speak and then frame answers as a form of self-censorship. Reluctance to speak is a perfectly normal situation, and many students likely do not equate reluctance to speak with self-censorship.
This disconnect between a personal “reluctance to speak” and the perception of self-censorship as something “some students” experience shows that we should be reluctant to equate the two. The response of the apolitical students (who are the most reluctant to discuss politics in class but least likely to say that some students self-censor) is a case in point, yet Zhou and Zhou dismiss it because the sample size is “insufficient.” However, the same effect I note applies to the “none of the above” students, and this effect is found in both the 2020 and 2021 surveys for all kinds of apolitical students (a total of 327 respondents over two years). That suggests random error from a small sample is highly unlikely to be the cause.
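To illustrate the statistical logic here, consider a back-of-the-envelope sketch (it assumes the survey waves are independent samples; the formula and the sign argument are standard statistics, not figures from the reports):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"2021 apolitical subsample (n=97):  +/-{margin_of_error(97):.1%}")
print(f"pooled 2020+2021 sample (n=327):   +/-{margin_of_error(327):.1%}")

# Pooling the two waves more than triples the sample, cutting the
# margin of error from roughly +/-10 points to roughly +/-5 points.
# And if the apolitical pattern were pure noise, a second independent
# wave would show the same direction of effect only about half the
# time; replication across waves and across different "apolitical"
# categories makes a noise-only explanation less likely.
```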
Instead, I think we need to confront the fact that apolitical students are the most reluctant to discuss politics in class and the least likely to say that students self-censor. The meaning of this fact is complicated, but it does suggest a divergence between a personal reluctance to speak and the more ideologically driven view that students self-censor. And that indicates that students’ reluctance to speak in class is about much more than ideological intimidation.
According to Zhou and Zhou, “If students were reluctant to discuss controversial topics out of respect for others’ different views, that would be a major step toward open inquiry and respectful disagreement.” But what is the difference between “respect for others” and concern that “other students would criticize my views as offensive”? Why isn’t a desire to avoid being offensive just another form of showing “respect for others”? What if respect rather than fear is the primary driving force for this answer?
Zhou and Zhou argue that the reasons for self-censorship revealed that “people are reporting on the negative form of self-censorship” and fear of “negative social consequences.” However, all of the possible options given to students to explain their reluctance to speak in class are “negative” reasons based on fear of criticism or punishment, rather than a normal reluctance to speak in class about controversial issues. If the survey offered “out of respect for others’ different views” as an option, I suspect it would get strong support. When there are no positive options for explaining reluctance to speak, it creates a serious bias in the survey.
The Normal Practice of Self-Censorship
Sometimes self-censorship, even self-censorship meant to avoid offending others, is not so terrible. If a student believes that transgender people are doomed to burn in everlasting hell, I have no doubt that they might report feeling reluctant to express that belief at many colleges for fear of offending others, and that reluctance is not obviously a problem. We should keep this in mind if, in the classroom and even in public debates, we want to maximize the expression of thoughtful, intelligent ideas. One way we do that is by praising good ideas and discouraging unproductive ones. While the authorities must not punish bad ideas, professors and classmates are under no obligation to treat all ideas as equally valid. A culture of relativism, even if it were possible, would not be desirable.
When I went to college, it would have been very difficult for a student to feel comfortable enough to speak out in defense of transgender rights (and it still is today at more-conservative colleges). Unpopular views are always self-censored, and always have been. While colleges (and all of us) need to work on ways to help protect unpopular ideas, surveys don’t tell the full story about whether campus discourse is being suppressed.
My point about these surveys is that when they reveal self-censorship, we have no way to judge the quality of the ideas being withheld. When survey questions ask about specific fears of official punishment, they can reveal a worrisome form of self-censorship that needs to be addressed (by protecting speech and informing students that they will not, in fact, be punished).
But the existence of self-censorship, and the fear of criticism by others, is such a normal part of intellectual exchange that surveys cannot tell us if there is a serious problem or merely the illusion of one. That’s why I argue that solutions to the cultural problem of self-censorship need to be driven by values, not data. The data is simply not clear enough to guide us toward any particular reforms, nor can it tell us the extent of the true problem.
How Beneficial Are Self-Censorship Surveys?
Zhou and Zhou argue that we need surveys to avoid wasting “thousands of hours of research and programming” on efforts that benefit few people. But what self-censorship survey has ever pointed the way toward an effective solution to the problem? What specific campus reform has ever been adopted to reduce self-censorship because survey data pointed the way? What if the surveys themselves are a waste of vast resources rather than a data-driven path to reducing self-censorship?
While I see the value of certain information in self-censorship surveys, I think we need far more skepticism about what these surveys are actually identifying. There has been a proliferation of these surveys, often driven by Republican legislators in states such as Florida and Indiana who seem to push them to create a narrative for attacking academia. We need to gather good data about the enormously complex social problem of self-censorship and then be careful to question what that data actually means. Until self-censorship surveys can show a useful purpose for guiding solutions rather than promoting alarmist headlines, I think we need to focus on our values to develop ways to reduce self-censorship.