In our recent BBS paper, we argue that the lack of political diversity in scientific psychology sometimes leads to biased research. When ideology is embedded in research questions and measures, it can undermine the validity of that research.
In response to our concerns, some scholars have argued that science is self-correcting, and that political bias is already handled by the field's corrective processes.
Alice Eagly, a famed attitudes researcher, argued that social psychology is self-correcting:
Liberal, like conservative, psychological scientists are constrained by the shared rules of postpositivist science whereby research methods and findings are public, available for all to scrutinize and critique. When bias is present in research that attracts an audience, the bias is (sooner or later) exposed and then corrected.
John Jost argued similarly in an Edge forum on this issue:
This is because we, as a research community, take seriously the institutionalization of methodological safeguards against experimenter effects and other forms of bias. Any research program that is driven more by ideological axe-grinding than valid insight is doomed to obscurity, because it will not stand up to empirical replication and its flaws will be obvious to scientific peers…
The challenge I see here is that this argument assumes we already know what bias is, and how to correct it. In this case, it assumes we already know – or knew, before our BBS paper – what political bias looks like, and that we have already been correcting it.
However, the nature of bias is itself a domain of discovery. If we haven’t discovered and documented certain forms of bias – and instituted countermeasures for them – we’re unlikely to catch them.
The field currently has no formal account of political or philosophical bias. We’re not trained to detect it. We don’t have ready labels or taxonomies for it. Our approach to cultural bias is still somewhat informal, case by case. (I think it’s useful to treat the field’s political bias as a form of cultural bias.) As a result, leftist ideology is routinely embedded in research.
I’m writing this in October, 2015. This is where we are, right now, as a civilization and as a field. Let’s linger on this.
What is special about October, 2015? What is special about science in October, 2015? What is special about social psychology in October, 2015?
What I ultimately mean by “special” here is what’s special about: 1) our methods, and 2) our knowledge of bias. A core weakness of the “science is self-correcting” argument is that it assumes that we’ve already discovered and implemented the necessary machinery of self-correction by whatever arbitrary date at which the argument is made, like October, 2015.
The only thing that stands out about the time at which people make this argument is that it’s the time in which they happen to be alive. My sense is that we tend to attach primacy and legitimacy to our own era. We’ve placed ourselves atop a linear progression of science, and we assume that science has fully arrived, that every domain labeled “science” in our era is mature and self-correcting. These are strange assumptions to make.
The reality is that social psychology has not been self-correcting when it comes to political bias. We haven’t deployed any countermeasures against political bias in journals, hiring, or graduate admissions. Given what we know about motivated reasoning and the extraordinary force of political partisanship in our era, it’s not clear how we would control political bias in a politically imbalanced field without taking explicit steps to do so.
At this point, it will help to cite some examples of what we mean by political bias. None of the critics have mentioned our examples, even to dispute that they are cases of political bias. This surprised me, and I think it keeps the discussion of “bias” overly abstract and vague.
In our paper, we discussed Feygina, Jost, and Goldsmith (2010), who measured “denial of environmental realities” with items like these:
The earth is like a crowded spaceship with limited room and resources.
Humans will eventually learn enough about how nature works to be able to control it.
Disagreeing with the earth-spaceship analogy was defined as denial of environmental realities, as was agreeing with the vague estimate of future progress. In other words, the researchers treated environmentalist ideological tenets and favored analogies as descriptive facts, subject to either acceptance or “denial”.
Another example we discussed was Son Hing et al. (2007), where the researchers investigated unethical behavior. However, they conflated leftist ideology with ethics, treating a company’s decision to move its operations to Argentina to save costs as an unethical act. Here, making rational business decisions and fulfilling one’s fiduciary responsibilities to shareholders were treated as unethical.
How do we catch political biases like these? In one case, leftist ideology was conflated with ethics, and in the other, with reality itself. Social psychology doesn’t have a systematic account of these kinds of research biases, or any checks designed to prevent them. P-curves and replication efforts won’t do it. Replicating this research, per Jost, won’t touch the bias – it will just replicate it.
This is ultimately a validity problem, and that is where I see our greatest vulnerability. Social psychologists are very responsive to numerics and statistics. For example, I once told a researcher that the Social Dominance Orientation (SDO) scale is invalid because it’s written as a left-wing caricature of conservative views, and conservatives tend not to endorse the items. He responded “but it’s very reliable.” That’s a good example of where our comfort zone lies. Reliability is expressed as a number, typically (and unwisely) Cronbach’s alpha. Validity is not captured so easily by numerics, and requires careful logical reasoning.
So when Eagly, Jost, or others argue that the field is self-correcting, I’m not sure how they account for ideological biases like the above examples. When they say biases are caught by the field’s normal processes, I don’t see any processes by which political biases are caught. We might think: well, I’m catching them right now, or we caught them in our BBS paper. Remember, though, that Eagly and others are responding to our paper. They’re arguing against highlighting political bias in the field. A field is unlikely to be self-correcting if, when its political biases are highlighted, it dismisses them by insisting that it is self-correcting.
In typical social psychology contexts, McDonald’s omega (hierarchical) is the more valid estimate of the internal reliability of an instrument. Cronbach’s alpha relies on assumptions we rarely satisfy. See Dunn, Baguley, and Brunsden (2014) for more.
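To make the reliability-versus-validity point concrete, here is a minimal sketch of how Cronbach’s alpha is computed, and why a high value is compatible with an invalid measure. The data are simulated for illustration – they are not responses to the SDO scale or any real instrument:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
n, k = 500, 6
# Simulate k items driven by one shared latent factor, so they
# intercorrelate strongly and alpha comes out high.
latent = rng.normal(size=(n, 1))
responses = latent + 0.5 * rng.normal(size=(n, k))

alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.2f}")
# A high alpha only shows that the items covary. It cannot tell us
# *what* the shared factor is - coherent construct or coherent caricature.
```

The point of the sketch: alpha is purely a function of inter-item covariance. A scale whose items all express the same ideological caricature will covary just as tightly as a valid one, so “but it’s very reliable” is no answer to a validity critique.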