Can We Protect Human Subjects and Intellectual Freedom?
Years after my father retired from being a draftsman for a defense contractor on Long Island, he was able to tell us he had been working on safety systems for nuclear subs. He didn’t provide details, of course, but he did say the experience had taught him one thing: there is no such thing as a safe nuclear sub.
In retrospect, the stress of being charged with keeping people safe in a fundamentally unsafe context might explain his reaction when my sister and I decided to throw a New Year's Eve party and to tuck a little glitter into each invitation. Overhearing our plans, Dad exclaimed with visible worry, "What if someone opens the mail over a baby?!"
Years later, as an academic researcher, I started having to deal with Institutional Review Boards (IRBs), and as I did so, I kept thinking of my father’s worry: What if someone opens the mail over a baby?
The line came to mind because my dad’s obscure worry exemplified the thinking of an unfortunately common breed of IRB regulator: that you have to consider the worst possible thing that could ever happen in a study, no matter how unlikely—no matter how comically unlikely—and force researchers to plan for that.
There’s no doubt that IRBs have an extraordinarily important role to play in inquiry: they are tasked with ensuring that human subjects of research are fully informed and that the risk to them is minimized. In some cases of biomedical research, IRB members are tasked with the job my dad had: keeping people safe in a fundamentally unsafe context. It’s difficult to prove a counterfactual, but it’s surely the case that IRBs have prevented cases of bad “consent” and unnecessary harm.
Nevertheless, as well-meaning IRB members find themselves caught in vortices of regulation and liability fears, they often end up requiring consent forms so long, cold, and complex that no ordinary human actually reads them before signing, undermining the alleged point of the process. Additionally, in forms of research where the subject is typically not physically touched and the risk is low, like some types of sociology and psychology, IRBs too often overcomply and overreach.
By this point, IRBs have come to be seen as gatekeeping bridge trolls by many of the researchers whose work they delay and deform. Troublingly but understandably, researchers have come to see dealing with consent as a frustrating, occasionally capricious rite of passage instead of a means of reflection on how to maximize subjects’ well-being.
How did this come to be? Bioethicists like to tell a self-aggrandizing story of how IRBs grew out of deeply felt concerns for human research subjects. But in her excellent 2012 book, Behind Closed Doors: IRBs and the Making of Ethical Research (University of Chicago Press), Laura Stark concluded otherwise: “It is more historically accurate…to say that the invention of bioethics as a profession and the invention of expert review are two parallel stories with one common cause: medical researchers’ concerns over their legal liability in clinical studies and clinical care.”
In short, the National Institutes of Health (NIH) was worried about being sued and consequently established the system researchers of human subjects must contend with today. But a system built on liability risk management is not a system terribly interested in intellectual freedom. Such a system attends to the administration’s and the institution’s financial and legal needs first, with subjects’ and researchers’ needs standing subordinate.
As if we needed more proof that IRBs are about protecting their institutions first and human subjects second (or third, or fourth), those swept into the regulatory process soon notice that what does seem to lower the IRB bar in many cases is money: the bigger the grants, the better oiled the process. I documented this with an investigation of one particularly appalling case of a prenatal drug "study" run for thirty years without meaningful data collection, but plenty of other protection failures exist. In these cases, IRBs protected their institutions' interests, not those of the patients who became subjects in dangerous and, in some cases, deadly experiments.
In her book, Stark was able to document how IRBs really work by interviewing members of, and analyzing the processes of, three anonymized university IRBs. Her efforts demonstrated that, while IRB members attempt to put forth a unified front that makes their decisions look objective and rational, the reality is that their work is idiosyncratic, reflective of who happens to be serving that year. This means that what you contend with when you go through an IRB depends not only on which institution's IRB you have to work with, but on who is on the board at that moment.
Among the many problems researchers face is that IRBs typically treat all subjects as generic and all regulatory traditions as inviolable, sometimes refusing to allow, for example, the naming of subjects even when they want to be named. This is the topic of an article in this month's issue of inquisitive, the new, hopefully thought-provoking periodical from Heterodox Academy. In it, University of Chicago neurobiologist Peggy Mason relates her own experiences working with IRBs and her frustration over not being allowed to appropriately recognize and honor the neurologically one-of-a-kind subjects she studies.
In her essay, Mason nails the way that, among other significant problems, IRB members typically lack a disability consciousness, and so they treat people with illnesses and disabilities as if they are incompetent children—a form of overcompliance. This is ironic at best, because the IRB-required consent forms assume that the persons signing are mature, well-educated, and savvy.
To be clear, the cost of the current IRB system is not just frustration for subjects and researchers. Besides often not ensuring true informed consent, IRBs also sometimes unreasonably delay, deny, or deform research, ultimately at the cost of new knowledge that could help subjects.
Usually, when we talk about constraints on open inquiry, we find ourselves talking about ideological siloing, disciplinary tribalism, and the like. But it’s worth recognizing other forms intellectual constraint may take, including overcompliance, governmental overreach, and regulations that are allegedly about the protection of the vulnerable but seem to be more about building liability shields.
View the entire December 2024 issue of inquisitive here.