Last fall, it was revealed that a trio of researchers — the philosopher Peter Boghossian, the mathematician James Lindsay, and the medieval-studies independent scholar Helen Pluckrose — had perpetrated what they viewed as a spiritual successor to the infamous 1996 Sokal hoax, in which the NYU physics professor Alan Sokal had a jargon-laden nonsense article accepted and published by a prestigious humanities journal. The trio sent out a bunch of ridiculous articles to a number of journals within “grievance studies” fields, or areas of academia ostensibly concerned with drumming up grievances on behalf of the supposedly powerless against the supposedly powerful. (Disclosure: I appeared on a podcast co-hosted by Pluckrose.)
Of the 20 articles the trio submitted, seven were accepted (after various revision and editing processes, as is the norm with papers accepted by academic journals). One was a feminist-oriented revision of a Mein Kampf excerpt; another, ostensibly based on hours of observation, argued that “dog parks are rape-condoning spaces and a place of rampant canine rape culture and systemic oppression against ‘the oppressed dog’ through which human attitudes to both problems can be measured” — that one earned recognition for excellence from the feminist geography journal in which it was published, Gender, Place, and Culture. (That was also the paper that blew the hoaxsters’ cover prematurely — the Wall Street Journal’s Jillian Melchior saw it, grew suspicious, and ended up breaking the story.) The general success the trio had in getting these papers accepted, the perpetrators argued in their summary of the affair in Areo, which Pluckrose edits, “shows that there are excellent reasons to doubt the rigor of some of the scholarship within the fields of identity studies that we have called ‘grievance studies.’”
When the story broke, it sparked a massive controversy that seemed to reinforce most observers’ ideological tendencies. Some, particularly those skeptical of “grievance studies” or “postmodernism” or “SJWs” or whatever else, viewed it as the latest in a long line of emperor-has-no-clothes moments within certain areas of scholarship, dating back to Sokal’s article. It was also seen as a worthy sequel to Boghossian and Lindsay’s more modest “conceptual penis” hoax from 2017. Others, particularly those who view such claims as thinly veiled excuses to attack leftist scholarship concerned with the plight of marginalized people, highlighted the fact that Boghossian’s team got plenty of rejections, that their “grandiose conclusions overstate the project’s scope and the extent of its success” (as Slate’s Daniel Engber put it), and that there was a mean-spirited tone to the whole thing, which, they argued, stemmed more from ideological hostility to gender studies and associated fields than good-faith critique.
The Chronicle of Higher Education ran a mini-forum showcasing a variety of views on the subject. “The entire force of their stunt lies in the fact that they managed to get several satirical papers published,” wrote the University of Washington biologist Carl T. Bergstrom. “But it makes no sense to judge the health of a field by looking at what an insincere author can get through peer review.” On the other side was Yascha Mounk, a Harvard lecturer in government, who condemned the circling of the academic wagons and what he viewed as unfair attempts to undermine the hoaxsters. “[E]ven if all of the charges laid at the feet of Lindsay, Pluckrose, and Boghossian were true, they would have demonstrated a very worrying fact,” he wrote. “Some of the leading journals in areas like gender studies have failed to distinguish between real scholarship and intellectually vacuous as well as morally troubling bullshit.”
And that was that, for a while — the hoax continued to percolate online, kindling the academic culture wars, but it had generally faded from public view until earlier this week. That was when a startling-sounding revelation breathed new life into the controversy and instigated a fresh firestorm on social media: Portland State University appears poised to sanction Boghossian, an assistant professor there, for research misconduct as a result of the hoax. Boghossian, in his university’s view, failed to get Institutional Review Board (IRB) approval for his research and fabricated data when he and his team claimed, for their dog park article — I’ll, erm, defer to the language from their Areo write-up — “to have tactfully inspected the genitals of slightly fewer than 10,000 dogs whilst interrogating owners as to their sexuality.” (Boghossian has publicly posted the documents he got from PSU, which lay out the charges in detail, here.)
In a YouTube video he published last Saturday, Boghossian, appearing in a bathrobe in the first segment for some reason, reads out one of the emails PSU sent him, threatening an investigation on IRB grounds. “I think that they will do everything and anything in their power to get me out,” he says. “And I think this is the first shot in that.” A conference call with Pluckrose and Lindsay ensues, and Pluckrose, being filmed elsewhere, explains, “They can’t say that we needed IRB approval for the ones that we published in journals, because there weren’t any real human subjects. So they will have to say that the IRB approval we needed was treating [journal] reviewers as human subjects without passing that through an ethical board. Hopefully, we can say that there wasn’t any way to get informed consent for that — it just isn’t a possibility with this kind of thing.”
One can forgive Boghossian for feeling as though his university is out to get him. Yes, his hoax was intentionally provocative, and yes, he and his colleagues said a lot of mean things about other academics in the magazine article and YouTube videos connected to their project. But certain aspects of the backlash to the grievance-studies hoax come across as over the top, as well. Perhaps most notably, a group of 11 PSU professors and one grad student published an anonymous letter in the Vanguard student newspaper, accompanied by a weird and menacing-looking image of Boghossian with a Pinocchio nose, in which they accused him of various misdeeds both related to the hoax — “When supposed scholars repeatedly engage in fraudulent behavior violating acceptable norms of research in any discipline, we have to start asking what the purpose is” — and separate from it, like inviting James Damore to an event at PSU. Strangest of all, perhaps, was the stated justification for anonymity: “We have opted to communicate our concerns through a collective identity rather than individually,” explains the final section of the article. “Boghossian has not only indicated his less-than-collegial attitude through his hoaxes, but has actively targeted faculty at other institutions. None of us wish to contend with threats of death and assault from online trolls.” The linked-to website contains no evidence that Boghossian has “targeted” anyone (his name doesn’t even appear); nor is the rather astonishing claim that to respond to Boghossian under one’s own name might mean risking “assault” defended anywhere. It’s unusual for academics to so vehemently attack one of their own while revealing their titles but not their names, yet the letter seemed to neatly capture the tenor of this debate at PSU, a very left-leaning campus.
The question of whether Boghossian was the target of unfair heat from his academic community, however, is different from the question of whether the present investigation is unfair. So: Is it? Are administrators at PSU railroading one of their own professors for daring to stand up to “SJW” groupthink? Or is this a standard ethics investigation? The short answer is no, there’s no railroading. When you cut through the noise of social media and the raging culture war in which this incident is enmeshed, and focus instead on how universities tend to handle this sort of thing, PSU’s investigation, on its own, actually offers very little evidence of a witch hunt or unfair treatment of Boghossian. The long answer is a bit more complicated, and ties into a broader controversy within academia.
It’s impossible to understand this case without understanding a little bit about IRBs themselves. The IRB system is, at root, an arrangement between research institutions — both universities and others — and the federal government. To receive federal research funds, which are the lifeblood of many research bodies, those bodies must agree, in return, to follow certain ethics procedures, particularly with regard to any research involving human subjects. That’s what IRBs do. Every research institution that receives federal funds has its own IRB, and most insist that any employee conducting research at that university have their research plan cleared by the IRB beforehand.
It might sound simple, but it’s endlessly complicated. IRB protocols used for social-science research grew out of those used for biomedical research, and there are key differences between the ethical implications of, say, obtaining informed consent to give someone an experimental medication and performing psychological experiments that might cause temporary embarrassment among their subjects. There’s a cacophonous discussion going on about whether and to what extent IRB protocols are fair, and it’s fueled by the regular emergence of over-the-top stories of IRB overreach. Some scholars, like the George Mason University professor Zachary Schrag, have dedicated a great deal of time to surfacing these stories and calling for reforms to the system — he’s the author of Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965–2009 and the Institutional Review Blog.
One of the most common critiques of IRBs is that they are far too risk-averse when it comes to their conceptualization of potential harm to human subjects, and that they introduce unnecessary bureaucratic delays that stifle research and researcher ingenuity. In 2017, Scott Alexander, a psychiatrist and popular blogger, wrote an instant-classic account of his misadventures with a hospital IRB when he wanted to do a study to see if a commonly used questionnaire that supposedly tests for bipolar disorder actually works as advertised. To take one of many examples of the weird demands levied on him as he tried to set up the study, Alexander’s IRB insisted that he include, when administering the questionnaire, a paragraph about the possible “risks” of doing so — even though there were none, even though asking patients the sorts of questions he planned on asking for his study “is the sort of thing that goes on every day in a psychiatric hospital,” and even though telling a patient at a psychiatric hospital you are about to do something that could harm them could, for obvious reasons, cause problems (especially when that’s a false claim anyway).
Schrag and other advocates concerned with this issue have collected countless other examples. Soul-melting IRB experiences — “It took over 5 months & 9 submissions … to have our recruitment materials displayed at all participating colleges. 5 months of ethics review in order to do interviews w/15-20 adults on an uncontroversial subject” — are not unusual among researchers, though there’s significant variation between different IRBs. Now, things are set to get a little better — revised federal IRB rules, the result of an eight-year process kicked off by the Obama administration to shore up the system, are set to go into effect later this month and will expand the types of social-scientific experiments that are exempt from IRB review. But Schrag said “the revisions ended up not being particularly bold, and nothing in them would exempt audit studies requiring deception” like the one the grievance-studies hoaxsters pulled. (I’ll explain why the audit-studies angle is important in a bit.)
For the purposes of Peter Boghossian’s case, three facts about IRBs matter a great deal: “study” is defined rather broadly in the federal guidelines; possible risks to humans — even ones that non-IRB nerds may view as negligible — are taken very seriously; and IRBs tend to look especially closely at studies involving deception. For these and other reasons, each of the four IRB experts I spoke or emailed with agreed that yes, the grievance-studies hoax needed IRB approval; yes, it clearly involved human subjects; and no, PSU’s decision to investigate it on that front cannot be reasonably viewed, on its own, as politically motivated. In other words: This particular aspect of the university’s response smells more like a standard reaction to improperly vetted research than a witch hunt.
First, the definition of “study”: As PSU explained to Boghossian in a document it sent him December 17, the university determined that his work met the definition of “study” as defined by the Department of Health and Human Services in language that reads — this isn’t included in the letter itself — “a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge.” All the IRB experts I communicated with said they agreed that the grievance-studies hoax qualified as a result of its basic structure, and some pointed to the language the hoaxsters used themselves — calling it a “study” in their Areo article or “a satirical scholarly audit” in a press release published earlier this week — as extra proof of that.
Laura Stark, a historian at Vanderbilt and the author of Behind Closed Doors: IRBs and the Making of Ethical Research, further explained in an email that “it was human-subjects research according to federal regulation … The human subjects being the journal editors.”
Crucially, it does not matter that the hoaxsters didn’t attempt to publish their final results in a peer-reviewed journal. “Publishing in a magazine that’s not peer reviewed doesn’t matter if they’re reporting on their research,” said Celia Fisher, director of the Fordham University Center for Ethics Education. All that matters is that Boghossian is an employee at PSU, and that he conducted what the university deemed to be human-subjects research based on a plain reading of how that term is normally defined for this purpose.
Now, in the video Boghossian released earlier this week, Helen Pluckrose expresses some skepticism that what they were doing was really human-subjects research, but again, all the experts I spoke with disagreed. First of all, “if they believed that this was not human-subjects research, the process would have been to submit to the IRB an application that says I believe this research that I’m doing is exempt,” said Fisher. “And the IRB makes the determination if it’s not human-subject.” It’s not their call, in other words — it’s still the IRB’s.
But had Boghossian sought an exemption, he likely would have been rebuffed, anyway. IRBs tend to be very, very conservative about any experiments that could have any impact on human beings. It’s an area where academia operates according to norms quite different from those of other fields. Journalists, for example, would have no compunctions about exposing shoddy or unethical work on the part of a local business. IRBs tend to be extremely careful about this sort of thing — the assumption is usually that any small possibility of harm needs to be justified and, if possible, protected against. In a case where an experimental subject could be exposed to negative public scrutiny, an IRB will go to great lengths to ensure their anonymity, even if that means withholding or blurring certain details from the study write-up. (IRB strictness about harm and anonymity can explain why anyone who reads a lot of psych studies will be familiar with researchers explaining that a study took place not at “Boston University” but at a “large northeastern university,” and so forth.)
In the case of the grievance-studies hoax, the potential for harm came in the form of reputational damage and humiliation to journal editors and reviewers. And one decision the hoaxsters made — allowing accepted papers to actually be published rather than notifying the journals so they could be yanked before they were out in the world — neatly captures the sorts of ethical discussions often spurred by IRBs. “That’s what happens at the IRB,” said Elisa Hurley, executive director at Public Responsibility in Medicine and Research, an organization which offers ethics guidance to researchers. “The IRB’s job is essentially to facilitate research that is consistent with the regulations or that does in fact protect and respect the research subject involved.” So had Boghossian and his colleagues sought IRB approval, there’s a good chance that one condition for obtaining that approval would have been something like, If any of these papers are accepted, you can’t let them actually be published — you have to notify the journal immediately. As things turned out, the journals in question were identified by the hoaxsters, which did open up the editors to public scrutiny, a form of “harm” from the point of view of an IRB. Unnecessary harm, too, since an acceptance note from a journal is sufficient information to deem a study as having been accepted — it doesn’t need to actually appear in the journal.
Again, it doesn’t matter whether you or most of the rest of society believe this to be an overly sensitive kid-glove approach. This is the well-established, risk-averse way in which IRBs do business, and it isn’t a mystery to anyone who has dealt with one. If it’s too conservative a stance, that’s a problem with the system — not with PSU’s administration of the existing rules.
It’s worth noting that Stark dissented slightly from the other IRB experts I consulted on one front: She thought that while the decision to investigate Boghossian’s lack of IRB approval was perfectly legitimate and not, on its own, fishy, it was also possible that it was influenced by politics. “Universities and IRBs definitely make decisions about what issues to pursue and how aggressively to pursue them depending on the political climate on campus,” she explained in an email, “which is part of a broader national and international context.” In her view, it’s possible the investigation was both political and legitimate on the merits. Fisher, on the other hand, didn’t see PSU as having quite so much choice in the matter, arguing that the university “is in a very tough position, because not to pursue this would be very damaging in terms of other studies that should be studied for IRB review, and for other people who end up publishing fabricated data. It’s a very difficult precedent for a university to set that it’s okay to publish fabricated data, even if this is in a context in which, if it had gone through an IRB and modified in some ways,” the design might have been ultimately accepted.
Finally, as a so-called audit study — that is, one designed to test how different real-world institutions respond to the same or similar inputs (in this case, research articles, but in more famous cases, things like résumés randomized to have black- or white-sounding names) — Boghossian, Lindsay, and Pluckrose’s scheme was more likely than a non-deception study to raise IRB eyebrows. “All audit studies require deception, and many social scientists who would like to see only a limited role for IRBs still consider IRB review appropriate when deception is required,” said Schrag. Deception is simply seen as an ethically fraught tactic, so even IRB critics aren’t necessarily in favor of them backing off entirely in situations where deception is involved.
This is yet another reason why the researchers probably should have realized their study required IRB approval, or an exemption. And yet the video Boghossian published seems to show them not understanding this. Pluckrose at one point mentions the impossibility of getting informed consent from journal reviewers — the implication being that to do so would be to blow the cover of the experiment. Again, though, that’s the point of an IRB: to gain permission to deceive, or to come up with some sort of work-around. The choice isn’t necessarily between obtaining informed consent in a manner that would blow the experiment and not running the experiment at all — plenty of IRBs have approved plenty of audit studies. But the three hoaxsters do not, generally speaking, come across in the video as all that familiar with IRB rules, which shouldn’t necessarily come as a surprise given that none of them comes from an academic field that has much involvement with human-subjects research. “We didn’t try to publish this in a peer-reviewed journal,” says Lindsay at one point. “We didn’t — I don’t know what the rules of this are.”
The research-ethics experts I spoke with expressed a similar degree of agreement on the question of whether what the “grievance studies” hoaxsters did constituted data fabrication: yes, it did. This is another point where the gap between the average layperson and IRB nerds comes into play. It does feel weird, after all, to read PSU’s letter to Boghossian on this, which says, in effect, We determined, from your admission that you fabricated data for the dog study, that you fabricated data for the dog study. It’s quite clear that the hoaxsters planned, all along, to reveal the fact that they had fabricated data.
But according to the rules, that’s a moot point. “False data was knowingly submitted for publication and was in fact published,” said Hurley. “Intent doesn’t really matter. However, again, had this gone prospectively to the IRB, where the plan was to use deception, which is allowed by our regulations if certain conditions are met — that could have been a different conversation.” Fisher, too, highlighted the fact that the study was allowed out into the world as something that could cause trouble for Boghossian — and which strengthens the case for a data-fabrication allegation. “One of the problems that occurs was they allowed this to be published, and therefore I do think it’s appropriate to look at fabrication of data,” she said. “Because even though the journal now understanding that it was fabricated withdrew the article, it’s still out there in the public sphere. So it’s misleading to those who would take the data as being valid data. So I think investigating this as fabricated data is appropriate.”
Ivan Oransky, who runs Retraction Watch, said in an email, “I don’t see the difference, in terms of the definition.” Data fabrication is data fabrication, appears to be the consensus.
There is certainly a chance that, had Peter Boghossian approached PSU’s IRB and sought permission to conduct this study, it would have caused him and his colleagues all sorts of difficulties. Maybe the study would have been delayed for months. Maybe it would have been blocked entirely. It’s entirely possible that all the worst excesses of overzealous IRBs would have raised their heads. There’s no way to know. But what seems clear is that there was no good reason, in light of what IRBs are for and how they operate, for Boghossian to think his study sat outside the purview of his local IRB.
Letters of support for Boghossian have been pouring in since this story broke, penned by academics both renowned — “This strikes me (and every colleague I’ve spoken with) as an attempt to weaponize an important [principle] of academic ethics in order to punish a scholar for expressing an unpopular opinion,” wrote Steven Pinker — and lesser known. Many of these letters echo the complaint that the present investigation is politically motivated, and many echo Boghossian’s concern, expressed in the video, that his livelihood is at risk — that PSU could fire him.
There’s no obvious evidence that possibility is on the table, and if PSU did go that route, it would likely be seen by many academic-ethics experts as highly questionable. Firing someone for failing to seek IRB approval when it was required would be unusual, argued Schrag. He did mention one case in which University of Queensland researchers were demoted and temporarily banned from research on such grounds, but said, “I am unaware of a faculty member’s being fired for a similar offense, and the Queensland researchers had their rank reinstated after the initial demotion.” Moreover, “in a couple of cases involving serious allegations of abuse in medical experimentation, universities have imposed bans on future research” rather than firing the accused researchers. “The harms involved there were so much more serious than anything Boghossian has done, yet the penalties were still short of an outright firing,” said Schrag. “So yes, I think firing Boghossian would be disproportionate.”
The data-fabrication accusation is probably where Boghossian is more vulnerable, but according to Oransky, while intent doesn’t matter from the point of view of defining data fabrication, it likely would matter as a university debated sanctions. “I think that the intent — what he was trying to do, what he ended up doing — that should all feed into what punishment there is, if any,” he said. Boghossian’s case wasn’t one of a researcher profiting academically off of totally fabricated research — it was, at root, an audit study, and if Boghossian, Lindsay, and Pluckrose had had the foresight to prevent accepted studies from being published, there wouldn’t have been a huge difference between their fabricated data and, say, a fabricated résumé for a more traditional audit study. More broadly, Oransky pointed out that “so few people actually get fired for research misconduct that it actually just seems statistically unlikely.”
So where does all this leave things? Suffice it to say, this is a complicated case. It’s impossible to say that PSU would have pursued the exact same investigation of an equivalent study with a different political valence. But it also seems, with the benefit of a bit of investigation into and knowledge of how IRBs work, pretty obvious that Boghossian was asking for trouble by going ahead and performing this research without at least seeking an exemption. On a lot of the basics, it’s pretty clear cut: His university has some oversight jurisdiction over this research, and is currently exercising it. It might be helpful to separate this out from the broader, noisier debate about the implications of the hoax itself.