IRBs and Revolutions
A Reply to Tom Steinfatt's Commentary in CRTNET #6403

Published in CRTNET #6407

(See also my subsequent post: IRB as Resource)

I would agree, based on this discussion, that the IRB process is broken on many campuses. Whether the problem is reviewing work that nobody needs to review, allowing students to do, without review, research that faculty cannot do without review, or engaging in prior restraint on publication, but not on research itself, there are clearly problems in the process. The original purposes behind IRBs were laudable and remain so. The fact is that we, as social scientists, have done experiments that should have entailed more meaningful protections of subjects. Experiments involving electric shocks and the purposeful role manipulations of guard/prisoner experiments have had lasting effects on subjects that experimenters were not prepared even to recognize, never mind deal with. Based on these experiments alone, I think the goal of at least making an experimenter think about what effects a manipulation might have before performing it IS a good thing, and I don't have a problem with a paperwork process that forces reasonable introspection.

The question for me is not whether there should be some sort of institutional review of some subset of research, but exactly what scope or purview that institutional review should have. A danger in the IRB process, as with any bureaucratic process that entails power over others, is that too much oversight can be far worse than the disease it claims to cure. Once established, bureaucratic processes can take on something of a life of their own, inevitably opening the door to abuses based on politics, ideology, personal agendas, petty jealousies, and other things that have long existed in and around university campuses (and all other forms of organization/institution). One didn't need federal involvement for such things to be an issue for early IRBs, and I disagree that the Feds are the problem so much as a symptom. They may be pointed to as the problem (e.g. "we have to do this because ..."), but it is plain from the discussion here that the Federal requirement provides a fair amount of latitude in how IRBs set their purview and exercise their power.

My first experience with IRBs suggests that, before there was any Federal oversight or requirement, they were already out of control in asserting oversight over types of research that should never have been within their purview. Federal involvement simply opened the door for a broader range of agendas to affect a larger number of campuses, many of which instituted IRBs to satisfy a federal requirement rather than to provide the reasonable circuit breaker that, I believe, was their original intention. It also provided a convenient way for an IRB to punt its responsibility for setting realistic boundaries on its purview. In the end, this is what is at issue.

What, then, should be the scope of an IRB? I will contend here that IRBs should be constrained by three issues: "manipulation of subjects", "professional responsibility", and "reasonable prospect of harm". It seems to me that these have been misplaced in the evolving IRB process, and it further seems that a great deal of harm to the research process would be avoided if IRBs only inserted themselves into the review of specific research proposals after two fairly specific questions had been answered in a particular way by the researcher. I admit to being somewhat out of touch with some of the specifics of IRB oversight these days. I don't know what federal rules may apply now (a posting on the specifics of Federal constraints would be valuable here ... hint hint). Lacking such context, I don't know how reasonable the following statements may be. I'll make them anyway, as they address the issues that should be at the core of an IRB process.

The role of an IRB should be to ensure that the researcher who is performing a "manipulation of subjects":

  1. has thought through the question of potential harm to subjects;
  2. understands their personal responsibility for the outcomes of such manipulations;
  3. has, where there is any "reasonable" prospect of harm to subjects, included assessments that can detect such harms if they occur; and
  4. has ensured that, where a prospect of harm to subjects might open the university to legal action, the university has had an opportunity to assess the risks and make appropriate decisions.

Points three and four should be the principal role of an IRB review, and no review should ever be conducted unless an experiment presents risks that require such assessments. This is important. A university needs to be willing and able to deal with any harms that are detected in the course of a manipulation of subjects. It should also have the right, through the agency of the IRB or some other mechanism, to say no if the probability of harm is substantial. That, however, should be the limit of an IRB's responsibilities. If research does not involve a manipulation of subjects that carries a prospect of harm, and the researcher is willing to take personal responsibility for declaring that this is the case, the IRB should simply never be involved in oversight of that research beyond perhaps collecting some trivial paperwork.

I propose, then, that the IRB process really starts with two questions which, if answered in specific ways, should excuse the IRB from further responsibility.

The first question is: "Does this research involve manipulation of subjects, either physically, socially, or psychologically?" If one replies "no" to this question, that should be the end of any IRB involvement in the study.

I note here that a survey is not, nor has it ever been, a manipulation. IRBs should simply never be in any game that does not involve "manipulation" of subjects. While I am sure it is possible to construct a survey in which a subject is manipulated, the question does in fact cover that case, and I'm fairly sure that, for surveys, the six-sigma answer to this question would be "no". Some may object here that surveys need to protect subject privacy, and I certainly agree that such is the case, but researchers were more than good at preserving subject privacy long before they were required to "disclose" the ways in which data might be used and to have subjects sign off on having read the disclosure. In truth, the only reason researchers have for maintaining any specific personal identification information (e.g. name, address, social security number) is to ensure that one doesn't collect multiple cases from the same subject. There are a variety of double-blind ways of managing that without tying identity to the data, and most were widely practiced before IRBs became an issue on most campuses. It is reasonable to set a "you must" standard here, but there is no reason to spend time reviewing proposed studies over the issue.

I have an easier time imagining a participant observation of some sort that involves manipulation of subjects. Expectancy violation studies come to mind as a place where this might be an issue. A "bumping into people" study probably would not pass the standard of this first question, and neither would a participant observation that requires the researcher to engage in criminal activity in order to engender trust; in both cases, though, the question covers them. Many participant observation studies, including all that I've done, do not, by any reasonable standard, entail manipulating subjects physically, socially, or psychologically. Such studies do not require an IRB's oversight.

I note, in this regard, that many experiments pass this test as well. Consider, for example, the first study I did that required an extended IRB process: we simply had subjects read interaction fragments and make assessments of the interaction and its participants. The experiment was in the interaction fragments, which were varied as part of the experiment. The materials in the experiment were manipulated, but the subjects never were. Note the key distinction here. Manipulation of materials in the presence of subjects is not the same as manipulation of subjects. Most social science experiments (especially in communication) entail manipulation of materials rather than people. Because subjects are not manipulated in any way, harm is not a reasonable possibility. IRB oversight is therefore not necessary.

This is an important distinction. The social science experiments that raised the kinds of issues IRBs were formed to look at, including the electric shock and guard/prisoner experiments, were manipulations of people rather than materials. A subject was told "you are a prisoner" and treated that way. A subject was told "you are a guard" and asked to act that way. A subject was directed to administer an electric shock whenever a question was answered in a particular way. Subjects were, in some cases, affected by these experiments. Big surprise (at least in retrospect). Such experiments should be subject to introspection as to potential harms, and IRBs can be a useful mechanism for aiding such introspection and evaluating its results. I note, in this regard, that student manipulations are in no way less potentially harmful than manipulations done by "trained professionals". Indeed, they are potentially more harmful precisely because they are not being done by "trained professionals" (i.e. if harm should occur, it is less likely to be recognized as such and properly dealt with). Any school that simply punts the student research question has, in my assessment, a dysfunctional IRB process.

The second question an IRB should ask, and it should only ask it if the answer to the first question was yes, is "Can you, on your personal responsibility, reasonably state that this manipulation entails no prospect of harm to subjects?" If the answer to this question is yes, the IRB should probably excuse itself from any oversight of the study. I note here that there are lots of reasons why a researcher might answer yes, starting with a reasoned belief that the study cannot harm subjects. Even if the only basis is that the researcher is willing to take personal responsibility for the implications of being wrong, the IRB should probably get out of the way. If a reasonable cross-check is desired at this point, a witnessed sign-off by a colleague who is not a party (direct or indirect) to the study, attesting that the study has been reviewed and that there is no reasonable prospect of harm to subjects, should suffice to take the IRB out of the picture.

If the answer to this second question is no, however, the IRB can serve an important role in:

  1. helping the researcher to fully assess the risks of the experiment;
  2. ensuring that there are reasonable fail-safes in place that will detect harms if they occur;
  3. ensuring that resources are lined up to handle remediation of problems if they occur; and
  4. ensuring that the university's interests are protected, up to and including being able to say no if the risk to subjects, and to the university, seems too great.
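
To pull the pieces together, the triage I am proposing amounts to a simple decision procedure. The sketch below (in Python, purely as an illustration; the function and parameter names are my own invention, not drawn from any federal rule or campus policy) encodes the two questions and the outcomes described above:

    # A purely illustrative sketch of the two-question IRB triage
    # proposed above. All names are hypothetical; this encodes the
    # argument, not any actual regulation or campus policy.

    def irb_triage(manipulates_subjects: bool,
                   researcher_certifies_no_harm: bool) -> str:
        """Return the level of IRB involvement a proposed study warrants."""
        # Question 1: does the research manipulate subjects physically,
        # socially, or psychologically? If not, the IRB is done.
        if not manipulates_subjects:
            return "no IRB involvement (trivial paperwork at most)"
        # Question 2: can the researcher, on personal responsibility,
        # state that the manipulation entails no prospect of harm?
        # A witnessed sign-off by a non-party colleague is a reasonable
        # cross-check at this point.
        if researcher_certifies_no_harm:
            return "no IRB oversight; researcher/colleague sign-off on file"
        # Otherwise the IRB does its real work: assess risk, require
        # fail-safes that detect harm, line up remediation resources, and
        # protect the university's interests (including saying no).
        return "full IRB review"

    # A survey manipulates nothing, so it exits at question 1:
    print(irb_triage(manipulates_subjects=False,
                     researcher_certifies_no_harm=False))

The point of the sketch is only this: everything an IRB does beyond the final branch is, in my view, outside its reasonable purview.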

The problem is not that IRBs are a bad idea. It is that they have gone far beyond their reasonable purview. If a revolution is what it takes to correct that, count me in, but I'm not convinced that the problem cannot be resolved with reasoned dialog.

Davis Foulger
Oswego State University
Oswego, NY