Ethics and morality support, and exist in a continuum with, a variety of systems of social constraint. Because other systems of social constraint, including laws, rules, and precedents, formally document resolutions to most of the simple and/or obvious ethical cases, the teaching of ethical decision making frequently operates within the fractally complex boundaries between established ethical principles, undesirable harms, and human values. The primary subject of ethics instruction is not, then, simple rights and wrongs and essential ethical principles, but the ways in which we make reasonable decisions when principles compete, harms are interdependent, and we are forced to choose among less than fully attractive decision alternatives. The best ethical cases expose the complex boundaries in ethical decision making such that students think about how they make forced choices between competing ethical principles, undesirable harms, and their most cherished values. If we want to test the efficacy of ethics instruction in the face of such fractally complex decision spaces, we must develop test instruments that allow us to explore that complexity. One such possibility is proposed here.
Laws, rules, precedents, and ethics each play a particular role in resolving the same fundamental problem: constraining the behavior of individuals and social collectives. The behavior of a person or social collective can, and sometimes does, cause harm to other people and/or social collectives. Laws, rules, precedents, and ethics each exist to reduce the incidence and severity of such harms by discouraging or otherwise constraining the behavior that is associated with them. Figure 1 depicts a cycle of social constraint that describes, in general terms, the process by which harmful behavior is constrained. Behavior inevitably has effects. Where those effects are harmful or otherwise problematic, either to the individual or to others, it is likely that steps will be taken to discourage the recurrence of that behavior. Laws, rules, precedents, and ethics each represent somewhat different paths to such constraint. Laws, in general, are the most formal of these options, and are generally supported by enforcement agents and a judiciary within the context of government. The most obvious and common harms, and those most straightforward to enforce, will often become a matter of law. Rules, in general, are less formal. While they are sometimes promulgated by government, they may be encoded or negotiated within the context of almost any organization or relationship. The harms they cover will often be obvious, given the mission of the organization, but often will not be as readily enforceable as laws usually are.
Figure 1: The cycle of social constraint.
Precedents, by contrast, are generally not formally encoded, but are instead documented as decisions made within the scope of particular cases. Precedents remain formal and, within the scope of particular decisions, enforceable, but the formality is a function of the process by which they are arrived at rather than of a formal statement that constrains behavior. The process, which generally leads to a binding and documented decision made in a courtroom, review board, or other decision venue, sets a precedent only insofar as it can be expected to guide subsequent decisions. The harms covered in precedent are less obvious, and more complicated, than those associated with rules and law. Decisions concerning these harms often involve collisions of multiple principles and multiple possible harms, and require finding a balance between competing principles, interests, and problems.
Ethics and morality are a more difficult territory. At a superficial level, they are concerned with simple dichotomous principles like right versus wrong, good versus bad, virtue versus defect, and moderation versus excess. Law, rules, and precedent are also concerned with these principles, however, and cover the simple and obvious cases. Each entails formal enforcement mechanisms that are not associated with ethical and moral decision making. Ethics enforcement, by and large, is informal. Unethical behavior may cause others to complain, but generally won't have legal implications unless and until it crosses the line into illegality. The subject matter of moral and ethical decision making may be the same kinds of collisions of harm, principle, and boundaries that are the subject matter of precedent, but the focus of ethics is proactive rather than reactive, and necessarily emphasizes difficult issues that are not covered by existing law, rule, and precedent.
Primary among these difficult cases are collisions of principle or harm: situations where two or more desirable principles are opposed such that one can only be satisfied at the expense of another, or where two or more undesirable harms intersect such that avoidance or reduction of one harm necessarily creates or worsens another. Annette Bening's character in the movie "The Siege" expresses this well at a celebratory moment early on in the film (I paraphrase): "Telling right from wrong is easy. It's telling which wrong is more right that's hard." In a perfect world there would be no difficult conflicts of principle or harm. It would be possible to do good without ever causing inadvertent harm. It would be possible to avoid all harms. Reality, however, allows for no such perfection. Any minimally complex collision of different forces, even forces that would seem to support and reinforce each other, results in complex boundary conditions that will seem inconsistent and/or unpredictable.
Figure 2: Layered Sand subjected to three linear stresses, including a fixed end, a shaped base, and a movable end that is pushed in to emulate tectonic motion. Note the complex "fractal" patterns that occur at the intersection of these three forces. Photograph, of an exhibit at the Rose Planetarium of the American Museum of Natural History in New York, by the author.
Consider, for instance, Figure 2, a photograph of a very special sort of "sand art" which researchers use to model the behavior of "soil" when it is subjected to the kinds of stresses associated with the collision of tectonic plates (the forces that cause earthquakes). Layers of colored sand were poured evenly into a large glass-walled container. Deep in the container we see one of the three forces, a subducting mantle. One end of the container is fixed in place. The other end of the container can be pushed "in" such that one continental mass is pushed into, and up over, the subducting base of the other. The result here, as in real world collisions of tectonic plates, is the formation of mountains (or at least a mountain) that can be seen in Figure 2. The more interesting result, however, is the patterns the sand forms as these forces interact. The layers of colored sand don't mix together. Instead they shape themselves into wildly varying "fractal" patterns. A pattern is regarded as "fractal" when it is infinitely complex (i.e. no matter how closely you resolve it, there is always another layer of complexity to be found) and self-similar (i.e. patterns recur within and between levels of resolution). Fractal complexity occurs routinely in nature when "simple" forces are set in opposition to one another. Indeed, Figure 3 shows a fractal pattern produced by shining light through a pyramid of four shiny round Christmas ornaments.
Figure 3: A complex "fractal" pattern created by shining light through a stack of four Christmas Ornaments. Source: Four Spheres (Sweet, 2000).
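The two defining properties of a fractal pattern noted above, infinite complexity and self-similarity, can be made concrete with a small computational sketch. The example below (the author's illustration, not drawn from the sand or ornament experiments) uses the classic middle-thirds Cantor set: repeatedly removing the middle third of every interval produces structure at every resolution, and each piece is a scaled copy of the whole.

```python
# Illustrative sketch: the middle-thirds Cantor set exhibits both fractal
# properties described in the text. Repeatedly remove the middle third of
# every interval of [0, 1]; detail persists at every level of resolution,
# and each remaining piece is a scaled-down copy of the whole.

def cantor_intervals(depth):
    """Intervals of [0, 1] remaining after `depth` middle-third removals."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        refined = []
        for lo, hi in intervals:
            third = (hi - lo) / 3.0
            refined.append((lo, lo + third))      # keep the left third
            refined.append((hi - third, hi))      # keep the right third
        intervals = refined
    return intervals

# Infinite complexity: every level of resolution doubles the detail.
for depth in range(4):
    print(depth, len(cantor_intervals(depth)))   # 1, 2, 4, 8 intervals

# Self-similarity: the left portion of level d+1, rescaled by 3,
# reproduces level d (to within floating-point error).
level2 = cantor_intervals(2)
left3 = [(3 * lo, 3 * hi) for lo, hi in cantor_intervals(3) if hi <= 1 / 3 + 1e-9]
assert len(left3) == len(level2)
assert all(abs(a - c) < 1e-9 and abs(b - d) < 1e-9
           for (a, b), (c, d) in zip(level2, left3))
```

No matter how deep the recursion is carried, the same two checks hold, which is precisely the "always another layer of complexity" and "patterns recur between levels of resolution" pairing described above.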
Principles and harms are the forces of morality and ethics. Considered in isolation, ethical principles are easy to live by; harms, insofar as they can be anticipated, easy to avoid. When pushed together, however, these simple forces interact in the same manner as simple objects and forces in the real world. They interact to create "fractally" complex decision spaces. Consider, for instance, the four principles protected by the first amendment of the U.S. Constitution: freedom of speech, freedom of assembly, freedom of religion, and freedom of the press. At first glance these freedoms appear as similar and compatible as four identical round and shiny Christmas ornaments. The freedoms of speech and press are very similar, different only in that the former protects an individual speaker and the latter protects a newspaper and its staff. The freedoms of speech and religion are also similar insofar as religion is a kind of speech and speech expresses belief, including religious beliefs. Speech enables assembly and assembly provides a forum for speech. Religious worship often requires assembly and assembly enables religious expression.
Yet for all their similarity, they do sometimes collide, both with each other and with other principles that are protected in constitution, law, and precedent. Indeed, the decisions that surround these collisions appear to be fractal in their range, complexity, and apparent self-similarity. A recent report from the Congressional Research Service documents "major" exceptions to first amendment protections of speech and press (Cohen, 2001). Ten classes of exception to first amendment protection, some initiated in statute but all supported by court decisions, are documented in the report. All entail collisions of principle. In many cases speech collides with the principle of protecting and nurturing our children. In others, the collisions are with public safety, the protection of public lands, the protection of personal reputation, and the protection of public health. Speech that is reported as having no protection includes obscenity, child pornography, and the advocacy of the use of force or law violation. Yet it is easy to identify cases where such speech has been protected or, at least, not prosecuted. Speech that has less than full protection includes defamation, speech deemed harmful to children, commercial speech, broadcasts on radio and television, and public employee speech. Again, it is easy to identify exceptions, and the report does. Two additional classes of complex exception are noted, each of which has at least two conditions attached to the constraint.
The report is hardly complete. It is easy to identify major exceptions that the report does not discuss. Boston, New York, and other cities have, in recent years, witnessed collisions of speech and assembly as the organizers of St. Patrick's Day parades have sought, under the banner of Freedom of Assembly and Freedom of Religion, to exclude gay organizations from marching in their parade. Gay advocates have sued for the right to participate under the banner of freedom of speech. Court decisions have swung both ways on this issue, but assembly has more often won out over speech, generally under the presumption that the same speech could occur within the context of other forums, including gay rights parades. Similar conflicts between speech and assembly have swirled around protests at abortion clinics. The courts have generally split the difference in these cases, with freedom of speech protected by allowance of protests outside clinics, but freedom of assembly protected by the creation of a protest-free buffer zone around the clinic. These collisions of speech and assembly seem to have grown more frequent in recent years. Most recently, the speech versus assembly debate has erupted around Internet parental control software that allows selective exclusion of content from a browser. Defenders note that such software doesn't prevent speech. It simply allows people to decide not to view it. Opponents argue that such protections can as effectively muzzle a speaker as an outright ban on speech might. This collision is not yet an issue of law, rule, or precedent. It remains largely a matter of ethical decision making and the balance of principle and harm.
The notion that the primary subject of ethics is the boundary between colliding principles is hardly a new one. Machiavelli's ethical philosophy is rooted in the notion that principles and human conditions collide, that virtue is not always possible in the face of such conflicts, and that the conflicts should be resolved with the minimal flexing of ethical absolutes. Hobbes' philosophy seeks justice and mutual accommodation in a world in which there is no clear objective truth. Schopenhauer's philosophy asks us to find the least painful middle ground between unavoidable harms. What is different, perhaps, is the recognition of just how complex the borders between ethical principles can be. When speech and assembly collide, speech will sometimes win, assembly will sometimes win, and each will sometimes achieve a partial victory. Step inside a victory for either, and one will find a whole new battle over the specifics of what speech is and is not allowed within the context of the decision. Neither law nor rule nor precedent can ever be detailed enough to cover all of the variations that we will encounter as speech and assembly collide. Inevitably we are left with some level of ethical decision making at infinitely complex boundaries.
The same will be true for any collision of harms and/or principles. We all agree that murder is wrong, but most of us can also agree on circumstances where, whether we call it war, execution, capital punishment, self defense, extreme prejudice, or something else, we feel that killing is acceptable and perhaps even desirable. Accept any of these circumstances, and one can find exceptions inside it. Step into any of those exceptions, and one will continue what is an infinite regression into ever more complex details.
These collisions are the familiar landscape of our ethics classrooms, and our case studies are one of the primary tools through which we explore these collisions. Indeed, such collisions of principle, harm, and/or the interests of different groups are inherent to all of the several dozen case studies that the author has reviewed in preparing this paper (Leslie, 2000; Various, 2002). Discussion of these case studies, and real world decision making in general, is complicated by another factor that exacerbates the problem of collisions of principle: the nature of human values.
A much overlooked "ethical philosophy" is the values-oriented philosophy suggested by the work of Milton Rokeach (1972). In this work individuals were asked to rate and compare a variety of human values on two scales, as shown in Table 1. Two things stand out from this work in the current context. First, Rokeach's research demonstrates strong relationships between the way people rank values and the nature of their beliefs and behavior. Second, and more importantly, small differences in rankings can translate into large differences in behavior. The usual procedure associated with the Rokeach value scales is to ask people to engage in a "forced choice", with values ranked from "most important" to "least important". The results of a forced choice can differ substantially from those of more conventional interval scales, on which we might rate something, for instance, on a five- or one-hundred-point scale from really valuable to not at all valuable, or from important to irrelevant.
Table 1: The Rokeach Value Scales.
It may be, for instance, that a person asked to rate "a sense of accomplishment" and "family security" on a scale of 1 (really, really important) to 100 (not even a little bit important) would rate both values as a 1. When forced to choose, however, that same individual might put "a sense of accomplishment" ahead of "family security". This may seem trivial. If both values are equally important, the fact that a person might marginally favor one value over another when forced to choose could easily seem unimportant. When, however, people are faced with real world trade-offs between competing principles, conflicting harms, and the interests of different groups, microscopic differences in the valuation of one principle or harm over another can result in large differences in the nature of a decision and/or the perception of that decision by others. Where two people agree that two principles are both extremely important, but each ranks a different one as slightly more important than the other, any behavior that treats one principle as more important than the other may seem like a betrayal of principle to the other party.
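The contrast between interval ratings and forced choice can be made concrete with a small sketch. The scenario below is hypothetical (the names, the two job offers, and the tie-breaking rule are the author's illustration, not Rokeach's instrument): two respondents give identical interval-scale ratings, yet their forced-choice rankings differ, and that microscopic difference flips a real decision.

```python
# Hypothetical sketch: identical interval ratings, divergent forced choices,
# and the large behavioral difference that can follow. The job offers and
# decision rule are invented for illustration.

# Interval-scale ratings (1 = really, really important): indistinguishable.
ratings = {
    "alice": {"a sense of accomplishment": 1, "family security": 1},
    "bob":   {"a sense of accomplishment": 1, "family security": 1},
}

# Forced choice: each respondent must order the values; ties are not allowed.
forced_choice = {
    "alice": ["a sense of accomplishment", "family security"],
    "bob":   ["family security", "a sense of accomplishment"],
}

def decide(ranking):
    """Pick the job offer that best serves the respondent's top-ranked value."""
    offers = {
        "prestigious post abroad": "a sense of accomplishment",
        "stable job near family":  "family security",
    }
    top_value = ranking[0]
    return next(offer for offer, value in offers.items() if value == top_value)

assert ratings["alice"] == ratings["bob"]          # identical on the interval scale
print(decide(forced_choice["alice"]))              # prestigious post abroad
print(decide(forced_choice["bob"]))                # stable job near family
```

On the interval scale the two respondents are indistinguishable; under forced choice they make opposite life decisions, and each may read the other's decision as a betrayal of a shared principle.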
The subject of ethics instruction is not, then, the study of simple rights and wrongs, essential ethical principles, or simple moral decision making. It is, rather, the study of how one makes reasonable decisions in the face of the conflicting rights and wrongs, competing principles, interdependent harms, and differing perceptions of what principles or harms should be favored over others when one is forced to make difficult moral and ethical choices. It is the study of the fractally complex boundary conditions we face when principles compete, harms conflict, and different stakeholders have different views about how one chooses between two or more equally important things.
Exposure of these fractal boundaries is the essence of a good ethics case. Indeed, classic ethical dilemmas pose questions in which there is no absolute right answer, and in which the answer can easily change as we learn more about the situation. Schopenhauer's porcupines can never get too close, lest they injure each other with their quills, or separate too far, lest they freeze. Heinz, the husband in Kohlberg's cancer therapy dilemma, cannot save his wife without stealing a drug that he cannot afford and cannot remain honest without his wife dying. The length and detail associated with a case may, as is shown in Table 2, make that case more or less appropriate for a particular purpose, but it does not make it a good or bad case. The rooting of a case in factual reality may make a case more or less appropriate for a particular purpose, but it does not make it better than a fictional case for the purposes of teaching ethics.
| Type | Description | Advantages | Disadvantages |
|---|---|---|---|
| Précis | A brief description, usually no longer than a paragraph, that provides a summary of the ethical conflict. Ideally it identifies who needs to make what decisions under what circumstances while exposing a genuine boundary between two or more competing principles, conflicting harms, etc. | Easily and quickly read; provides enough detail to provoke discussion and debate, but without stealing significant time from the discussion. | Missing details will be filled in by participants. Details will not always be imagined in the same way. |
| Descriptive | A longer narrative description, usually no longer than a page, that describes the context of a problematic moral or ethical decision. Ideally it recreates, as much as possible in the space available, the decision environment a person would actually face in making a decision. | Gives a detailed view of why the decision is problematic. Usually exposes at least two conflicting perspectives such that students can understand why any decision entails some level of harm. | Can present only one or two perspectives. Long enough as to preclude use in many kinds of testing. |
| Thick Descriptive | An article or book length narrative description that describes the circumstances of the ethical conflict in detail. Ideally it identifies all of the stakeholders in the decision, the perspectives and interests that each brings to the conflict, the options that each stakeholder has, if any, and documentation of just how serious the harms associated with each decision actually are. | The detailed and multi-perspectived description gives readers a detailed understanding of why the decision is problematic, and who wins and loses when various decisions are made. | Difficult to use unless the reading is completed in advance of any testing or discussion. |
| Layered or Serial | A series of brief descriptions, each no longer than a paragraph, in which each new description systematically extends and complicates previous descriptions, generally by adding new principles or harms to the existing mix. | Allows participants to observe how their ethical decisions change as the balance of principles and/or harms change. | In addition to the difficulties of Précis cases, requires controlled distribution. |

Table 2: A comparison of case study types.
What matters in an ethics case is that it engages students at the boundaries of conflicting principles and harms, forcing them to think about how they might resolve those boundaries such that harm is minimized or eliminated and principles are maintained unbent. They will not, by and large, be able to do this in well constructed ethical cases, and it is important that students be able to recognize this: the principles they are picking in preference to others, the harms they are allowing as they minimize others, and the precedents that they are setting for themselves and others when they do so. It is also important for them to understand that such decisions are sometimes unavoidable, and that as long as they have done their best to minimize harm, the harms that do occur in these situations are not their fault, but rather the fault of the situation.
The author has turned, increasingly, to serial or layered cases (for examples, see Foulger, 2001a; Foulger, 2001b) as a means of systematically exploring these complexities. The idea, in such cases, is to start by creating a simple scenario that opposes fundamental principles, and then systematically complicate it by either bringing new forces into play or strengthening one or another of the existing principles. Foulger (2001a) begins by putting the right to privacy in opposition to freedom of the press (with other principles in minor opposition). Subsequent complications entail issues of truth and supposition, unfounded accusation, increased invasion of privacy, outright fabrication, and exploitation of events for personal gain. Foulger (2001b) puts employee privacy in opposition to the right of employers to monitor resources, brings that opposition into sharper contrast, and then introduces such issues as theft of proprietary assets, whistle blowing, truth, and the power of money. In the process the student is exposed to an increasingly complex set of principles and harms and an increasingly difficult set of ethical decisions. Not surprisingly, given the nature of forced choice between equally valued principles, the level of agreement among otherwise well thought out decisions declines as the number of opposed principles grows. At some point, there simply is no absolutely right answer.
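The structure of a layered case lends itself to a simple representation. The sketch below is the author's own illustration (the scenario text and principle labels only loosely echo Foulger, 2001a, and are not drawn from it): each layer extends the scenario and adds principles or harms to the mix, so the set of forces in play at any layer can be enumerated, and a student's decisions can later be recorded against that growing set.

```python
# A minimal sketch of a layered/serial case representation. Each layer is a
# one-paragraph complication plus the principles or harms it adds; the case
# can report which forces are in play once a given layer has been presented.

from dataclasses import dataclass, field

@dataclass
class Layer:
    scenario: str             # one-paragraph complication of the case
    new_principles: list      # principles and/or harms this layer adds

@dataclass
class SerialCase:
    title: str
    layers: list = field(default_factory=list)

    def principles_through(self, layer_index):
        """All principles in play once layer `layer_index` has been presented."""
        in_play = []
        for layer in self.layers[: layer_index + 1]:
            in_play.extend(layer.new_principles)
        return in_play

# Hypothetical case, loosely in the spirit of a privacy-vs-press serial case.
case = SerialCase("privacy vs. press")
case.layers.append(Layer("A reporter obtains a politician's medical records.",
                         ["right to privacy", "freedom of the press"]))
case.layers.append(Layer("The records suggest, but do not prove, drug abuse.",
                         ["truth vs. supposition"]))
case.layers.append(Layer("A rival paper fabricates a corroborating source.",
                         ["outright fabrication", "personal gain"]))

print(len(case.principles_through(0)))   # 2 principles in play at the start
print(len(case.principles_through(2)))   # 5 by the final layer
```

The point of the representation is exactly the dynamic described above: as layers accumulate, the number of opposed principles grows, and agreement among well reasoned decisions can be expected to decline.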
All of which creates a very large problem when we attempt to measure the efficacy of our ethics instruction using ethical cases and decision making. While there may be obviously wrong solutions to these conflicts of principle and harm, there will often be more than one right answer, and the right answers in many cases may simply be the "wrongs" that are "more right". Multiple right answers, answers that are merely more right than other answers, and wrong answers that count as right only because they are more right than the alternatives are not the normal province of testing and educational measurement. Such would appear to be the reality of testing and measurement in ethics, however, and if we are to measure the efficacy of ethics instruction, it would appear we have no choice but to create a new kind of testing and measurement system that allows us to systematize a fractally complex decision and measurement space.
This paper does not claim to have final answers for how this can be done. Certainly there will be no way to do this using conventional paper and pencil instruments that does not introduce the vagaries of human grading of open ended questions. Standardized measurement of a fractally complex decision space is going to require an adaptive testing regimen. It seems likely that such a regimen can only be achieved on a standardized basis through the use of software that is capable of tracking a student's ability to parse a case into its essential elements and following the student's decision patterns as they move through a layered case. It is proposed that one might build a useful standardized instrument for the measurement of the efficacy of ethics instruction by creating a computer-based testing and collaboration environment which, if properly developed, would allow a new kind of adaptive open ended testing. Key components of such a system would include:
An exam like this could be used to look at the complexity of a student's thinking and ability to adapt such thinking to the increasingly complex exigencies of the layered case. Among the things that might be measured:
In practical use, exams might be administered either by scheduling the class into a computer lab for the testing period or by allowing students to take the exam on-line on a "take home" basis. Multiple tests might be available through the system at any given time such that students in the same class might consider different cases, and the open ended nature of the answers would make it difficult for anyone to put out a "cheat sheet" of right answers. Even if someone did, however, those same open ended answers would make it easier for the system to detect attempts at cheating by looking for identical answers.
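The duplicate-answer check suggested above can be sketched simply. The implementation below is an assumption about how such a check might work, not a specification: answers are reduced to word sets and compared pairwise, with near-identical pairs flagged for human review. The tokenization and the 0.9 threshold are illustrative choices.

```python
# A rough sketch of duplicate-answer detection for open ended exam answers:
# normalize each answer to a set of lowercased words, compute pairwise
# Jaccard overlap, and flag pairs above a similarity threshold for review.

from itertools import combinations

def tokens(answer):
    """Lowercased word set for an answer (a deliberately crude normalization)."""
    return set(answer.lower().split())

def similarity(a, b):
    """Jaccard overlap between two answers' word sets (1.0 = identical)."""
    sa, sb = tokens(a), tokens(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 1.0

def flag_suspect_pairs(answers, threshold=0.9):
    """Return student pairs whose answers overlap beyond the threshold."""
    return [(s1, s2)
            for (s1, a1), (s2, a2) in combinations(answers.items(), 2)
            if similarity(a1, a2) >= threshold]

answers = {
    "student1": "Speech should yield here because the harm to the child is direct.",
    "student2": "speech should yield here because the harm to the child is direct.",
    "student3": "Assembly wins: the protest can move to another public forum.",
}
print(flag_suspect_pairs(answers))   # [('student1', 'student2')]
```

A production system would want something more robust (stemming, n-grams, paraphrase detection), but the underlying point stands: the same open ended answers that defeat a cheat sheet also make copying conspicuous.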
There is a presumption, in this proposal, that such testing be done on-line, and for the kinds of large scale testing of the efficacy of teaching ethics across the curriculum that one might anticipate, this is a reasonable presumption. So long as one is willing to give up the posttest collaborative answer assessment, nothing prevents a system like this from presenting and testing cases from a CD-ROM. Indeed, small changes would make the system appropriate to programmed instruction.
Ethics and morality support, and exist in a continuum with, a variety of systems of social constraint. Because other systems of social constraint, including laws, rules, and precedents, formally document resolutions to most of the simple and/or obvious ethical cases, the teaching of ethical decision making frequently operates in the fractally complex boundaries between established ethical principles, undesirable harms, and human values. The primary subject of ethics instruction is not, then, right, wrong, and essential ethical principles, but the ways in which we make reasonable decisions when principles compete, harms are interdependent, and we are forced to choose between less than fully attractive decision alternatives. The best ethical cases expose these fractally complex boundaries in ethical decision making such that students are forced to think about how they make forced choices between ethical principles, undesirable harms, and their most cherished beliefs and values. These boundaries are not a matter of moral relativism. They are the realities we all face as we resolve collisions between ethical absolutes. If we want to test the efficacy of ethics instruction in the face of such decision spaces, we have no choice but to develop test instruments that allow us to explore that complexity. One such possibility is proposed here.
The author would like to make special note of the insights provided by Joan Dyer while this paper was being conceived and written. It would not be the same paper without her helpful suggestions. The author would also extend thanks to the "IBMPC" computer conference team, including Jerry Waldbaum, Dave Chess, John Alvord, and Gloria Whittico, who created, interpreted, and enforced the IBMPC RULES during the formative years of the new medium. The insight that the creation, interpretation, and enforcement of rules occurs at the fractal boundary of simple ethical principles is rooted in observation of their efforts.
Cohen, Harry. (2001). Freedom of Speech and Press: Exceptions to the First Amendment. CRS Report For Congress. Congressional Research Service, The Library of Congress. http://www.fas.org/irp/crs/95-815.pdf.
Foulger, Davis (2001a). Serial Case One. http://evolutionarymedia.com/cgi-bin/wiki.cgi?SerialCaseOne.
Foulger, Davis (2001b). Serial Case Two. http://evolutionarymedia.com/cgi-bin/wiki.cgi?OswegoSerialCaseTwo.
Leslie, Larry A. (2000). Mass Communication Ethics: Decision Making in Postmodern Culture. Boston: Houghton-Mifflin.
Rokeach, Milton. (1972). Beliefs, Attitudes, and Values. Jossey-Bass.
Sweet, D. (2000). Four Spheres. http://webs1152.im1.net/~dsweet/Spheres/.
Various. (2002) Ethical Cases. http://evolutionarymedia.com/cgi-bin/wiki.cgi?EthicalCases.
Zwick, Edward (Director). (1998). The Siege. Twentieth Century Fox.