Theory and Data: Tim's Typology of Research #7550

Davis Foulger
As submitted to CRTNET, 6/12/2003

At some risk of getting shot at here, I offer a detailed analysis of Tim's typology of the "applications of theory". I'm deeply troubled by this typology, which confuses theory, application, and data in ways that make it all but useless for any purpose but reifying current patterns of acceptance in a fairly large number of conference divisions and journals. Such reification may have value, especially for those who'd like to have a predictable path to tenure. It has been bad for our field, however, which has been increasingly balkanized by a divide between quantitative departments that are intolerant of the kinds of research and theory building at the "lower" levels of Tim's typology and "qualitative" departments that react by minimizing the quantitative elements of their programs. I think the future of our field would be better served by a general recognition that both approaches to research have a basis in theory and value to future theory. You can't test theory unless you have theory to test. Unfortunately, Tim's typology belittles a number of the critical processes through which theory is built.

Indeed, the typology can be criticized with good examples at every level. Working from the "lowest" level to the "highest":

7. "A-theoretical research". I'm not quite sure what makes a-theoretical research "dust-bowl empiricism", but let me propose a classic example of it: the research (many years of it) that led to Darwin's Theory of Evolution. Darwin's research, at its base, wasn't much more than inventorying the plant and animal life at a variety of locations in and around the Pacific Ocean, with a little human ethnography thrown in for good measure. His method during the journey of the Beagle was, and still is, a common form of research, and I doubt that researchers who are collecting data on the diversity of life in tropical rain forests or trying to figure out the linguistic patterns of whales and dolphins would have much sympathy for the label "dust-bowl empiricist" (raincoat or wetsuit empiricist, perhaps :-). Bottom line, however: such relatively theory-free data collection is the baseline from which science starts, and we should be encouraging more of it, especially in a relatively young field like communication that is far from documenting all of the areas in which we should have research questions. Thankfully, we have a large number of scholars in our field who do just that, with an increasing number operating out of qualitative research traditions. Relegating such research to a "dust-bowl" seems like a good excuse to ignore data that might falsify current misconceptions about what communication research can and should be.

6. "Pseudo-Theoretical Research". This is an asystematic category that includes at least two very different kinds of research within an ill-defined container. What exactly does it mean to look like theory, but not be theory? On its face, this sounds very subjective. In any case, let's consider each category separately:

(a) "Harking" (HARKing: Hypothesizing After the Results are Known), as described, is simply a bad research practice. It subverts the principles of hypothesis-based research for the purposes of publication. And there lies the problem. We persist in publishing only the significant results of successful studies, and occasionally even punish those whose results fail on both counts (denial of tenure, etc.). Our current system actively encourages such practices, and with the exception of theses, dissertations, and an occasional conference paper in which the hypotheses are fully established in advance, it's almost impossible to separate the white hats from the black hats. Harking is, in some sense, an a posteriori accusation that can be aimed at results that seem too good to be true.

(b) Employing "an overly liberal definition of theory", by contrast, seems to be a dismissive swipe at some of the fundamental elements of theory. Theory construction is a process of recognizing, documenting, and testing patterns. Labeling, defining, and contextualizing patterns are all important parts of that process. A "label" (presumably accompanied by a definition) is a formal statement that we are declaring the existence of a pattern. A "typology" is a pattern of labels that declares, at minimum, a relationship (at least of difference) between labels. A "model", in theory, makes a more complex statement of relationships. Harking back (pun intended) once again to Darwin, his theory of evolution is a set of labels for the species, species characteristics, and the relationships those characteristics have to each other and to the environment the species have adapted to. The result is the beginning of a systematic typology of those characteristics and a model of how those systematic variations might have occurred. In communication, the same can be said of the base theoretic constructs of information theory, cybernetics, general systems theory, Aristotelian Rhetoric, and the initial presentation of almost every theory of communication we currently teach or test, including cognitive dissonance, self-perception, and communication apprehension. None of this is an application of theory. This is theory building at its most fundamental, and to be dismissive of it is to be dismissive of future new and useful communication theory.

5. "Post hoc interpretation of results in terms of theory". I suppose this can be viewed as "honest" harking. The researcher didn't find what was expected, but there are patterns in the data, and it is possible to make consistent explanations within the context of existing theory (or mild extensions of existing theory). This kind of thoughtful reaction to results that don't match our expectations is exactly the kind of reaction that good theory is built on. Einstein's theory of relativity follows directly from attempts to make sense of results that didn't fit the predictions of existing theory. Others tried to explain those results in terms of existing theory. Einstein did too, but needed to add a new conceptual paradigm (in effect, a point of view) in order to make sense of the anomalies while preserving the predictive value of prior theories. With Einstein's "mild extensions" in place (e.g. relativity), the theories of Newton and others continued to make sense, but a growing inventory of experimental anomalies was brought under control. Some interesting new predictions were made possible as well, and some of those predictions have even proved to be testable over the course of the last 100 years. Einstein's theory continues to work in part because new anomalies are minimal and in part because a theory with which it was once contrasted (quantum theory) has turned out to be its complement (e.g. two competing theories appear to work better if we accept both). It is this kind of analysis that Kuhn speaks to with the concept of paradigm shift. The most important results for building theory are the observations that don't make sense. They expose the leaky seams in the theories we test. Unfortunately, our publication processes frequently discriminate against these kinds of results. We don't publish "failed experiments" that lack significant results or run counter to expectations. This is unfortunate, because you can't fix leaks if you don't know about them.
Indeed, this creates problematic biases for meta-analysis. When experiments work under some circumstances but not others, but the non-working circumstances are never published, meta-analysis may see generality in results that are highly contextual.
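The bias described above can be made concrete with a small simulation (a hypothetical sketch; the true effect size, standard error, and significance cutoff are all invented for illustration): if only "significant" results reach print, the published literature systematically overstates the true effect, and a meta-analysis of that literature inherits the distortion.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2        # the (small) real effect, in standardized units -- assumed
SE = 0.25                # standard error of each study's estimate -- assumed
CUTOFF = 1.96 * SE       # estimate needed to reach roughly p < .05, two-tailed

# Simulate 10,000 honest studies of the same phenomenon.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(10_000)]

# Publication filter: only "significant" positive results survive.
published = [e for e in estimates if e > CUTOFF]

print(f"true effect:               {TRUE_EFFECT:.2f}")
print(f"mean of all studies:       {statistics.mean(estimates):.2f}")
print(f"mean of published studies: {statistics.mean(published):.2f}")
```

Because the published subset is a truncated sample, its mean necessarily lands well above the true effect, which is exactly the "generality in results that are highly contextual" problem in quantitative form.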

4. "Pre- and Quasi-"theoretical research. Two categories are documented here: "work designed to lead to theory" and "work that is theoretical but not a formal theory". I'm not quite sure, from the description, how this category differs from "pseudo-theoretical research" and "post hoc interpretation" except for its apparent detachment from failed experimental results and the invocation of the idea of not-quite-formal theory. What constitutes the "formal theory" that this is something less than? Does a theory need to be axiomatic to be "formal"? Does it need to have testable hypotheses and the possibility of falsifiability? Does it merely have to entail a sufficiently coherent presentation of pattern such that it can be tested in practice? Or is it enough that it usefully "probes" (McLuhan's term) our understanding of how the world works? Is Kuhn's theory of paradigms a "quasi-theory"? I can certainly make a case that it fits, based on the description of this category. Kuhn's theory (which describes communication and psycho-social processes in the development of theory) has repeatedly proved to accurately describe the real world, but I would be hard pressed to come up with a way to test it experimentally.

3. "Demonstrations of a theory's predictions, applications, etc." To my mind this is perhaps the most important element of theory building, and certainly the one for which researchers and theoreticians are most likely to become well known. Newton's theories are powerfully demonstrated with the example of cannonballs fired from a mountain. Einstein's theories are powerfully demonstrated with thought experiments that demonstrate the relativity effects of a point of view. Festinger's theories (and he is nothing if not fecund in his proposal of theories) make sense because he demonstrates, with effective logic and rhetoric, the paths by which cognitive dissonance, distraction, and other mechanisms can affect the way we feel about things. A theorist must, if they are to be effective in getting other people interested in the patterns they find, be a good storyteller. Unfortunately, the same can be said of con artists. I'm all for a good story, but I want it to be well grounded in descriptive labels, sensible definitions, useful typologies, reasonably abstracted models, and real-world data (e.g. with observations of people who are dealing with work and the problems of daily life taking precedence over experiments on 19-year-olds who are still figuring out what real means). It is probably good to be suspicious of storytellers. But it is important to encourage their stories so long as they lead to useful theory.

2. The next level, theory testing (e.g. "testing a theory"), moves us beyond theory and into generating data that is relevant to a theory. Theory testing is important, but we overstate its importance when we claim that such testing is either theory building or an application of theory. Theory testing can enable theory building, especially when our hypotheses fail and we are forced to come up with an explanation of what happened. At this point we advance to what I would call a higher level of theory building: finding patterns that extend the existing theory (what this layered typology calls "pseudo-theoretical research"), finding other theoretical perspectives that make sense of the results ("Potshot interpretation of results in terms of theory"), or identifying a new way of thinking about the problem (e.g. "Pre- and Quasi-"theoretical research). It is only when we think about and try to induce principles based on data (apparently at a lower level of this hierarchy) that we do theory building. When we use experiments to refine theory deductively, we are doing tort theory: judicial decision making (usually at the .05 level) about the fine points of theories that are fractally complex at their boundaries and therefore infinitely refinable.

1. I love critical tests. They have the immediate entertainment value of "Ali versus Foreman" style heavyweight prize fights. The alleged competition between theories makes critical tests a windfall for the researcher, for whom a successful critical test has better odds than other papers of achieving top paper status at conferences and acceptance to journals. I've done this kind of experimental research myself and reaped the benefits, but I question whether it really deserves the prize of "highest level application of theory". Many "critical tests" are more (to borrow a carnival phrase) "bark" (e.g. hype) than reality. I'm not worried so much about straw man theories here (obviously bad practice) as I am about a more fundamental problem with "critical tests". Critical tests take an "or" approach to theory (e.g. only one theory can be right), but the reality of theory, especially in social science, is more often "and" (e.g. allegedly "contending" theories are actually complementary). The classic example of this in science is the complementary relationship of relativity and quantum theory. Early competition between these approaches spawned the famous Einstein quote "God does not play dice with the universe", but light exists, as best we can understand it, both as continuous waves and as packets (quanta). In my "critical test" 25 years ago, there were main effects supporting both theories, and secondary indicators that suggested that one was a mechanism of the other. It seems to me that I've seen very few "critical tests" that wouldn't have been better cast as an "and": as a refinement of the fractally complex (and therefore infinitely refinable) boundary between two complementary theories. Of course, casting critical tests this way simply turns them into a playground for doing the same sorts of "tort theory" that are associated with simply testing a theory; with the useful difference of exploring the more complex boundary created by the competing principles.
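The "and" rather than "or" point can be illustrated with a toy simulation (a hypothetical sketch; the two "mechanisms", their weights, and the sample size are all invented): when an outcome is actually driven by two mechanisms at once, a single study will show significant main effects for both "competing" theories, and an either/or critical test miscasts the question.

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical data: an outcome driven by BOTH mechanism A and mechanism B.
n = 200
A = [random.gauss(0, 1) for _ in range(n)]
B = [random.gauss(0, 1) for _ in range(n)]
y = [0.5 * a + 0.5 * b + random.gauss(0, 1) for a, b in zip(A, B)]

def corr_t(x, z):
    """Pearson correlation of x and z, and its t statistic (df = n - 2)."""
    m = len(x)
    mx, mz = statistics.mean(x), statistics.mean(z)
    sx, sz = statistics.stdev(x), statistics.stdev(z)
    r = sum((a - mx) * (b - mz) for a, b in zip(x, z)) / ((m - 1) * sx * sz)
    t = r * math.sqrt((m - 2) / (1 - r * r))
    return r, t

# Each "theory" gets its own significance test against the same outcome.
for name, mech in [("Theory A", A), ("Theory B", B)]:
    r, t = corr_t(mech, y)
    verdict = "significant" if abs(t) > 1.97 else "n.s."
    print(f"{name}: r = {r:.2f}, t = {t:.1f}  ({verdict})")
```

Both tests come back significant, because both mechanisms are real; the interesting research question is how they interact at the boundary, not which one "wins".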

Criticism is, of course, easy, and criticism for its own sake really has not been my intent here. What I hope will come out of this discussion is a better appreciation of the interaction between data, theory, testing, and application; and of the importance of having a place for all of them in our conventions and journals. Theory needs criticism as well as testing, and the best way to develop theory is to have places where we put our observed patterns, labels, definitions, typologies, models, and prospective axioms out for inspection and critique. When we turn our outlets for developing, sharing, reviewing, and critiquing communication theory into collection points that only accept successful experiments that test and refine existing theory, we leave only books and web sites as the venues of new theory development, and while books provide much-needed space to describe theory, making books the only venue for such development reduces the level of interaction that helps authors to refine their work as it develops.