Lost in “Conceptual Space”:

Metaphors of Conceptual Integration 

L. David Ritchie

Portland State University

Ritchie, D. (2004) Lost in space: metaphors in conceptual integration theory. Metaphor and Symbol, 19, 31-50.

Abstract

            Conceptual Integration Theory (Fauconnier & Turner, 1998; 2002) is re-examined in the light of recent criticisms (e.g., Gibbs, 2000; 2001).  It is argued that the assumption of four independent “cognitive spaces” enters the model primarily as an entailment of the “space,” “blending,” and “construction” metaphors, leads to unnecessary ambiguity, and works against stating the theory in a form that supports the derivation of testable hypotheses.  Several examples from Fauconnier and Turner (2002) are analyzed to show that they can be interpreted more simply, without need of four separate “spaces.”  Suggestions are made for reformulating Conceptual Integration Theory without the problematic metaphors.


Lost in “Conceptual Space”:

Metaphors of Conceptual Integration

            Conceptual Integration Theory (CIT), also known as Conceptual Blending Theory (Fauconnier & Turner, 1998; 2002), has been recognized as a powerful model of language processing and a potential solution to a number of problems in cognitive theory (Grady, 2000; Gibbs, 2000; Harder, 2003).  It promises an integrated model of cognitive activity that will combine explanations of linguistic creativity with explanations of other language behavior, as well as of various products of human imagination (Gibbs, 2000).  Yet Conceptual Blending Theory has also been criticized because it has not been made clear how its claims could be formulated for testing and potential falsification (Gibbs, 2000; 2001) and because it seems to introduce needless complexity into relatively simple linguistic processes (Harder, 2003). 

            In this essay I examine crucial assumptions of Conceptual Blending Theory, including the assumption that integration always involves at least four distinct conceptual “spaces,” including a “generic space” containing elements common to the two input spaces.  I examine the entailments of the central metaphors, “mental spaces,” “conceptual packets,” and “conceptual blending,” and show that these metaphors obscure processes specified by the model.    I review Fauconnier and Turner’s (2002) responses to some of the criticisms of Conceptual Blending Theory and show that the principal criticisms have not been adequately addressed.  I show how typical examples used to explain and justify Conceptual Blending Theory can be readily explained in less complicated and more straightforward terms.  I conclude that the metaphors used to explain conceptual integration require close scrutiny to separate metaphorical entailments from the actual requirements of the theory.  

Conceptual Blending:  The Model

            Fauconnier and Turner (2002) explain Conceptual Blending Theory in terms of “mental spaces,… small conceptual packets” connected to “long-term schematic knowledge called ‘frames,’” as well as to “long-term specific knowledge” (p. 40).  The mental spaces are illustrated by circles, with relevant contents either displayed iconically or listed in abbreviated form (see Figure 1).  The model posits a minimum of four mental spaces:  Two input spaces, a generic space that contains what the two inputs have in common, and a blended space that contains some elements from each input space.  The blended space may also contain additional elements (“emergent structure”) that can include new elements retrieved from long-term memory or resulting from comparison of elements drawn from the separate inputs, or from elaboration on the elements in the blended space (“running the blend”). 

Figure 1

[Figure 1, not reproduced here, diagrams the conceptual integration network for the monk puzzle: two input spaces, a generic space, and a blended space represented as circles, with their relevant contents listed.]

This diagram was created using Inspiration® 7.5 by Inspiration Software®, Inc.


            Example:  A monk climbing a mountain.  Fauconnier and Turner (2002) frequently return to a logical puzzle drawn from Koestler, in which a monk climbs a mountain on one day, beginning at dawn and arriving at sunset, then returns down the mountain on a subsequent day, again beginning at dawn and arriving at sunset.  The task is to show that there is some place along the path that the monk occupies at the same hour of the day on the different journeys.  The solution preferred by Fauconnier and Turner is to imagine the monk walking both up and down on the same day, a scenario constructed by taking elements of two separate input spaces, one for the monk walking up the mountain on one day and one for the monk walking down the mountain on a different day, and “blending” them into a single image (blended space).  A “generic space,” containing everything the two “input spaces” have in common (the monk, the mountain path, and a day beginning at dawn and ending at sunset), is required to support the blend (Figure 1).  In the blend, the mountain slope and the two separate days are fused into a single mountain slope and a single day, but the two monk images cannot be fused, because they move in opposite directions, so they map into the fourth, “blended space,” as two separate individuals.  When we “run the blend,” by imagining the two individual monks progressing along the path, we see that they must inevitably meet.
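To make the structure of the model concrete, the following sketch (my own illustrative simplification, not a formalism proposed by Fauconnier and Turner) treats each of the four spaces of the monk network as a simple data structure; the element labels abbreviate those shown in Figure 1.

```python
# A minimal sketch of a four-space integration network, under the assumption
# that "spaces" can be treated as sets of elements; the class and function
# names are my own illustrative choices, not Fauconnier and Turner's.

class Space:
    def __init__(self, name, elements):
        self.name = name
        self.elements = set(elements)

def generic_space(input1, input2):
    # The generic space contains whatever the two inputs have in common.
    return Space("generic", input1.elements & input2.elements)

def blended_space(input1, input2, projected, emergent=()):
    # Selective projection: only some elements of each input are copied in,
    # and "emergent structure" may be added when the blend is run.
    selected = (input1.elements | input2.elements) & set(projected)
    return Space("blend", selected | set(emergent))

# The monk puzzle as a four-space network (elements abbreviated):
ascent = Space("input 1", {"monk", "path", "dawn-to-sunset day", "moving up"})
descent = Space("input 2", {"monk", "path", "dawn-to-sunset day", "moving down"})
generic = generic_space(ascent, descent)
blend = blended_space(ascent, descent,
                      projected={"monk", "path", "dawn-to-sunset day",
                                 "moving up", "moving down"},
                      emergent={"the two travelers meet"})
print(generic.elements)
print(blend.elements)
```

The question raised in this essay is precisely whether the separate generic and blend structures in such a sketch earn their keep, or whether links between the two input representations, together with ordinary memory retrieval, would suffice.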

            Example:  Margaret Thatcher for President.  Fauconnier and Turner (2002) cite a commonplace observation among conservative commentators in the early 1990s, that the United States needs a Margaret Thatcher – but that she would never get elected here because of opposition from labor unions.  Fauconnier and Turner claim that, to comprehend this argument, we must imagine Thatcher running for U.S. president and develop this scenario in sufficient detail to perceive the obstacles to her election.  “The point is made by setting up a situation (the “blend”) that has some characteristics of Great Britain, some characteristics of the United States, and some properties of its own” (p. 19).  In this example, the generic space contains elements common to Margaret Thatcher and the U. S. presidency such as “world leader.”  The input space corresponding to Margaret Thatcher contains information about Thatcher, including the qualities that endear her to conservative commentators, along with her famous struggles with British labor unions.  The input space corresponding to the U. S. presidency contains information about that office, including the influence of labor unions in electoral politics.  The relevant information is mapped from the two input spaces into a fourth, and entirely new, blended space.  When we “run the blend” we first perceive that Margaret Thatcher would make a great president because of her fabled qualities of toughness etc.; then we perceive that she would fail as a presidential candidate because the unions would block her candidacy. 

            Approaches to Conceptual Integration.  It seems obvious that examples of reasoning such as the monk puzzle and the “Margaret Thatcher for President” quip require that disparate concepts (including images, schemata, and narratives) be combined in some form of conceptual integration.  However, conceptual integration could be accomplished in many ways.  Consider first the degree of connectedness:  Integration might mean that separate concepts become connected or linked in memory, just as one might connect a stereo set to a computer.  Alternatively, features or elements originally associated with one concept might be added in memory to a different concept, just as stereo components such as the FM receiver and CD player might be wired to a computer.  Finally, an entirely new concept might be constructed in memory, combining elements similar to some elements of each input concept, just as one might purchase several entirely new stereo and computer components and connect them together in a separate room, apart from the rooms containing the original stereo and computer.  Neurologically, these connections might be accomplished in various ways, such as altering the pattern of synaptic connections between neuron groups (Calvin, 1996; Deacon, 1997; Phillips, 1997), changing the level of activation in connected neuron groups, or synchronizing the spike trains of some neuron groups with those of others (Fotheringhame & Young, 1997; Phillips, 1997).  Construction of an output space, distinct from the input spaces, would involve constructing parallel neural circuits in several parts of the brain, since language processing is usually distributed across areas associated with motor activity and various sensory modalities as well as with semantic memory (Deacon, 1997).

Conceptual integration might take the form of links among discrete conceptual elements, just as one uses wires or cables to connect the components of a stereo set.  Alternatively, as the “blending” metaphor suggests, conceptual elements drawn from each input might be merged, in the fashion of the “composite drawings” of a suspect created by police artists, or in the fashion of certain products in which circuits from a GPS (Global Positioning System) receiver or digital camera and circuits from a two-way radio or cellular telephone are imprinted on a single chip, fused into a single new product that duplicates some but not all features of the originals.  Although the actual mechanisms are unclear, this type of blending is consistent with the observation that one neuron might be involved in representing several features (Fotheringhame & Young, 1997) and with Calvin’s (1996) speculations about Darwinian processes in cognition. 

Calvin (1996) points out that neurons are bundled in minicolumns of about 100, and suggests that the spatiotemporal firing patterns of a group of neurons might represent a concept, word, or metaphor.  Based on a range of research, mostly in the visual cortex, Calvin argues that lateral inhibition creates a ring of depressed activity around an activated neuron, and that once two cells or sets of cells are firing repeatedly, in a particular firing pattern, there should be a tendency to recruit another cell about 0.5 mm away, leading to a roughly hexagonal pattern of spreading activation.  These patterns may die away as excitation weakens, may be suppressed by inhibiting signals, or may be reinforced by consonant signals, according to Darwinian principles of selection.  If these Darwinian principles are indeed, as Calvin believes, responsible for the refinement and clarification of perceptions and concepts, then it is also possible that two patterns of firing, representing distinct concepts, could form an overlay or composite (Calvin, 1996, pp. 115-120).  This process could provide a neural mechanism for Fauconnier and Turner’s (2002) model of conceptual blending.
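A toy simulation can illustrate the Darwinian flavor of Calvin's proposal, although it implements none of his anatomical specifics (minicolumns, the 0.5 mm recruitment distance, hexagonal geometry); it simply lets two seeded firing patterns compete to recruit cells on a small grid.

```python
# Toy sketch of Darwinian competition between two firing patterns recruiting
# cells across a grid; a deliberately crude stand-in for Calvin's mechanism.
import random

SIZE, STEPS = 12, 2000
grid = [[None] * SIZE for _ in range(SIZE)]
grid[3][3], grid[8][8] = "A", "B"          # seed two competing patterns

def neighbors(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
            yield r + dr, c + dc

for _ in range(STEPS):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    nearby = [grid[nr][nc] for nr, nc in neighbors(r, c) if grid[nr][nc]]
    if nearby:
        # A cell is recruited into (or switched to) whichever neighboring
        # pattern happens to dominate locally: selection by reinforcement.
        grid[r][c] = random.choice(nearby)

coverage = {p: sum(row.count(p) for row in grid) for p in ("A", "B")}
print(coverage)   # one pattern typically spreads further by drift; the mixed
                  # border regions are loosely analogous to a composite overlay
```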

Criticisms of Conceptual Blending Theory and Responses to Criticisms

            In response to the objection that Conceptual Blending Theory has yet to be specified with sufficient precision that it can be empirically tested, Fauconnier and Turner (1998; 2002) and their various colleagues (see for example Coulson & Matlock, 2001) have produced empirical evidence that is consistent with Conceptual Blending Theory.  However, the evidence to date still fails either to justify the complexities of Conceptual Blending Theory or to differentiate between Conceptual Blending and other theories with which it is also consistent.  The more detailed exposition of Conceptual Blending Theory in Fauconnier and Turner (2002) goes a long way toward specifying a unified theory, but it does not satisfactorily address the issues of falsification and of eliminating rival hypotheses.  To the contrary, Fauconnier and Turner (2002) dismiss the call for falsifiability by drawing a comparison to “sciences like evolutionary biology” that “are not about making falsifiable predictions regarding future events.”  They then shift the grounds of the argument, claiming that “we have already falsified existing accounts of counterfactuals by showing the centrality of counterfactuals like the Iron Lady, which such theories are on principle unable to handle” (pp. 54-56). 

While it is true that evolutionary biology and cosmology do not make predictions that are directly falsifiable, both depend on theories that do make falsifiable predictions.  Moreover, Fauconnier and Turner (2002) fail to consider other possible explanations for the Iron Lady (Margaret Thatcher for President) example in any detail and, as I will show, the Iron Lady example is readily explained by a simpler and more straightforward integration process, with none of the metaphorical baggage associated with “mental spaces.”  A test of whether Fauconnier and Turner’s account makes predictions about actual human behavior that are uniquely supported by empirical evidence is still lacking, and reliance on the “space” and “blending” metaphors seems to work against the kind of precise specification that will support meaningful empirical tests.

            Computability.  A potential criticism of any model of language is whether it specifies operations that can actually be carried out in such a way that they produce results consistent with the model (“computability”).  Veale and O’Donoghue (2000) demonstrate that Fauconnier and Turner’s (1998) model is “computable,” in the sense that a computer program based on it produces results consistent with the model.  However, the primary sense in which a model of human language use must be computable is that it must be computable by a human brain.  As Veale and O’Donoghue’s discussion illustrates, computational models of language are themselves often metaphorical (not merely metonymical), inasmuch as they differ in fundamental ways from how human brains process information (Winograd & Flores, 1986). 

            An example of how far afield computer modeling can take the theorist appears in the discussion of semantic networks in Veale and O’Donoghue (2000).  In their Figure 1 (page 257) they diagram “semantic relations” between Netscape and Microsoft, and in their Figure 2 (page 258) they diagram “semantic relations” between Coke and Pepsi.  (CocaCola creates CokeCans create CokeMarket affects PepsiMarket affects PepsiCo.)  It is not clear where the authors obtained the data represented in these diagrams, but it seems unlikely that either diagram represents the network of associations by which these concepts are linked in a typical customer’s mind or, for that matter, even in a marketing manager’s mind.  On the other hand, a more realistic set of associations would be very difficult to represent in a diagram that would fit on a single page of an academic journal, or to program into a computer database. 
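The flavor of such a hand-built network, and the point about its sparseness, can be seen in a few lines of code; the relation labels follow Veale and O'Donoghue's figure, while the encoding and the list of "realistic" associations are my own illustrations.

```python
# The published diagram reduces to a handful of labeled edges:
semantic_net = {
    ("CocaCola", "CokeCans"): "creates",
    ("CokeCans", "CokeMarket"): "create",
    ("CokeMarket", "PepsiMarket"): "affects",
    ("PepsiMarket", "PepsiCo"): "affects",
}
for (source, target), relation in semantic_net.items():
    print(f"{source} --{relation}--> {target}")

# A psychologically realistic network for "Coke" would fan out into hundreds
# of associations (the entries below are invented examples), far too many for
# a one-page figure or a toy database:
realistic_fan_out = {"Coke": ["sweet", "red can", "caffeine", "Santa ads",
                              "polar bears", "childhood picnics", "Pepsi"]}
```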

            As Veale and O’Donoghue (2000) point out, any computational algorithm that is to run on a computer requires a clear and precise “stopping rule” (see also Sperber & Wilson, 1986), and that is part of what CIT provides.  However, it is not evident that the human brain works quite that way:  It is difficult to predict fully the extent to which an individual will elaborate on a figure of speech or on a situation, although factors such as haste, involvement, and “need for cognition” all contribute (Petty & Cacioppo, 1981).  Often a hearer will not even bother to tease out the meaning of a metaphor, especially if it is somewhat obscure; at other times, the same person may mull over the possible entailments and implications of an interesting metaphor, joke, or story for several hours (a cognitive linguist may think about a turn of phrase for many years).  It is quite possible that some students in the philosophy seminar in Fauconnier and Turner’s (1998) example will process the professor’s “debate with Kant” purely in terms of the evidence and counter-evidence, entirely ignoring the supposed debate, while others will be taken with the amusing idea of Immanuel Kant sitting there next to Professor Jones, and elaborate on the story until they quite lose the thread of the lecture.  The human brain probably has several ways to stop itself from falling into the sort of endless loop that computer programmers worry about, including boredom and distraction by a more interesting idea.  None of this is to deny the usefulness and interest of efforts like those reported by Veale and O’Donoghue (2000), but it does raise the question:  How much does this sort of modeling contribute to our understanding of human language processing?

Does Conceptual Blending apply to Simple Cognitive Processes?  According to Fauconnier (1994), the cognitive space model was originally developed in reaction to a set of formalist theories of meaning, theories that “assume that natural language semantics can be adequately studied with the tools of formal logic” (Lakoff & Sweetser, 1994, p. ix), and as such it has provided a useful bridge to a more cognitive model of language.  Harder (2003) applauds the conceptual space approach because “previously disparate properties can be brought to co-exist in the same mental space with properties that were found in neither of the original spaces” (p. 91), but objects to attempts to extend it from complex to simple cognitive processes. 

Harder (2003) gives the example of fake gun, claiming that, “unless it is a gun in one space, it cannot be called a fake in another” (p. 91).  But this seems disingenuous – an extended index finger is not a gun in any space, but is frequently used as an iconic representation of a gun, and occasionally, hidden in a coat pocket, serves (at least in the movies) to convince a victim that a real gun is present.  Nor is it apparent why “fake” would require a treatment different from “black,” which, according to Harder, does not require blending from separate conceptual spaces. Both are qualifiers:  Black adds information about the color of the metal, fake adds information about constraints on the object’s affordances.  In addition to descriptors of color, shape, and material (black gun, snub-nose gun, pearl-handled gun, plastic gun), there is a continuum of descriptors that specify operational characteristics of the object (single-shot gun, dart gun, starter’s gun, flare gun, cap gun, water gun, toy gun, and fake gun).  Some descriptors (handgun, shotgun, machine gun) specify both physical appearance and operational characteristics.

The continuum between real and fake is similar to the continuum of metaphoricity (Ritchie, 2003) and raises similar issues.  In the case of a toy gun, the expectation of being treated “as if” it were a real gun is created by the contained reality (or layer) of play; in the case of a fake gun used in a robbery, it is created by the constructed reality (or layer) of deception (cf. Clark, 1996).  A BB gun can be used to kill small animals, and it can occasionally be deadly to humans – is it a real gun or a toy gun?  If black gun does not require blending from two separate spaces, then it is difficult to see exactly where along this continuum blending would begin to be required.  By contrast, if conceptual integration is accomplished through a simpler process of altering the strength of neural connections, all that need happen as we progress along the real-fake or literal-metaphorical continuum is that the neural connections become denser, more extended, and more complex.

            Harder’s (2003) objection to conceptual blending theory is primarily one of scope:  he protests that blending cannot be the explanation for simpler processes of grammar.  To support his objection, he cites evidence from experiments showing that children begin to distinguish between the actual color of a green cat and the apparent color of a green cat, covered by a red filter so as to look black, only at about age four – well after they have learned basic syntax.  From this, he infers that children learn syntax before they learn to distinguish the separate conceptual spaces occupied by the actual green image of the cat and the perceptually altered image perceived through the filter.  But very young children seem quite capable of distinguishing between a doll (toy baby) and a real baby, a stuffed animal (toy puppy) and a real animal, etc., even as they construct elaborate play scenarios and carry on apparent conversations with these toys.  The problem in the “green cat” experiments may be with the unfamiliarity and artificiality of the experimental task.  This is not to deny that cognitive tasks differ in the complexity of conceptual integration they require:  To the contrary, I would suggest that they range continuously from connections so simple as to escape notice (integrating, or “binding” a color such as black, along with other sensory perceptions such as weight and hardness, with an object such as a gun) through to connections so complex that we marvel at them (e.g., the monk puzzle cited by Fauconnier & Turner, [2002], Alice in Wonderland [Carroll, 1923], and imaginary numbers). 

My own questions have to do with whether the full conceptual apparatus, including at least four distinct spaces for each instance of conceptual integration, and many more for complex operations that require long sequences of integrations, is actually necessary.  I will suggest that many of these elements derive, not from the requirements of neurologically embodied cognitive processes, but from the entailments of the focal metaphors, “spaces” and “blending.” 

Metametaphors of Cognitive Processes

            Fauconnier and Turner (1998; 2002) use a variety of metaphors to describe conceptual integration, and some of these have conflicting entailments.  “Mental spaces” might be interpreted as a metaphorical connection to a model such as multi-dimensional scaling, or a vector model such as that of Kintsch (1998; Kintsch & Bowles, 2002), in which concepts, perceptions, motor activities, etc. are understood as “vectors,” meanings are understood as “dimensions” in “semantic space,” and similarity, connectedness, and co-activation between two concepts are understood in terms of “proximity in cognitive space.”  Kintsch (1998) links his vector model of language processing to the strength of connections among the neural representations of cognitive elements; the dimensions of a vector associated with a word, phrase, or concept correspond in some way to the strength of the neural connections to hundreds of other words, phrases, and concepts, and, in principle, to embodied experience (perception, motor action, etc.).  However, because Kintsch has been unable to operationalize linkages to non-language experiences, his model remains ungrounded (Gibbs, 1999).  
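Read this way, the “proximity in semantic space” metaphor has a straightforward computational rendering: concepts are vectors, and relatedness is the cosine of the angle between them.  The sketch below uses invented toy vectors, not vectors derived from a corpus as in Latent Semantic Analysis (Landauer & Dumais, 1997), on which Kintsch's model draws.

```python
# Minimal sketch of "proximity in semantic space": concepts as vectors,
# relatedness as cosine similarity.  The vectors are toy values.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

# Each dimension stands in, very loosely, for strength of association with
# some cluster of other words and experiences.
monk = [0.9, 0.1, 0.3, 0.0]
mountain = [0.7, 0.2, 0.5, 0.1]
election = [0.0, 0.9, 0.1, 0.8]

print(cosine(monk, mountain))   # comparatively "close" in the toy space
print(cosine(monk, election))   # comparatively "distant"
```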

Fauconnier and Turner also link their model explicitly to a network or connectionist model:  “In the neural interpretation of these cognitive processes, mental spaces are sets of activated neuronal assemblies, and the lines between elements correspond to coactivation-bindings of a certain kind” (2002, p. 40).  However, they do not explain how the elements of their model might be realized in connections among neuronal assemblies, and much of their discussion is inconsistent with a network model. 

In the same section, Fauconnier and Turner also link the idea of mental spaces to “long-term schematic knowledge called ‘frames’” (2002, p. 40) and, by implication, to the kind of organized networks of general knowledge about the world often referred to as “schemas” and “scripts.”  However, “conceptual packets” and “mental assemblies,” along with Fauconnier and Turner’s use of circular diagrams to illustrate mental spaces and their description of the process through which relevant “contents” of two “input spaces” are copied into a separate “blended mental space,” suggest more of a “conduit” or “container” metaphor (Reddy, 1993), in which meanings are conceptualized as objects that can be “put into” words and phrases and “conveyed” to a reader or listener who “gets the meaning out” of them. 

Although Fauconnier and Turner (2002) state that conceptual integration is related to a network model of language processing, and attempt to link their model to neural processes, metaphors such as “space,” “packets,” and “blending” work against a network or connectionist understanding of language.  The creation of an entirely new “blended space,” which “contains” the relevant concepts duplicated from the “input spaces” and “mapped onto” the blended space, is a direct entailment of the “space” metaphor, as is the positing of a “generic space” as the input-connecting principle, rather than a process of selection and suppression in working memory (Gernsbacher et al., 2001; Kintsch, 1998).  The use of circles and boxes in illustrations of the model (see Figure 1) reinforces the idea of “boundaries” separating the various conceptual elements and the need for replication of elements within a separate space rather than connection of existing elements in a new composite pattern, as would be entailed by a “network” metaphor.   

All of this implies that, when concepts are integrated during language comprehension, the patterns associated with each concept (however they are represented in the brain) must be duplicated in a new pattern with its own independent representation.  But consider (1):

(1) He used what he thought was a fake gun in the holdup but it turned out that it really was a gun and the clerk behind the counter was an under-cover police officer so he was charged with armed robbery and assaulting a police officer.

Interpreting a complex narrative such as (1) would require creation of a long sequence of entirely separate spaces (representations), each fully duplicating the relevant features of the preceding spaces.  Unless these independent mental spaces are dissolved as new ones are generated, the load on cognitive capacity must expand quite rapidly.

By contrast, a schema-driven connectionist model might posit that a culturally-learned robbery schema (embodied as a particular pattern of neural connections) is activated, then altered as the narrative progresses by adding or changing connections with other schemas (e.g., for fake gun and undercover policeman) and finally with a criminal trial schema.  These changes in the connections are temporary during any particular narrative, but can be strengthened with repeated use until they become part of the basic schema and are, in effect, lexicalized – as undercover policeman probably is for many crime fiction buffs.  The activation and deactivation of synaptic connections required by a connectionist model seem in principle simpler than the re-allocation of entire groups of neurons entailed by the “cognitive space” metaphor. 
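A small sketch may help to show how little machinery this alternative requires; the schema contents, link strengths, and learning rate below are invented for illustration, not drawn from any of the cited models.

```python
# Sketch of the schema-driven alternative: a schema is a set of weighted
# links; comprehension temporarily boosts some links in working memory, and
# repeated use consolidates them into long-term weights ("lexicalization").

long_term = {                              # robbery schema, baseline strengths
    ("robbery", "gun"): 0.8,
    ("robbery", "clerk"): 0.7,
    ("robbery", "fake gun"): 0.2,
    ("robbery", "undercover officer"): 0.1,
    ("robbery", "criminal trial"): 0.3,
}
working = dict(long_term)                  # the copy active during the narrative

def mention(concept, boost=0.5):
    # Hearing a concept in the narrative temporarily strengthens its link.
    key = ("robbery", concept)
    working[key] = min(1.0, working.get(key, 0.0) + boost)

for concept in ("fake gun", "undercover officer", "criminal trial"):
    mention(concept)

def consolidate(rate=0.1):
    # With repeated use, the temporary changes shift the long-term weights.
    for key, strength in working.items():
        base = long_term.get(key, 0.0)
        long_term[key] = base + rate * (strength - base)

consolidate()
print(working[("robbery", "undercover officer")])    # 0.6 while the story is active
print(long_term[("robbery", "undercover officer")])  # nudged from 0.1 toward 0.6
```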

            The “space” metaphor is particularly troublesome in an example such as the monk puzzle, because the monk in the story actually moves through a single physical space, but the narrative posits two distinct locations in space-time, thus creating an apparent, but unnecessary, paradox.  The extension of physical space (the mountain with its path) to “conceptual generic space” (the elements common to the two halves of the story) seems natural and unproblematic.  Once this connection is made, it seems to follow that the two journeys in physical space can be compared only by matching the shared elements in the “generic space” (the cognitive representation of the mountain and the path leading up the mountain), and merging the two “conceptual spaces,” including relevant elements unique to each of the “input conceptual spaces” (the representation of the monk traveling up the physical space of the mountain and the representation of the monk traveling down the physical space of the mountain).  But if the separate conceptual spaces are merged into a new space, the previous spaces are lost, so the “space” metaphor requires that an entirely new blended space be created, duplicating information from the input spaces. 

All of this is a consequence of conflating physical space (the mountain) with conceptual “space,” and does not necessarily have anything to do with actual neurological processes or events in the puzzle-solver’s brain.  Only by stripping the theoretical account of its metaphorical language can we see that processing the puzzle need not involve entirely separate sets of “activated neuronal assemblies” (Fauconnier & Turner, 2002, p. 40) but may require no more than creating new linkages or changing the activation levels of existing linkages within and between the pre-existing activated neuronal assemblies for the input concepts.  

The “space” metaphor also contributes to confusion about levels of analysis.  Fauconnier and Turner define “mental space” in terms of “activated neuronal assemblies.”  But they also define it as a social and cultural phenomenon, for example, “In cultural practices, the culture may already have run a blend to a great level of specificity for specific inputs, so that the entire integration network is available, with all of its projections and elaborations” (2002, p. 72).  Even aside from the question of whether a “blend” is a cognitive or a cultural phenomenon, it is not at all clear what it would mean for a culture to “run a blend” or to make available an “entire integration network.”  The phrase suggests a computer metaphor, e.g., “blended space is computer software,” but it is not clear how the blended space/software would actually be “run” in an individual mind, much less in a culture. 

 It appears here that Fauconnier and Turner (2002) are attempting to achieve much the same objective as “meme” theorists, namely, a unitary account of the cultural transmission of ideas (Dawkins, 1993; Blackmore, 1999; for critical appraisals see Aunger, 2000; Kuper, 2000; Sperber, 2000).  Intuitively we know that intricate conceptual combinations are learned from others:  Difficult as students may find it to learn calculus, for example, it was much more difficult to invent it.  But simply positing a concept such as a “blended space” that crosses the individual / social levels of analysis scarcely provides an explanation of cultural transmission, any more than it does to posit a gene-like unit of meaning and call it a “meme.”

Alternative Interpretations. 

Many of the examples provided by Fauconnier and Turner (2002) rely on a needlessly awkward or idiosyncratic interpretation of figurative language.  It is useful to consider alternative viable interpretations of these examples (Ritchie, 2003), both to see whether the explanatory apparatus of conceptual blending is still required and to understand how the underlying logic of the model works in practice. 

The Monk Climbing the Mountain Revisited.  For example, a more straightforward solution to the riddle of the monk is to draw a graph with the monk’s elevation (or distance from home) on one axis and the time of day on the other, and plot the two journeys on the graph.  The completed graph will look like a rectangle, with the journeys represented by diagonals (not necessarily straight).  No matter how the two diagonals are drawn, they must intersect.  There is no need for a fanciful narrative in which the monk “meets himself midway,” and the troublesome ambiguity of the concept, “space” (“physical space,” “conceptual space,” or “mathematical space”) is avoided – along with the question of whether the monk has to step aside to let himself pass. 
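The graphical argument can also be restated numerically: whatever the monk's pace, the ascending profile starts at the bottom and ends at the top while the descending profile does the reverse, so the two curves must cross.  The pace functions in the sketch below are arbitrary monotone examples of my own.

```python
# Numerical sketch of the graph solution to the monk puzzle.  Time runs from
# 0 (dawn) to 1 (sunset); elevation runs from 0 (bottom) to 1 (summit).

def ascent(t):
    return t ** 1.7            # slow start, faster later (arbitrary profile)

def descent(t):
    return 1.0 - t ** 0.6      # fast start, slower later (arbitrary profile)

# The difference ascent(t) - descent(t) is negative at dawn and positive at
# sunset, so it must change sign somewhere: scan for the crossing.
t, step = 0.0, 1e-4
while ascent(t) < descent(t):
    t += step
print(f"The two journeys coincide about {t:.3f} of the way through the day, "
      f"at {ascent(t):.3f} of the total climb.")
```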

The same result can be achieved within a narrative frame by adding a qualifying marker to part of the narrative.  Consider,

 (2)  “When I toured the battlefield at Gettysburg I was haunted by all the soldiers who died there.”

In (2) the hearer easily distinguishes between the speaker’s visit to Gettysburg, represented as real, and the “ghosts of dead soldiers,” represented as metaphorical, within a single narrative; there is no apparent reason why distinguishing between factual and fanciful requires either speaker or listener to construct a separate “space.”  Similarly, on the way back down the mountain, we can readily imagine that at some point the monk might remark to himself,

(3)  “At this hour a few days ago, I was passing this very same rock.”

In (3) both speaker and hearer easily distinguish between the monk’s current location, represented as “present” and the monk’s prior location, represented as “remembered.”  All that is needed is a connection to the monk’s memory of the journey up the mountain, with a marker to distinguish present event from remembered event.  (An actual example of such a marked connection between two journeys is provided by Krakauer [1998], when he observes, at one point in his flight to Nepal, that the airplane is actually cruising at an altitude below that of the top of Mt. Everest.)  To bring these two solutions together, we can also imagine the monk, as he journeys back toward his cell, recalling the time that he passed each landmark along the trail on the way up the mountain:  “Four p.m., three-thirty p.m., three p.m.,” and so on, until he utters a time that matches the readout of his digital watch. 

It is not clear what is meant by a “blended space,” as distinct from merely adding or strengthening links between two neuronal networks (or between two parts of the same network), or what construction of a third, independent network would add to the comprehension process.  Nor is it clear what is meant by a “generic space.”  At first glance, it seems clear enough in the “monk climbing a mountain” example, but as I pointed out in the discussion of metaphoric entailments, the apparent “obviousness” of the need for a “generic space” is at least in part a result of confounding the metaphor, “mental space” with the literal space occupied by the monk on the mountain.  Compare this act of conceptual integration with the act that occurs when an object is described:  “The gun is black” connects the perceptual concept, “black,” with the object concept, “gun.”  There is no reason to suppose that the concepts, “black” and “gun,” need to have any elements in common as a basis for integrating them.  Nor is there any reason to think that connecting such elements leads to the creation of a new representation that is totally independent of the discrete concepts (a blended space).  Studies of brain activation during language processing show that processing a passage in which information from multiple sensory modalities is integrated entails activation of separate areas of the brain associated with each modality (Deacon, 1997).  To be sure, it is as yet unknown how the information from these separate areas is bound together (Phillips, 1997), but there is no reason to think that such binding requires the replication of conceptual structure from each in a novel structure.  Given the ability of most humans to spin out fanciful narratives, and to construct complex logical arguments, the creation of an entirely new “blended space” for each act of conceptual integration would multiply conceptual representations in the brain to the point that memory capacity would quickly be exhausted. 

            Margaret Thatcher for President Revisited.  According to Fauconnier and Turner’s analysis, a blended space is required to process the counter-factual elements of the scenario implied by the quip about Margaret Thatcher – Thatcher is not a U.S. citizen, hence could not legally be elected President.  But it is not evident that the counter-factual elements need enter into the comprehension process at all.  It is simpler and more straightforward to view this as a case of metonymy, in which “Margaret Thatcher” stands metonymically for a set of qualities associated with the person of that name (cf. Glucksberg & McGlone [1999], “Cambodia has become Vietnam’s Vietnam”). 

            Consider how the conversation among a group of politically conservative activists might go, should one of them draw attention to the counter-factual nature of the quip:

(4)        “What this country needs is a Margaret Thatcher – but she could never be elected here because the unions can’t stand her.”

(5)        “But Margaret Thatcher could never be elected in the United States anyway, because she isn’t a U.S. citizen.”

Sentence (5) would be regarded as either deliberate sabotage of the conversation or evidence of political naivete (Grice, 1975; Sperber & Wilson, 1986).  Similarly, in

(6)        “I’m going to write in Homer Simpson for president.”  

(7)        “But a cartoon character can’t be elected president!” 

Sentence (7) would be regarded as evidence either of a desire to sabotage the conversation, or of a total lack of humor. 

On the face of it, (4) simply posits that certain characteristics associated with Margaret Thatcher would fit well with the role requirements of President of the United States.  Likewise, (6) invites the hearer to take certain characteristics associated with the (fictional) person of Homer Simpson and associate them with the role of President of the United States.  Consider the “Pat Paulsen for President” campaign on the old Smothers Brothers TV program, beginning in 1968.  A native-born U.S. citizen, Paulsen met the constitutional requirements for the office, so there was nothing counterfactual about the scenario.  Yet Paulsen’s actual declared candidacy was no more serious than the imagined candidacy of Margaret Thatcher.  Neither proposal was intended to be taken seriously; both were intended as commentary on the state of U.S. politics.  Although mention of a familiar concept (like Margaret Thatcher or Pat Paulsen) will ordinarily lead to increased activation of multiple links to other information (such as Thatcher’s nationality and her famous handbag, or Paulsen’s droll political commentaries), all but the relevant links will be suppressed by the contextual information in short-term memory before they rise to consciousness (Deacon, 1997; Gernsbacher et al., 2001; Kintsch, 1998).  The constitutional barriers to Thatcher’s election and the practical barriers to Paulsen’s election are unlikely to be processed unless they are relevant to the conversation.  There is no need to posit the existence of four distinct representations or “mental spaces.”  

People are capable of entertaining elaborately counterfactual scenarios, and sometimes the counterfactual nature of the scenario is part of the fun – as with the candidacy of Pat Paulsen or Homer Simpson.  Even for well-elaborated imaginative scenarios, however, all that is needed is a process in which relevant knowledge is activated in short-term memory, irrelevant connections suppressed, and new connections are made or strengthened among relevant aspects of simultaneously-activated knowledge networks.  These new connections then remain active in working memory for a time, and influence the way subsequent information is processed (Deacon, 1997; Gernsbacher et al., 2001; Kintsch, 1998). 
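This activation-and-suppression account can also be rendered in a few lines of code; the features, activation levels, and relevance sets below are invented for illustration, and the simple relevance test is a crude stand-in for the graded suppression described by Gernsbacher et al. (2001).

```python
# Toy sketch of enhancement and suppression: every association of "Margaret
# Thatcher" receives some initial activation, but only associations that
# overlap the conversational context survive suppression.

associations = {                      # initial activation of linked knowledge
    "tough leadership": 0.9,
    "fought the unions": 0.8,
    "conservative policies": 0.8,
    "British nationality": 0.6,
    "famous handbag": 0.5,
}
context = {"US politics", "unions", "conservative", "electability"}
relevance = {                         # which contextual topics each feature touches
    "tough leadership": {"US politics", "electability"},
    "fought the unions": {"unions"},
    "conservative policies": {"conservative"},
    "British nationality": set(),     # constitutionally decisive, but beside
    "famous handbag": set(),          # the point of the quip
}

surviving = {feature: activation for feature, activation in associations.items()
             if relevance[feature] & context}
print(surviving)   # nationality and handbag never "rise to consciousness"
```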

Example:  Digging Your Own Grave.  Another useful example is the familiar idiom, “You are digging your own grave.”  Fauconnier and Turner (2002) claim that this metaphorical expression requires a more complex interpretation (a “double scope network”), because digging a grave does not cause one’s death, and in any event one’s grave is ordinarily dug by someone else.  They claim that the metaphor combines the “causal, intentional, and internal event structure” of an “unwitting failure” scenario with “the concrete structure of graves, digging, and burial,” from a “digging the grave” scenario (pp. 132-133).  But they may simply have misconstrued the metaphor.

“Digging your own grave” is related to other familiar expressions, including “digging yourself into a hole,” “digging it deeper and deeper,” “getting in over your head,” and “getting in too deep.”  While it is true that people rarely knowingly dig their own graves, with the rare exception of prisoners in a totalitarian regime (Fauconnier & Turner, 2002, p. 132), and that the mere act of having a grave ready does not cause death, it is also true that a hole dug for another purpose can spectacularly – and unintentionally – become the digger’s grave, as in the case of mine and tunnel cave-ins.  Such events happen all too frequently and, when they happen, they remain salient in the news for days or even weeks, and sometimes become memorialized in music or folklore (Dickens, 1972; Seeger, 1958).  The example of an offspring being warned about risky financial ventures (Fauconnier & Turner, 2002, pp. 131-133) seems directly related to this sort of disaster.  Through his risky investments, the offspring is metaphorically “digging a hole for himself,” which, although intended to become a “gold mine,” may instead prove to be his “grave.”  Closely related metaphors include “his plans fell down around him,” “he was buried under a mountain of debt,” and the poignant title of Friedman’s (1992) biography of Janis Joplin, “Buried Alive.” 

Digging in pursuit of wealth is a commonplace metaphor in our society, and death or near death by cave-ins is sufficiently frequent and sufficiently notorious as to be readily accessible in the long-term memory of most people.  Somewhat less spectacularly, it is relatively easy to dig a hole sufficiently deep that it is difficult to climb out of without external assistance.  Comprehension of “digging yourself into a hole” or “digging your own grave” is easily explained in terms of contextual activation and suppression of common knowledge by elements present in short-term working memory.  Much of this common knowledge has become more or less lexicalized through repeated reference:  “You’re digging yourself into a hole” may or may not evoke a visual image of someone standing in a pit and deepening it by throwing even more dirt out of the bottom, and “stuck in a rut” may or may not evoke a visual image of a person driving a vehicle down a muddy and deeply rutted road. 

A Different Approach to Conceptual Integration.

            Analysis of the “digging” metaphors suggests a different approach to conceptual integration, at least with respect to figurative language.  Digging in the ground in search of wealth, with its attendant risks of cave-ins and other disasters, has become closely associated with the immoderate pursuit of wealth as well as with self-generated risk.  More mundanely, people also, at one time, routinely dug wells, basements, and root cellars – and sometimes experienced difficulty getting out of a hole they had dug.  Thus, digging became associated in memory with abstract concepts including the search for wealth, danger from unexpected events, and the irony of committing actions (digging) that led to later difficulties (requiring help to climb out of a deep hole).  These scenarios were sufficiently common, either in direct experience or vicariously through newspapers, entertainment, and other media, and they aroused sufficiently common emotions and expectations, that they provided a ready expression for more abstract ideas and experiences that arouse similar emotions and expectations.  

            The vivid images of digging (a root cellar or a coal mine) become connected with the expectations, motor actions, perceptions, and emotions associated with, respectively, ironic self-entrapment and sudden, catastrophic death.  This connection, strengthened through repeated exposure, becomes readily available to express the humiliation of speaking with excessive candor or needing help to overcome a setback (“You’re digging yourself into a hole”) or the expectation of catastrophic loss associated with a risky venture (“You’re digging your own grave.”)  As the metaphorical associations are used, the direct neurological connections between the semantic expressions, the social situations to which they apply, and the expectations and emotions they evoke become increasingly strong, until the expressions become lexicalized, and the vivid imagery that lent power to these metaphors in the first place may fade away.  Often, the images that underlie metaphorical idioms become disassociated because the experience on which they are based becomes uncommon.  It is unlikely that many people in modern society have driven a vehicle down a muddy and deeply rutted road, making it all but impossible to turn the vehicle:  For many people, “in a rut” may have entirely lost its metaphorical grounding, and thus become open to alternative and even opposite interpretations, in a way similar to the metaphorical idioms studied by Keysar and Bly (1999). 

            Similarly, “digging your own grave” may have become largely lexicalized, to the point that people who use this idiom may not think of, or may not even be aware of, the connection to hazardous occupations such as mining, well-digging, and tunnel construction.  There is an entire set of idioms associated with “digging” and related metaphors, and it seems likely that the metaphors in this group, like some of the conceptual metaphors discussed by Lakoff and Johnson (1980), are at least loosely associated with each other (Ritchie, 2003).  To the extent that these expressions have become lexicalized and have lost their grounding, the association is likely to be rather weak, but the remaining expectations, emotions, perceptions, etc. that are associated or linked to concepts such as digging, hole, and grave preserve the power of these expressions to evoke strong images and emotional responses.   That is not to say that metaphorical idioms necessarily implicate singular underlying conceptual metaphors:  As Vervaeke and Kennedy (1996) have pointed out, almost any metaphorical expression supports a wide variety of interpretations.  For example, “the grass is always greener on the other side of the fence” can be interpreted in terms of cattle in a fenced pasture or suburbanites comparing their neighbor’s lawn to their own (Ritchie, 2003).  Even “stuck in a rut” might conceivably be interpreted in terms of the reproductive biology of elk and other big game animals, which are often unable even to eat during mating season. 

            Interpretation of metaphors and idioms, like the other examples cited by Fauconnier and Turner (2002), certainly involves integrating concepts in some way, and Fauconnier and Turner provide a wealth of useful insights about this process.  Moreover, although I have argued that many of their examples can be analyzed more straightforwardly, it is nonetheless true that humans are capable of spinning elaborate counterfactual tales, such as the scenario of a monk “meeting himself” on a trip down a mountain, a modern yacht “racing with the ghost of a 19th century clipper” or a philosophy professor “engaging in an argument with Kant.”  My father has been dead for over fifteen years, but I can easily imagine a conversation with him about Iraq or about the direction my career has taken since his death.  All of these feats of imagination require the integration of complex narratives and scenarios in a way that defies everyday logic, and much of Fauconnier and Turner’s account of conceptual integration is helpful in seeing how we accomplish them.  However, the “space” and “blending” metaphors introduce entailments that are in at least some cases misleading, and that needlessly complicate the model.  Metaphors based on “networks” or “webs” might also lead to problematic entailments, but they would at least require fewer unsupported assumptions about the actual functioning of synaptic connections in the brain. 

Concluding Remarks

Conceptual Integration Theory (Fauconnier & Turner, 1998; 2002) provides a potentially powerful model for explaining a wide range of cognitive processes that build connections among different sorts of ideas – such as a monk climbing up a mountain, the same monk descending a mountain, and two lines on a graph.  However, the authors’ reliance on the metaphorical language of “space” and “blending” obscures as much of the underlying logic of the process as it illuminates, and leads to entailments that are ambiguous and even contradictory, such as the entailment that conceptual integration requires the presence of a “generic space” independent of the two inputs and the creation of a separate “blended space.”  For many of the examples analyzed by Fauconnier and Turner (2002), a simpler and more straightforward analysis seems sufficient.  No independent conceptual structure (“blended space”) is needed.  Something like a “generic space” may play a role in some instances, but sometimes it involves elements common to the inputs (as it does in Fauconnier and Turner’s solution to the monk puzzle), and at other times it involves other connecting principles.  For example, in the graphic solution to the monk puzzle the monk’s journeys on the mountain must be mapped onto two lines on a graph.  The connecting principle is better described as a learned schema or convention for spatial representation of sequential events, and is not well captured by the metaphor of a “generic space.” 

The metaphorical language used by Fauconnier and Turner (1998; 2002) may help to facilitate initial understanding of the theory, but it may also lead the novice astray.  What is necessary is to recognize the limitations and pitfalls of such metaphorical language, and to elaborate the theory without the irrelevant and even contradictory entailments of the metaphors.  Reconceptualized without the “space” and “blending” metaphors and their entailments, Conceptual Integration Theory promises to contribute to major advances in our understanding of language processing, and of metaphor in particular.  A formulation of the theory in terms of neural connections or other physiologically realizable mechanisms would help provide a basis for specifying falsifiable hypotheses, and for linking the model to the work of other cognitive theorists and to research on neurological processes in the human brain. 

            The Hazards of Evocative Metaphors.  A similar criticism can be leveled against the metaphorical entailments of Kintsch’s (1998) connectionist model of language learning and processing.  It appears from a close reading of the text that Kintsch originally introduced a vector representation of concept meanings as a way of metaphorically representing the strength of synaptic connections among neurons or neuron groups in computable form.  However, in later sections of the same book, and in other writings (Kintsch, 2000; Landauer & Dumais, 1997), Kintsch and his colleagues seem to reify the vector representation by treating it as a more or less literal model of word meanings. 

Hutchins (1995) has sharply criticized the entire “computer” metaphor for cognition, arguing that it has led cognitive scientists to focus on an artificially constrained model of information processing that disregards the crucial contributions of social and cultural structure to “cognition in the wild.”  We are still a long way from understanding how complex concepts are represented in human memory, and consequently from understanding how complex concepts are combined and applied in processing discourse, figurative or otherwise.  Until we have detailed knowledge of these processes at the neuronal level, we will almost certainly have to rely on evocative metaphors to help us understand and reason about cognitive processes.  Rigorous criticism of our own metaphors will be required if we are not to become enmeshed in their unintended but seductive entailments. 


 Bibliography

            Aunger, R. (2000).  Conclusion.  In R. Aunger, ed., Darwinizing culture:  The status of memetics as a science (pp. 205-232).  Oxford, England:  Oxford University Press. 

            Blackmore, S.  (1999).  The meme machine.  Oxford, England:  Oxford University Press. 

            Calvin, W. H. (1996).  The cerebral code:  Thinking a thought in the mosaics of the mind.  Cambridge, MA:  MIT Press. 

Carroll, L. (1923).  Alice in Wonderland.  New York, NY:  Holt, Rinehart, and Winston.

Clark, H. H. (1996).  Using language.  Cambridge, England:  Cambridge University Press.

            Coulson, S., and Matlock, T. (2001).  Metaphor and the space structuring model.  Metaphor and Symbol, 16(3&4), 295-316.

            Dawkins, R. (1993).  Viruses of the mind.  In B. Dahlbom, ed., Dennett and his critics:  Demystifying mind (pp. 13-27).  Oxford, England:  Blackwell. 

            Deacon, T. W. (1997).  The symbolic species:  The co-evolution of language and the brain.  New York, NY:  W. W. Norton. 

            Dickens, H.  (1972).  Disaster at the Mannington Mine.  Boston, MA:  Rounder.

            Fauconnier, G. (1994).  Mental spaces:  Aspects of meaning construction in natural language.  Cambridge, England:  Cambridge University Press.

            Fauconnier, G., and Turner, M. (1998).  Conceptual integration networks.  Cognitive Science, 22(2), 133-187. 

            Fauconnier, G., and Turner, M. (2002).  The way we think:  Conceptual blending and the mind’s hidden complexities.  New York, NY:  Basic Books. 

            Fotheringhame, D. K., and Young, M. P. (1997).  Neural coding schemas for sensory representation:  Theoretical proposals and empirical evidence.  In Rugg, M. D., Ed. Cognitive Neuroscience, pp. 47-76.  Cambridge, MA:  MIT Press.

            Friedman, M.  (1992).  Buried alive:  The biography of Janis Joplin.  Boston, MA:  Harmony Books. 

            Gernsbacher, M. A., Keysar, B., Robertson, R. W., and Werner, N. K. (2001).  The role of suppression and enhancement in understanding metaphors.  Journal of Memory and Language, 45, 433-450.    

            Gibbs, R. W., Jr. (1999).  Propositions in mind:  A paradigm in practice.  APA Review of Books, 44, 310-312.

            Gibbs, R. W., Jr. (2000).  Making good psychology out of blending theory.  Cognitive Linguistics, 11, 347-358. 

            Gibbs, R. W., Jr. (2001).  Evaluating contemporary models of figurative language understanding.  Metaphor and Symbol, 16(3&4), 317-333. 

            Glucksberg, S., and McGlone, M. S. (1999).  When love is not a journey:  What metaphors mean.  Journal of Pragmatics, 31, 1541-1558.

            Grady, J.  (2000).  Cognitive mechanisms of conceptual integration.  Cognitive Linguistics, 11, 335-345. 

            Grice, H. P. (1975).  Logic and conversation. In P. Cole and J.L. Morgan, (Eds.), Syntax and Semantics, (vol. 3):  Speech Acts (pp. 41-58). New York: Academic Press.

            Harder, P. (2003).  Mental spaces:  Exactly when do we need them?  Cognitive Linguistics, 14, 91-96. 

            Hutchins, E. (1995).  Cognition in the wild.  Cambridge, MA:  MIT Press. 

            Keysar, B., and Bly, B. M. (1999).  Swimming against the current:  Do idioms reflect conceptual structure?  Journal of Pragmatics, 31, 1559-1578. 

            Kintsch, W. (1998).  Comprehension:  A paradigm for cognition.  Cambridge, England:  Cambridge University Press. 

            Kintsch, W. (2000).  Metaphor comprehension:  A computational theory.  Psychonomic Bulletin & Review, 7, 257-266.

            Kintsch, W., and Bowles, A. R. (2002).  Metaphor comprehension:  What makes a metaphor difficult to understand?  Metaphor and Symbol, 17, 249-262.

            Krakauer, J. (1998).  Into thin air:  A personal account of the Mount Everest Disaster.  New York, NY:  Villard.

Kuper, A. (2000).  If memes are the answer, what is the question?  In R. Aunger, ed., Darwinizing culture:  The status of memetics as a science (pp. 176-188).  Oxford, England:  Oxford University Press. 

Lakoff, G., and Johnson, M. (1980).  Metaphors we live by.  Chicago, IL:  University of Chicago Press.

            Lakoff, G., and Sweetser, E. (1994).  Foreword to Fauconnier, G., Mental spaces:  Aspects of meaning construction in natural language.  Cambridge, England:  Cambridge University Press.

            Landauer, T. K., and Dumais, S. T. (1997).  A solution to Plato’s problem:  The latent semantic analysis theory of acquisition, induction, and representation of knowledge.  Psychological Review, 104, 211-240. 

            Petty, R. E., and Cacioppo, J. T. (1981).  Attitudes and persuasion:  Classic and contemporary approaches.  Dubuque, IA:  W. C. Brown.

            Phillips, W. A. (1997).  Theories of cortical computation.  In Rugg, M. D., Ed. Cognitive Neuroscience, pp. 11-46.  Cambridge, MA:  MIT Press.

            Reddy, M. J. (1993).  The conduit metaphor:  A case of frame conflict in our language about language.  In Ortony, A. (ed.), Metaphor and Thought, 2nd Ed, pp. 164-201.  Cambridge, England:  Cambridge University Press. 

            Ritchie, L. D. (2003).  “ARGUMENT IS WAR” – Or is it a game of chess?  Multiple meanings in the analysis of implicit metaphors.  Metaphor and Symbol, 18, 125-146.

            Seeger, P.  (1958).  Springhill Mine disaster.  Bethlehem, PA:  Sing Out Music.

Sperber, D.  (2000).  An objection to the memetic approach to culture.  In R. Aunger, ed., Darwinizing culture:  The status of memetics as a science (pp. 163-174).  Oxford, England:  Oxford University Press. 

            Sperber, D., and Wilson, D. (1986).  Relevance:  Communication and cognition.  Cambridge, MA:  Harvard University Press. 

            Veale, T., and O’Donoghue, D. (2000).  Computation and blending.  Cognitive Linguistics, 11, 253-281. 

            Vervaeke, J. and Kennedy, J. M. (1996).  Metaphors in language and thought:  Falsification and multiple meanings.  Metaphor and Symbolic Activity, 11(4), 273-284.

            Winograd, T., and Flores, F. (1986).  Understanding computers and cognition:  A new foundation for design.  Norwood, NJ:  Ablex.



Author’s Note

            I wish to thank Ray Gibbs, Eric Jensen, and an anonymous reviewer for their insightful and helpful comments on earlier drafts of this essay.  All remaining mistakes and omissions are, of course, entirely my own responsibility.