Unsurprisingly, Conceptual Metaphor Theory (CMT) has significant implications for the computational modeling of metaphor in language [Lakoff, 1994, Lakoff and Johnson, 1980]. CMT-inspired computational models of metaphor range from symbolic approaches that focus on high-level data structures and knowledge representations (e.g., Martin [1990]), to cognitive and neurological approaches that focus on lower-level arrangements of neuron-like activation elements and their interconnections (e.g., Feldman [2006]). But regardless of which view of CMT is implemented, it is not enough for an NLP system to go beyond mere lexical semantics and embrace rich conceptual representations of target and source domains; it must embrace the right kinds of conceptual representations. Most theories of metaphor that lend themselves to a computational realization are ultimately agnostic about conceptual structure. Although they stipulate that such structures are necessary, and provide the general form of these structures—whether the set-theoretic genus/species representations of Aristotle, the graph-theoretic representations of Gentner’s analogical structure-mapping approach, or the textured category representations of Glucksberg’s category-inclusion model—they make no ontological commitments to the representations of particular concepts, such as CONTAINER, PATH, ORIENTATION, FORCE, etc. CMT argues that, for any agent to make and understand metaphors like a human, it must have the same conceptual biases as a human, and anchor its model of the world in the same foundational schemas. However, while these schematic concepts have an embodied basis in human cognition, this is not to suggest that an NLP system must also be embodied in the same way. Developmental roboticists may find some value in embodying a computational realization of CMT in an anthropomorphic robot, but this is neither necessary nor useful for a metaphor-capable NLP system. Such schemas are the conceptual products of physical embodiment, and it is enough that our systems also contain representations of these schemas that afford the same kinds of conceptual inferences.
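To make the last point concrete, consider how a system might encode a schema such as CONTAINER so that literal and metaphorical instantiations support the very same inferences. The Python sketch below is one minimal way of doing this; every name in it (ContainerSchema, enter, inferences, and the example entities) is an illustrative assumption of our own, not a construct taken from any of the systems cited in this chapter.

```python
# A minimal sketch of an image schema as an inference-affording data
# structure. All names here are illustrative assumptions, not drawn
# from MIDAS or any other system discussed in this chapter.

from dataclasses import dataclass, field


@dataclass
class ContainerSchema:
    """CONTAINER: a bounded region with an interior, an exterior, and a boundary."""
    name: str
    interior: set = field(default_factory=set)

    def enter(self, entity: str) -> None:
        self.interior.add(entity)        # crossing the boundary: outside -> inside

    def exit(self, entity: str) -> None:
        self.interior.discard(entity)    # crossing the boundary: inside -> outside

    def contains(self, entity: str) -> bool:
        return entity in self.interior

    def inferences(self, entity: str) -> list:
        """Entailments of containment, shared by literal and figurative uses."""
        if not self.contains(entity):
            return []
        return [
            f"{entity} is inside {self.name}",
            f"{entity} is bounded by {self.name}",
            f"{entity} must cross a boundary to leave {self.name}",
        ]


# The same schema grounds a literal enclosure and a metaphorical state:
room = ContainerSchema("the room")
room.enter("Alice")
print(room.inferences("Alice"))

trouble = ContainerSchema("trouble")     # "Alice is in trouble": a state as a container
trouble.enter("Alice")
print(trouble.inferences("Alice"))
```

The point of the sketch is that nothing in the schema itself distinguishes the physical room from the abstract state; the same conceptual affordances fall out of both instantiations, which is precisely what CMT demands of a foundational schema.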
For example, the MIDAS system of Martin [1990] employs the basic schematic structures of CMT to anchor and guide its interpretation of conventional metaphors and variations thereof. A single conventional metaphor may be instantiated in a wide range of domain-specific variations, as Martin demonstrates in the context of a dialogue system for offering Unix advice. This domain offers ample opportunity for the use of physically motivated conceptual schemas to describe otherwise intangible phenomena, since, e.g., computer users conventionally conceive of software processes as living things (prompting the metaphor “How do I kill a runaway process?”) and software environments as containers (prompting the metaphor “How do I get out of Emacs?”). In another CMT-inspired approach, Veale and Keane [1992] employ the schemas of ORIENTATION and CONTAINER as semantic primitives in a lexico-conceptual representation they call a Conceptual Scaffolding. In contrast to other systems of semantic primitives, which conventionally employ such primitives as final, irreducible components of utterance meaning, these operators are intermediate placeholders that are designed to be replaced by domain-specific constructs during subsequent processes of interpretation and inference. Scaffolding operators are always replaced in this way, whether the speaker intends an utterance to be read literally or figuratively. Literal and metaphoric uses of image-schematic notions of containment (such as “falling into a manhole” vs. “falling into debt”) and orientation (such as “sliding down the hill” vs. “sliding down the polls”) will thus use precisely the same operators in precisely the same way in their respective scaffolding structures. The versatility of the CMT schemas allows the scaffolding construction process to be agnostic as to the literal or figurative status of an utterance, and pushes a decision on this distinction into the realm of inference and common-sense reasoning, where Black [1962] and Davidson [1978] each say it belongs. Alternatively, one can use CMT’s schematic structures as a cognitively grounded vocabulary for tagging linguistic metaphors in large corpora with the appropriate conceptual annotations. A large corpus of metaphors annotated in this fashion may allow a computational system to learn to recognize and categorize unseen variations on these metaphors in free text, or perhaps even to generate novel variations of its own.
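The scaffolding idea lends itself to a simple two-stage sketch: a construction stage builds the same CONTAINER-based operator for literal and figurative utterances alike, and a later interpretation stage, standing in here for the processes of inference and common-sense reasoning just mentioned, replaces the placeholder with a domain-specific construct. The operator name and the toy knowledge base below are illustrative assumptions, not Veale and Keane’s actual representation.

```python
# Stage 1: scaffolding construction is agnostic to the literal/figurative
# distinction; both utterances receive the same CONTAINER-based operator.
def scaffold(trajector: str, landmark: str) -> dict:
    return {"op": "INTO-CONTAINER", "trajector": trajector, "landmark": landmark}


# Stage 2: interpretation replaces the placeholder with a domain-specific
# construct, using (toy) world knowledge to decide on a reading.
PHYSICAL_CONTAINERS = {"manhole", "pit", "river"}
STATE_CONTAINERS = {"debt", "despair", "trouble"}   # states construed as containers


def interpret(s: dict) -> dict:
    if s["landmark"] in PHYSICAL_CONTAINERS:
        return {**s, "reading": "literal", "construct": "physical-enclosure"}
    if s["landmark"] in STATE_CONTAINERS:
        return {**s, "reading": "figurative", "construct": "entry-into-state"}
    return {**s, "reading": "unresolved"}


# Identical scaffolding structures, divergent downstream interpretations:
print(interpret(scaffold("he", "manhole")))   # "falling into a manhole"
print(interpret(scaffold("he", "debt")))      # "falling into debt"
```

What matters in this sketch is the order of operations: the literal/figurative decision is deferred until after a uniform conceptual structure has been built, just as the scaffolding approach prescribes.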
2.7 CONCEPTUAL INTEGRATION AND BLENDING
If figurative phenomena blur the boundaries between very different concepts and domains, we should not be surprised if they also blur the boundaries between themselves. It can be quite a challenge, even for a seasoned researcher, to determine precisely which figurative mechanism is at work in any given example of non-literal language. Metonymy and metaphor are often tightly bound up together, with metonymy providing the subtle glue to seamlessly tie metaphors to their targets. Thus, in the news headlines “Chipotle shares hit by E. Coli scare” and “Nike jumps on share buyback,” the metonymies company → share and share → share_price are necessary for the respective metaphors, HITTING IS DIMINISHING and JUMPING IS INCREASING, to take hold (note also the punning humor of the verb “jump” in the context of Nike, a maker of basketball shoes; Brône and Feyaerts [2005] describe this phenomenon as double grounding). Different kinds of figurative phenomena are not always marked at the linguistic level, and so an implicit comparison, in which the source and target are not explicitly identified with each other in the same utterance, may be taken as a metaphor by some, a simile by others, or an analogy by dissenters to both of these views. Moreover, what seems like a comparison to some may be taken as a categorization by others.
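Before returning to our running example, it is worth sketching how this metonymic glue might operate computationally. In the Python sketch below, a verb’s metaphoric construal only takes hold once a chain of metonymic coercions has delivered a semantic type the metaphor can apply to; the mapping tables and function names are illustrative assumptions, not a published algorithm.

```python
# A minimal sketch of metonymy as the "glue" that prepares a target for a
# metaphor. The mapping tables and names are illustrative assumptions.

METONYMY = {                      # conventional metonymic coercions
    "company": "share",
    "share": "share_price",
}

METAPHOR = {                      # conventional metaphoric construals
    ("hit", "share_price"): "DIMINISHING",     # HITTING IS DIMINISHING
    ("jump", "share_price"): "INCREASING",     # JUMPING IS INCREASING
}


def coerce(semantic_type: str, verb: str) -> str:
    """Follow the metonymy chain until the verb's metaphor can take hold."""
    while (verb, semantic_type) not in METAPHOR and semantic_type in METONYMY:
        semantic_type = METONYMY[semantic_type]   # e.g., company -> share -> share_price
    return METAPHOR.get((verb, semantic_type), "LITERAL?")


# "Chipotle shares hit by E. Coli scare": share -> share_price, then the metaphor applies.
print(coerce("share", "hit"))     # DIMINISHING
# "Nike jumps on share buyback": company -> share -> share_price.
print(coerce("company", "jump"))  # INCREASING
```

With these distinctions in mind, consider again our earlier example from the movie Jurassic Park, and in particular the quote from the park owner, John Hammond: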
Hammond: All major theme parks have delays. When they opened Disneyland in 1956, nothing worked!
It makes sense here to understand Hammond’s observation as an implicit comparison between Disneyland and Jurassic Park, although the latter is never explicitly mentioned. Seemingly nothing is working in Jurassic Park, but nothing worked at Disneyland in 1956 either, yet Disneyland went on to become a monumental financial and cultural success. With this statement, Hammond thus implies comparable future success for his own troubled venture. However, note the generalization that precedes this comparison: “All major theme parks have delays.” Hammond here seems to be establishing a categorization, in which Jurassic Park and Disneyland are to be placed in the same category, namely the category for which Disneyland is prototypical: a major theme park that started life inauspiciously but went on to win the hearts and wallets of America.
If Hammond’s metaphor is simultaneously a comparison and a categorization (and not merely one or the other), then Malcolm’s rejoinder seems to be an entirely different species of figurative beast:
Malcolm: Yeah, but, John, if The Pirates of the Caribbean breaks down, the pirates don’t eat the tourists.
In fact, Malcolm’s conversational gambit is best described as a conceptual blend (see Fauconnier and Turner [2002]). Not only does it operate on the same domains as Hammond’s gambit—Disneyland and Jurassic Park—it also thoroughly