Figure It Out. Stephen P. Anderson.

Author: Stephen P. Anderson
Publisher: Ingram
ISBN: 9781933820958

      I’m in a room, standing in front of five other people. For the last several minutes, I’ve been trying—unsuccessfully—to explain this new idea. It is a bit of a novel idea, but that shouldn’t be a problem. I’ve got a clear explanation. My explanation uses plain language. I even draw a visual model so people can see what I’m describing. Still, no one “gets” it. Then I say: “It’s kind of like ...” Eyes light up. Heads nod. Now, everyone understands.

      What just happened?

      We’ve all been in this situation, or one like it. Pitching a startup idea. Defending a design. Advocating a particular political position. Sorting out a big-picture concept. Explaining the business model. Drafting technical schemas. Explaining that niche interest we know so much about ...

      By calling to mind an already familiar concept, we make it easier for others to understand what we’re talking about. Sometimes it’s explicit: “It’s like Pinterest for Teachers” (a product pitch) or “Think Pocahontas on an alien planet” (the movie Avatar). Other times it’s more subtle, as with the engineering team that talks about bad decisions building up “technical debt.” And other times it’s allegorical, from the parables told by Jesus to an astrophysicist describing “the Goldilocks Zone” necessary for life on other planets.

      But this runs deeper than the associations we try to evoke in others. We all—whether we’re consciously aware of it or not—make sense of any new information by likening it to some other familiar concept. To understand, we link the unknown to what is already known.

      Douglas Hofstadter, an American professor of cognitive science, writes: “The human ability to make analogies lies at the root of all our concepts ... analogy is the fuel and fire of thinking.”1 These analogies are invaluable, not only for communicating with others, but also for our own understanding. Again, whether we’re aware of it or not, we all think in concepts and patterns. Sure, we can point to the consultant who uses a picture of an iceberg to explain what is seen and (more critically) unseen by the business—the concept is familiar. But it’s more than simple, explicit “A is to B as C is to D” analogies. If we look at research from George Lakoff and Mark Johnson,2 we see that many concepts are so deeply embedded in our language, culture, and thought processes that the underlying associations go unobserved. Consider the spatial associations embedded in phrases like “Cheer up!” or “You seem down in the dumps.” We use this language without pausing to consider why “up is good” and “down is bad.” And yet, if we look at how the unwatered plant droops over, or how our shoulders sag and our posture droops when we’re “upset,” we have clues to a set of associations rooted in biology and widespread throughout our thought processes.

      We can go even deeper and suggest that all thinking is conceptual in nature. Take a word like “jazzercise,” ideas like “Republican” or “Democrat,” or phrases like “The Paris of the Middle East”—we take for granted the layers of concepts and associations that have accumulated, often over many decades, to give meaning to these words; imagine explaining these phrases to someone transported from even just a few centuries ago! Even the way we express a single word can evoke a wildly different set of concepts. Consider some different ways we might utter a simple word like please: Puh-LEASE. (please). Please! Pleeeeeaaze? In each case, it is more than the word that is uttered; we’ve built up a set of prior associations—based on tone of voice—that also contribute to the message we understand.

      Becoming aware of the conceptual systems that govern our own and others’ understanding is a powerful tool in itself. This section is about the variety of ways we might become aware of, trigger, and intentionally use these pre-existing conceptual associations to help ourselves and others understand new information.

      But first, why care? How much of a difference can a simple association really make to understanding and the decisions that follow? To show how being aware of these associations can affect understanding—and decision-making—let’s explore our relationship with technology, and take a critical look at the literal concepts we use to orient ourselves to something that is an abstraction.

      Technology: Person, Place, or Tool?

      As a designer working with technology, one of the fundamental frames I (Stephen) struggle with is how to think about the “things” I help make. Are the digital apps and sites I’ve designed more like:

      • People with whom we interact?

      • Places where we do stuff?

      • Tools that extend our abilities?

      • Something else, altogether?

      Steve Krug, author of the book Don’t Make Me Think, suggests that technology should function like a butler: a person with whom we converse and whom we ask to do things for us. When we say “Let’s check with Google” or “Ask Siri,” we’re thinking of these services like a butler. This “technology as person” frame is the one I (Stephen) opted for in my first book, Seductive Interaction Design, where I asked, “How do we get people to fall in love with our applications?” By looking at first-time user experiences through the lens of dating, I was able to highlight all the opportunities we have to make our software more humane, desirable, and—to be honest—a little less geeky! This technology-as-person association also extends to many other areas, from personal robotic vacuums such as the Neato and Roomba to the sentient, sometimes frightening, AIs portrayed in movies like Iron Man, 2001, or Ex Machina.

      But now consider how we view something like Facebook or even the internet as a whole: our frame shifts to that of a place we visit. As author and consultant Jorge Arango comments: “We ‘go’ online. We meet with our friends ‘in’ Facebook. We visit ‘home’ pages. We log ‘in’ to our bank. If we change our mind, we can always ‘go back.’ These metaphors suggest that we subconsciously think of these experiences spatially.”3

      This frame shifts once more when we turn our attention to mobile devices, which by their physical proximity seem more like personal tools, extending our limited capabilities. We don’t talk with our phone—it’s not a person with whom we converse. We use our phone to talk to others; it’s a device we use to do things, a tool that extends our capabilities. Notebooks let us hold onto thoughts. Robotic arms let us lift more than we could otherwise. Shoes let us run farther. Mobile apps let us do more, and do it better. But even this “mobile device as tool” frame isn’t that straightforward. If we use our phones to visit the places above, don’t they become portals to places in addition to being tools?

      These shifting associations suggest that we as humans don’t have a consistent frame for thinking about technologies. We’re all trying to use tangible terms to make sense of something fundamentally intangible. But person, place, or tool ... something else ... Why should all this matter?

      The Effect of These Different Frames for Technology

      This choice of technology frame certainly shows up in detailed labeling decisions, such as when a product team building software must decide whether to label something “My Stuff” or “Your Stuff”—the best answer depends on this fundamental framing question, as well as on broader brand, experience, and perhaps even legal considerations. If it’s “my stuff,” then this thing is a tool and an extension of myself, like my files in my file folder. If it’s “your stuff,” then there’s an actor or person with whom I interact, someone to whom I hand things over for safekeeping.

      What about hardware products? When the Neato robotic vacuum gets stuck, the error message asks us to “Please remove stuff from my path” or “Help me,” invoking the frame of a subservient cleaning bot that needs help from time to time. Even the sounds these robots make are meant to suggest something juvenile and prone to making errors—all an intentional frame designed to help us be more forgiving of what is still an early-stage technology with plenty of kinks.