fitting time has been found and a calendar reminder has been added to your calendar.


      The Problem of Music Playback

      Musical notation permitted the “recording” of music onto vellum, parchment, and paper, which could be played back with the manual tool of an instrument like a guitar or piano, the human doing all the decoding work of turning those dots and lines into sound. The invention of powered tools like gramophones and record players let anyone hear a recording of a particular performance; you just had to collect the music and switch out the discs yourself. CD players were also powered tools for playback, but being electronic, they added metrical data to the display, first track numbers and later artists and song titles. (There are very few rules to how the user plays music, so you wouldn’t expect to see many corrective technologies, though equalizer controls do involve thresholds that can be managed via a spectrum analyzer.) Radio stations have long had disc jockeys acting as a service for selecting and broadcasting music, but more recently, Pandora and Spotify have become popular services with agentive aspects, letting individual listeners provide the system with a song or two they like and thereafter just listen.


      The Problem of Search

      Search might seem like an odd example, since at first it seems like a problem of pure information, but then you realize how much physical work used to be involved in finding information, even in a well-indexed system. Card catalogs were an early manual technology for providing search-like access to information spread across the stacks of a library. Microfiche was a powered system for reducing the effort of looking through periodicals. Modern automated retrieval systems are powered tools that even bring a particular book to you on request. Metrical tools like tables of contents and indexes help you jump to particular parts of content.

      However, once information exploded on the internet, Yahoo!, Google, Bing, and their ilk made the task of searching easier, and even helped you with corrective tools when you misspelled something or used poor search terms (“Did you mean . . . ?”). When Google introduced Google Alerts, it offered low-level agents by which users could set up topics of interest and let the information come find them.


      So those are just three examples. I’ll cover many more in the next chapter and throughout the rest of the book. These three are, of course, cherry-picked from the vast history of technology, but they should help illustrate how these lenses offer a useful way to understand how various technologies have worked to reduce effort around particular human problems in categorical ways, and how very recent technologies combine these aspects into agents.

      So between the Nest Thermostat in the prior chapter and the handful just covered, you’ve seen some examples of agentive technology, but rather than relying on inference, let’s get specific about what an agent is and isn’t.

      In the simplest definition, an agent is a piece of narrow artificial intelligence that acts on behalf of its user.

      Looking at each aspect of that definition will help you understand it more fully. First, let’s take the notion of narrow intelligence, and then the notion of acting on behalf of a user.

      The Notion of Narrow Artificial Intelligence

      When most people think of AI, they think of what they see in the movies. Maybe you imagine BB-8 rolling along the sands of Jakku, werping and wooing as it trails Rey. If you know a bit of sci-fi history, you might also have a black-and-white robot in mind, like Gort or Robbie, policing or protecting per their job description. Or maybe you realize that AI doesn’t need to be embodied in robot form, as with Samantha, the disembodied “operating system” OS1 in the movie Her: one minute sorting her user’s inbox, the next falling in love with and then abandoning him. Or if you have an affinity for the darker side of things, you might think of either HAL’s glowing red eye or MU/TH/UR 6000’s cold, screen-green text, each AI assaulting its crew to protect its secret mission.

      These sci-fi AIs (and, in fact, the overwhelming majority of sci-fi AIs) are examples of strong artificial intelligence, which accounts for two of the three broad categories of AI.

      The first is the most advanced category of strong AI, called artificial super intelligence, or ASI. It describes an AI with capabilities advanced far beyond our own, and far beyond what you can even imagine. As a bird’s intelligence is to human intelligence, a human intelligence is to ASI. As the scenario goes, if you program AGIs to evolve or to make better and better copies of themselves, the result is ever-accelerating improvement until they achieve what you can only call a godlike intelligence. Samantha from Her is a good sci-fi example: by the end of the movie, she is accessing and contributing to the total body of human endeavor and having simultaneous conversations and relationships with users and other AIs, all while evolving to such a degree that she and the other AIs ultimately decide to leave humans behind as they sort of self-rapture to something or somewhere incomprehensible to humans.

      The second is artificial general intelligence, or AGI, so called because it displays a general or abstract problem-solving capability similar to a human intelligence. BB-8 and HAL are examples of this. They are artificial, but are fairly human in their capabilities. They’re one of the team. If/when we ever get to this, we’ll be in a categorically different place than agentive tech.

      The third category is “weak” or artificial narrow intelligence, or ANI. This is much more constrained AI, which might be fantastic at, say, searching a massive database of tens of millions of songs for a new one you’re likely to love, but is still unable to play a game of tic-tac-toe. The intelligence these systems display cannot generalize, cannot of its own accord apply what it “knows” to new categories of problems. It’s the AI in the world today, so familiar that you don’t think of it as AI so much as simply smart technology.

      Whether or when we actually get to strong AGI is a matter for computer scientists, but for the purposes of design, it is immaterial. If AGI ever makes it to your Nest Thermostat, it will be making decisions about how best to use its resources to manage its task and communicate with its users, that is, to create its own interface and experience. Designers will not be specifying such systems as much as acting as consultants to the early AGIs on best practices. But until we’ve got AGI around to worry about, we have increasing numbers of examples of products and services built around ANI, and those will need good design to make them humane and useful.

      As you saw in the prior examples, narrow intelligence isn’t a binary quality. Different agents can embody different levels of intelligence. An agent can be said to be more intelligent when it has the following characteristics:

      • Its model of its domain is more reticulated and closer to our own. Anyone who has been plunged into darkness by spending “too much” time in a restroom with a motion-sensing light switch knows that it is less smart than one that could “see” when there is a human there who still needs the light.

      • It successfully monitors more—and more complex—data streams. Drebbel’s device monitored a single variable, but the Nest Thermostat monitors dozens.

      • It can make smart inferences about what given data means and react accordingly. Steady weight gain over the course of a month might mean that a homecare patient’s sedentary choices are increasing their body mass index. But rapid weight gain can mean dangerous swelling in the tissues, a sign of a more serious medical concern.

      •