      Although the term “bot” began as a shortened form of “robot,” the connotations of the two terms have diverged in the era of the modern internet. “Bot” is now used mostly to designate software programs, most of which run online and have only a digital presence, while robots are commonly conceived of as possessing a physical presence in the form of hardware – as having some form of physical embodiment. Wired journalist Andrew Leonard writes that bots are “a software version of a mechanical robot” whose “physical manifestation is no more than the flicker of electric current through a silicon computer chip” (Leonard, 1997, pp. 7–24). Today, social bots’ implementation may include a visual presence, such as a profile on Twitter or Facebook, but the core of their functioning lies in the human-designed code that dictates their behavior.

      Many people think that bots emerged only recently, in the wake of the incredibly rapid uptake of smartphones and social media. In fact, although they entered mainstream consciousness relatively recently, bots are nearly as old as computers themselves, with roots going back to the 1960s. It is difficult, however, to trace the history of the bot, because there is no standard, universally accepted definition of what exactly a bot is – indeed, bot designers themselves often disagree on this question. We’ll begin this history with some of the first autonomous programs, called daemons, and with the birth of the world’s most famous chatbot in the mid-1960s.

      A more recognizable bot emerged only three years later. In 1966, another MIT professor, Joseph Weizenbaum, programmed ELIZA – the world’s first (and most famous) chatbot,1 arguably “the most important chatbot dialog system in the history of the field” (Jurafsky & Martin, 2018, p. 425). ELIZA was a conversational computer program with several “scripts.” The most famous of these was the DOCTOR script, under which ELIZA imitated a therapist, conversing with users about their feelings and asking them to talk more about themselves. Using a combination of basic keyword detection, pattern matching,2 and canned responses, the chatbot would respond to users by asking for further information or by strategically changing the subject (Weizenbaum, 1966). The program was relatively simple – a mere 240 lines of code – but the response it elicited from users was profound. Many first-timers believed they were talking to a human on the other end of the terminal (Leonard, 1997, p. 52). Even after users were told that they were talking to a computer program, many simply refused to believe they weren’t talking to a human (Deryugina, 2010). At the first public demonstration of the early internet (the ARPANET) in 1971, people lined up at computer terminals for a chance to talk to ELIZA.
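      To make that mechanism concrete, the sketch below imitates ELIZA’s basic technique in Python: match the user’s input against a small set of patterns and echo part of it back inside a canned response. The rules, names, and replies here are invented for illustration only – Weizenbaum’s original program was written in MAD-SLIP and used a far richer script format.

import random
import re

# A tiny illustration of keyword detection, pattern matching, and canned
# responses. Each rule pairs a regular expression with response templates;
# \1 echoes the text captured by the pattern back to the user.
RULES = [
    (r"i need (.*)", ["Why do you need \\1?", "Would getting \\1 really help you?"]),
    (r"i feel (.*)", ["Tell me more about feeling \\1.", "How often do you feel \\1?"]),
    (r"my (mother|father)(.*)", ["Tell me more about your \\1."]),
    (r"(.*)", ["Please go on.", "Can you say more about that?"]),  # fallback: change the subject
]

def respond(user_input: str) -> str:
    """Return a canned response whose pattern matches the user's input."""
    text = user_input.lower().strip()
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            return match.expand(random.choice(responses))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I need a vacation"))    # e.g. "Why do you need a vacation?"
    print(respond("I feel anxious today"))

Even a handful of rules like these can produce the illusion of attentiveness that startled ELIZA’s first users.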

      ELIZA captured people’s minds and imaginations. When Weizenbaum first tested ELIZA on his secretary, she famously asked him to leave the room so that she could have a more private conversation with the program (Hall, 2019). Weizenbaum, who had originally designed the bot to show how superficial human–computer interactions were, was dismayed by this paradoxical effect.

      This response was noteworthy enough to be dubbed the “ELIZA effect,” the tendency of humans to ascribe emotions or humanity to mechanical or electronic agents with which they interact (Hofstadter, 1995, p. 157).

      Other early bots did not have the glamor of ELIZA. For most of the 1970s and 1980s, bots largely played mundane but critical infrastructural roles in the first online environments. Bots are often cast in this “infrastructural” role,3 serving as the connective tissue in human–computer interaction (HCI). In these roles, bots often serve as invisible intermediaries between humans and computers that make everyday tasks easier. They do the boring stuff – keeping background processes running or chatrooms open – so we don’t have to. They are also used to make sense of unordered, unmapped, or decentralized networks. As bots move through unmapped networks, taking notes along the way, they build a map (and therefore an understanding) of ever-evolving networks like the internet.
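      As a rough illustration of this mapping role, here is a minimal Python sketch of a bot that starts at one node of a network, follows links outward, and takes notes until it has visited everything it can reach. The toy link graph and node names are invented stand-ins for a real, evolving network such as the early web or Usenet.

from collections import deque

# A toy link graph standing in for an unmapped network.
TOY_NETWORK = {
    "node-a": ["node-b", "node-c"],
    "node-b": ["node-d"],
    "node-c": ["node-d"],
    "node-d": [],
}

def map_network(start: str, links: dict[str, list[str]]) -> dict[str, list[str]]:
    """Breadth-first traversal: visit every reachable node once and record its links."""
    notes: dict[str, list[str]] = {}      # the bot's "map" of what it has seen
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        if node in notes:
            continue                      # already visited
        notes[node] = links.get(node, [])
        frontier.extend(notes[node])      # follow every outgoing link
    return notes

if __name__ == "__main__":
    print(map_network("node-a", TOY_NETWORK))

Real crawlers add politeness delays, retries, and persistent storage, but the core loop – fetch, record, follow links – is the same.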

      This early Usenet environment led to the creation of some of the first online bots: automated programs that helped maintain and moderate Usenet. As Andrew Leonard describes, “Usenet’s first proto-bots were maintenance tools necessary to keep Usenet running smoothly. They were cyborg extensions for human administrators” (Leonard, 1997, p. 157). Especially in the early days, bots primarily played two roles: one was posting content, the other was removing it (or “canceling” it, as the practice was known on Usenet) (Leonard, 1996). Indeed, Usenet’s “cancelbots” were arguably the first political bots. Cancelbots built on a Usenet feature that enabled users to delete their own posts: if a user decided they wanted to retract something they had posted, they could flag the post with a cancelbot, a simple program that would send a message to all Usenet servers instructing them to remove the content. Richard Depew wrote the first Usenet cancelbot, known as ARMM (“Automated Retroactive Minimal Moderation”) (Leonard, 1997, p. 161).
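      The sketch below gives a rough sense of how such a cancellation request could be expressed. It leans on the long-standing Usenet convention that a cancel request is itself just another article whose “Control: cancel <message-id>” header names the post to be removed; the addresses, newsgroup, and message ID below are placeholders, and nothing here reproduces ARMM’s actual code.

from email.message import EmailMessage

def build_cancel_article(original_message_id: str, poster: str, newsgroup: str) -> str:
    """Construct a minimal Usenet-style cancel article.

    A cancel request is an ordinary article whose "Control: cancel" header
    names the Message-ID of the post to be removed; servers that honor the
    request delete the original. All header values here are placeholders.
    """
    msg = EmailMessage()
    msg["From"] = poster
    msg["Newsgroups"] = newsgroup
    msg["Subject"] = f"cmsg cancel {original_message_id}"
    msg["Control"] = f"cancel {original_message_id}"
    msg.set_content("This article was cancelled by its author.")
    return msg.as_string()

if __name__ == "__main__":
    print(build_cancel_article("<example123@news.example.com>",
                               "poster@example.com",
                               "news.admin.misc"))

The same header-based mechanism that let authors retract their own posts is what programs like ARMM automated.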