History of the Bot
Many people think that bots emerged only recently, in the wake of the incredibly rapid uptake of smartphones and social media. In fact, although they entered mainstream consciousness relatively recently, bots are nearly as old as computers themselves, with roots going back to the 1960s. The history of the bot is difficult to trace, however, because there is no standard, universally accepted definition of what exactly a bot is. Indeed, bot designers themselves often don’t agree on this question. We’ll begin this history with some of the first autonomous programs, called daemons, and with the birth of the world’s most famous chatbot in the mid-1960s.
Early bots – Daemons and ELIZA
Daemons, or background processes that keep computers running and perform vital tasks, were one of the first forms of autonomous computer programs to emerge. In 1963, MIT Professor Fernando Corbato conceived of daemons as a way to save himself and his students time and effort using their shared computer, the IBM 7094. While it is debatable whether these programs count as bots (it depends on how you define bot), their autonomy makes them noteworthy as a precursor to more advanced bots (McKelvey, 2018).
A more recognizable bot emerged only three years later. In 1966, another MIT professor, Joseph Weizenbaum, programmed ELIZA – the world’s first (and most famous) chatbot, arguably “the most important chatbot dialog system in the history of the field” (Jurafsky & Martin, 2018, p. 425). ELIZA was a conversational computer program with several “scripts.” The most famous of these was the DOCTOR script, under which ELIZA imitated a therapist, conversing with users about their feelings and asking them to talk more about themselves. Using a combination of basic keyword detection, pattern matching, and canned responses, the chatbot would respond to users by asking for further information or by strategically changing the subject (Weizenbaum, 1966). The program was relatively simple – a mere 240 lines of code – but the response it elicited from users was profound. Many first-time users believed they were talking to a human on the other end of the terminal (Leonard, 1997, p. 52). Even after they were told that they were talking to a computer program, many simply refused to believe they weren’t conversing with a human (Deryugina, 2010). At the first public demonstration of the early internet (the ARPANET) in 1971, people lined up at computer terminals for a chance to talk to ELIZA.
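To see how little machinery this required, here is a minimal sketch of ELIZA-style keyword matching and templated responses. This is illustrative Python, not Weizenbaum’s original implementation (which was written in MAD-SLIP); the rules and wordings are invented for the example.

```python
import random
import re

# Illustrative ELIZA-style rules: a keyword pattern paired with canned
# response templates. The "{0}" slot is filled with text captured from
# the user, with pronouns flipped so the reflection reads naturally.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), ["Why do you say you are {0}?",
                                        "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.*)", re.I), ["Tell me more about feeling {0}.",
                                          "Do you often feel {0}?"]),
    (re.compile(r"\bmy (.*)", re.I), ["Tell me more about your {0}."]),
]

# Fallbacks used when no keyword matches – the strategic
# subject change Weizenbaum describes.
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I am unhappy about my job"))
# e.g. "Why do you say you are unhappy about your job?"
```

A few dozen rules of this shape, applied in priority order, are enough to sustain the illusion of a listener – which is precisely what made the public’s reaction to so simple a program remarkable.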
ELIZA captured people’s minds and imaginations. When Weizenbaum first tested out ELIZA on his secretary, she famously asked him to leave the room so they could have a more private conversation (Hall, 2019). Weizenbaum, who had originally designed the bot to show how superficial human–computer interactions were, was dismayed by the paradoxical effect.
“I was startled to see how quickly and how very deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphized it,” Weizenbaum wrote years later. “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” (Weizenbaum, 1976, pp. 6–7)
This response was noteworthy enough to be dubbed the “ELIZA effect,” the tendency of humans to ascribe emotions or humanity to mechanical or electronic agents with which they interact (Hofstadter, 1995, p. 157).
Bots and the early internet: Infrastructural roles on Usenet
Other early bots did not have the glamor of ELIZA. For most of the 1970s and 1980s, bots largely played mundane but critical infrastructural roles in the first online environments. Bots are often cast in this “infrastructural” role, serving as the connective tissue of human–computer interaction (HCI). In these roles, bots often serve as invisible intermediaries between humans and computers, making everyday tasks easier. They do the boring stuff – keeping background processes running or chatrooms open – so we don’t have to. They are also used to make sense of unordered, unmapped, or decentralized networks: as bots move through such networks, taking notes along the way, they build a map (and therefore an understanding) of ever-evolving networks like the internet.
The limited, nascent online environment from the late 1970s onward was home to a number of important embryonic bots, which would form the foundation for modern ones. The early internet was mainly accessible to a limited number of academic institutions and government agencies (Ceruzzi, 2012; Isaacson, 2014, pp. 217–261), and it looked very different: it consisted of a limited number of networked computers, which could only send small amounts of data to one another. There were no graphical user interfaces (GUIs) or flashy images. For the most part, data was text-based, sent across the network for the purposes of communication using protocols – the standards and languages that computers use to exchange information with other computers. Protocols lay at the heart of inter-computer communication, both then and now. For example, a file is sent from one computer to another using a set of pre-defined instructions called the File Transfer Protocol (FTP), which requires that both the sending computer and the receiving computer understand FTP (virtually all computers do, nowadays). Another of the most widespread and well-known protocols on the modern internet is the Hypertext Transfer Protocol (HTTP). HTTP was first developed in 1989 by Tim Berners-Lee, who used it as the basis for developing the World Wide Web. Before HTTP and the World Wide Web became nearly universal in the 1990s, computers communicated online through systems built on other protocols, including Usenet and Internet Relay Chat (IRC). Both of these early online forums still exist today, and both served as critical breeding grounds for bot developers and their creations.
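To make the idea of a protocol concrete, here is a minimal sketch (in Python, purely illustrative) of the kind of structured exchange HTTP defines: the client sends a request formatted exactly as the protocol prescribes, and any server that speaks HTTP can answer in kind.

```python
import socket

# A protocol is an agreed-upon message format. HTTP/1.1, for example,
# prescribes a request line, headers, and a blank line marking the end.
HOST = "example.com"  # a public test host
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"  # blank line: end of headers, as the protocol requires
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    reply = b""
    while chunk := sock.recv(4096):
        reply += chunk

# The first line of the reply is the protocol-defined status line,
# e.g. "HTTP/1.1 200 OK".
print(reply.split(b"\r\n", 1)[0].decode())
```

Because both sides follow the same rules, two machines that have never interacted before can exchange data – the same property that let early protocols like those underlying Usenet and IRC knit together a decentralized network.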
Usenet was the first widely available electronic bulletin-board service (often written simply as “BBS”). Developed in 1979 by computer-science graduate students at Duke University and the University of North Carolina, Usenet was originally invented as a way for computer hobbyists to discuss Unix, a computer operating system popular among programmers. Users could connect their computers to each other via telephone lines and exchange information in dedicated forums called “newsgroups.” Users could also use their own computers to host the service, an activity known as running a “news server.” Many users both actively participated in and hosted the decentralized service, which incentivized many of them to think about how the platform worked and how it could be improved.
This environment led to the creation of some of the first online bots: automated programs that helped maintain and moderate Usenet. As Andrew Leonard describes, “Usenet’s first proto-bots were maintenance tools necessary to keep Usenet running smoothly. They were cyborg extensions for human administrators” (Leonard, 1997, p. 157). Especially in the early days, bots primarily played two roles: one was posting content, the other was removing it (or “canceling,” as it was often called on Usenet) (Leonard, 1996). Indeed, Usenet’s “cancelbots” were arguably the first political bots. Cancelbots grew out of a Usenet feature that enabled users to delete their own posts: if a user decided they wanted to retract something they had posted, a cancelbot – a simple program that sent a cancel message to all Usenet servers – could instruct the network to remove the content. Richard Depew wrote the first Usenet cancelbot, known as ARMM (“Automated Retroactive Minimal Moderation”) (Leonard, 1997, p. 161).
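Mechanically, a Usenet cancel was just another article carrying a special Control header naming the post to delete (this is the standard Usenet message format described in RFC 1036); servers that honored the request removed the named article. A minimal sketch in Python, with hypothetical message IDs and addresses:

```python
# Sketch of composing a Usenet cancel control message (RFC 1036 format).
# The message IDs, addresses, and newsgroup below are hypothetical.
def make_cancel(target_msg_id: str, sender: str, newsgroup: str) -> str:
    """Build the text of a cancel control article for target_msg_id."""
    return (
        f"From: {sender}\n"
        f"Newsgroups: {newsgroup}\n"
        f"Subject: cmsg cancel {target_msg_id}\n"
        f"Control: cancel {target_msg_id}\n"  # the line servers act on
        "\n"
        "This article was cancelled by its author.\n"
    )

print(make_cancel("<1234@example.edu>", "poster@example.edu", "news.test"))
```

Because any user could issue such a message, a program that generated cancels automatically – a cancelbot – could remove content at scale, which is what gave these simple tools their political weight.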
Though the cancelbot feature was originally meant to enable posters