What is Artificial Intelligence?
In the case of artificial intelligence, it is widely, though erroneously, assumed that its history can and ought to be mapped, measured and retold by recourse only to AI studies – and that if any of this history falls outside the purview of the disciplines of engineering, computer science or mathematics, it might justifiably be ignored or assigned perhaps only a footnote within the canonical bent of AI studies. Such an approach, were it attempted here, would merely reproduce the rather narrow range of interests of much of the AI field – for example, definitional problems or squabbles concerning the ‘facts of the technology’.1 What, precisely, is machine learning? How did machine learning arise? What are artificial neural networks? What are the key historical milestones in AI? What are the interconnections between AI, robotics, computer vision and speech recognition? What is natural language processing? Such definitional matters and historical facts about artificial intelligence have been admirably well rehearsed by properly schooled computer scientists and experienced engineers the world over, and detailed discussions are available to the reader elsewhere.2
As signalled in its title, this book is a study in making sense of AI, not of AI sense-making. It is not about the technical dimensions or scientific innovations of AI, but about AI in its broader social, cultural, economic, environmental and political dimensions. I am seeking to do something which no other author has attempted. While the existing literature tends to focus on isolated scientific pioneers in retelling the history of AI, the present chapter concerns itself more with cultural shifts and conceptual currents. Something of the same ambition permeates the book as a whole. While much of the existing literature concentrates on specific domains in relation to issues such as work and employment, racism and sexism, or surveillance and ethics, I have sought to register something of the wealth of intricate interconnections between such domains – all the way from lifestyle change and social inequalities to warfare and global pandemics such as COVID-19. In fact, I spend the bulk of this book examining these multidimensional interrelationships, precisely because such interconnections are rarely discussed in the field of AI studies. It is, in particular, the close affinity and interaction between AI technologies and complex digital systems, phenomena that in our own time are growing in impact and significance as well as in the opportunities and risks they portend, that I approach – carefully and systematically – in the chapters that follow. Finally, while the existing literature tends to focus on the tech sector in one country or AI industries in specific regions, I have sought to develop a global perspective and offer comparative insights. A general social theory of the interconnections between AI, complex digital systems and the coactive interactions of human–machine interfaces has yet to be written.
But in developing the synthetic approach I outline here, my hope is that this book contributes to making sense of the increasingly diverse blend of humans and machines in the field of automated intelligent agents, and to framing all this theoretically and sociologically through reflections on the dynamics of AI in general and its place in social life.
There is more than one way in which the story of AI can be told. The term ‘artificial intelligence’, as we will examine in this chapter, consists of many different conceptual strands, divergent histories and competing economic interests. One way to situate this wealth of meaning is to return to 1956, the year the term ‘artificial intelligence’ was coined. This occurred at an academic event in the USA, the Dartmouth Summer Research Project, where researchers proposed ‘to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves’.3 The Dartmouth Conference was led by the American mathematician John McCarthy, along with Marvin Minsky of Harvard, Claude Shannon of Bell Telephone Laboratories and Nathaniel Rochester of IBM. Why the conference organizers chose to put the adjective ‘artificial’ in front of ‘intelligence’ is not evident from the proposal for funding to the Rockefeller Foundation. What is clear from this famed six-week event at Dartmouth, however, is that AI was conceived as encompassing a remarkably broad range of topics – from the processing of language by computers to the simulation of human intelligence through mathematics. Simulation – a kind of copying of the natural, transferred to the realm of the artificial – was what mattered. Or, at least, this is what McCarthy and his colleagues believed, designating AI as the field in which to try to achieve the simulation of advanced human cognitive performance in particular, and the replication of the higher functions of the human brain in general.
There has been a great deal of ink spilt on seeking to reconstruct what the Dartmouth Conference organizers were hoping to accomplish, but what I wish to emphasize here is the astounding inventiveness of McCarthy and his colleagues, especially their boldness in pressing then-untried and untested scientific strategies and intellectual hunches into the newly designated terrain of intelligence dubbed artificial. Every culture lives by the creation and propagation of new meanings, and it is perhaps not surprising – at least from a sociological standpoint – that the Dartmouth organizers should have favoured the term ‘artificial’ at a time when American society was held in thrall to all things new and shiny. 1950s America was an era of the ‘new is better’, manufactured-rather-than-natural, shiny-obsessed sort. It was arguably the dawning of ‘the artificial era’: the epoch of technological conquest and ever more sophisticated machines, designed to overcome the problems of nature. Construction of various categories and objects of the artificial was among the most acute cultural obsessions. Nature was the obvious outcast. Nature, as a phenomenon external to society, had in a certain sense come to an ‘end’ – the result of the domination of culture over nature. And, thanks to the dream of an infinity of experiences to be delivered by artificial intelligence, human nature was not something simply to be discarded; its augmentation through technology would be an advance, a shift to the next frontier. This was the social and historical context in which AI was ‘officially’ launched at Dartmouth: a world brimming with hope and optimism, with socially regulated redistributions away from all things natural and towards the artificial. In a curious twist, however, jump forward some sixty or seventy years and it is arguably the case that, in today’s world, the term ‘artificial intelligence’ might not have been selected at all.
The terrain of the natural, the organic, the innate and the indigenous is far more ubiquitous and relentlessly advanced as a vital resource for cultural life today, and indeed things ‘artificial’ are often viewed with suspicion. The construction of the ‘artificial’ is no longer the paramount measure of socially conditioned approval and success.
Where does all of this leave AI? The field has advanced rapidly since the 1950s, but it is salutary to reflect on the recent intellectual history of artificial intelligence because that very history suggests it is not advisable to try to compress its wealth of meanings into a general definition. AI is not a monolithic theory. To demonstrate this, let’s consider some definitions of AI – selected more or less at random – currently in circulation:
1 the creation of machines or computer programs capable of activity that would be called intelligent if exhibited by human beings;
2 a complex combination of accelerating improvements in computer technology, robotics, machine learning and big data to generate autonomous systems that rival or exceed human capabilities;
3 technologically driven forms of thought that make generalizations in a timely fashion based on limited data;
4 the