The culture in the code allows machine learning algorithms to deal with the complexity of our social realities as if they truly understood meaning, or were somehow socialized. Learning machines can make a difference in the social world, and recursively adapt to its variations. As Salvatore Iaconesi and Oriana Persico noted in one of our interviews: ‘IAQOS exists, and this existence allows other people to modify themselves, as well as modify IAQOS.’ The code is in the culture too, and confounds it through techno-social interactions and algorithmic distinctions – between the relevant and the irrelevant, the similar and the different, the likely and the unlikely, the visible and the invisible. Hence, together with humans, machines actively contribute to the reproduction of the social order – that is, to the incessant drawing and redrawing of the social and symbolic boundaries that objectively and intersubjectively divide society into different, unequally powerful portions.
As I write, a large proportion of the world’s population has been advised or forced to stay home due to the Covid-19 emergency. Face-to-face interactions have been reduced to a minimum, while our use of digital devices has reached a novel maximum. The new normal of digital isolation coincides with our increased production of data as workers, citizens and consumers, and with the decrease of industrial production stricto sensu. Our social life is almost entirely mediated by digital infrastructures populated by learning machines and predictive technologies, incessantly processing traces of users’ socially structured practices. It has never been so evident that studying how society unfolds requires us to treat algorithms as something more than cold mathematical objects. As Gillespie argues, ‘a sociological analysis must not conceive of algorithms as abstract, technical achievements, but must unpack the warm human and institutional choices that lie behind these cold mechanisms’ (2014: 169). This book sees culture as the warm human matter lying inside machine learning systems, and theorizes how to unpack it sociologically by means of the notion of machine habitus.
1 Why Not a Sociology of Algorithms?
Machines as sociological objects
Algorithms of various kinds hold the social world together. Financial transactions, dating, advertising, news circulation, work organization, policing tasks, music discovery, hiring processes, customer relations – all are to a large extent delegated to non-human agents embedded in digital infrastructures. For some years we have all been aware of this, thanks to academic research and popular books, journalistic reports and documentaries. Whether from the daily news headlines or the dystopian allegories of TV series, we have come to recognize that almost everything is now ‘algorithmic’ and that artificial intelligence is revolutionizing all aspects of human life (Amoore and Piotukh 2016). Even leaving aside the simplifications of popular media and the wishful thinking of techno-chauvinists, this is true for the most part (Broussard 2018; Sumpter 2018). Yet, many sociologists and social scientists continue to ignore algorithms and AI technologies in their research, or consider them at best a part of the supposedly inanimate material background of social life. When researchers study everyday life, consumption, social interactions, media, organizations, cultural taste or social representations, they often unknowingly observe the consequences of the opaque algorithmic processes at play in digital platforms and devices (Beer 2013a). In this book, I argue that it is time to see both people and intelligent machines as active agents in the ongoing realization of the social order, and I propose a set of conceptual tools for this purpose.
‘Why only now?’, one may legitimately ask. In fact, the distinction between humans and machines has been a widely debated subject in the social sciences for decades (see Cerulo 2009; Fields 1987). Strands of sociological research such as Science and Technology Studies (STS) and Actor-Network Theory (ANT) have strongly questioned mainstream sociology’s lack of attention to the technological and material aspects of social life.
In 1985, Steve Woolgar’s article ‘Why Not a Sociology of Machines?’ appeared in the British journal Sociology. Its main thesis was that, just as a ‘sociology of science’ had appeared problematic before Kuhn’s theory of scientific paradigms but was later turned into an established field of research, intelligent machines should finally become ‘legitimate sociological objects’ (Woolgar 1985: 558). More than thirty-five years later, this is still a largely unaccomplished goal. When Woolgar’s article was published, research on AI systems was heading for a period of stagnation commonly known as the ‘AI winter’, which lasted up until the recent and ongoing hype around big-data-powered AI (Floridi 2020). According to Woolgar, the main goal of a sociology of machines was to examine the practical day-to-day activities and discourses of AI researchers. Several STS scholars have subsequently followed this direction (e.g. Seaver 2017; Neyland 2019). However, Woolgar also envisioned an alternative sociology of machines with ‘intelligent machines as the subjects of study’, adding that ‘this project will only strike us as bizarre to the extent that we are unwilling to grant human intelligence to intelligent machines’ (1985: 567). This latter option may not sound particularly bizarre today, given that a large variety of tasks requiring human intelligence are now routinely accomplished by algorithmic systems, and that computer scientists propose to study the social behaviour of autonomous machines ethologically, as if they were animals in the wild (Rahwan et al. 2019).
Even when technological artefacts could hardly be considered ‘intelligent’,1 actor-network theorists radically revised human-centric notions of agency by portraying both material objects and humans as ‘actants’, that is, as sources of action in networks of relations (Latour 2005; Akrich 1992; Law 1990). Based on this theoretical perspective, both a ringing doorbell and the author of this book can be seen as equally agentic (Cerulo 2009: 534). ANT strongly opposes not only the asymmetry between humans and machines, but also the more general ontological divide between the social and the natural, the animated and the material. This philosophical position has encountered widespread criticism (Cerulo 2009: 535; Müller 2015: 30), since it is hardly compatible with most of the anthropocentric theories employed in sociology – except for that of Gabriel Tarde (Latour et al. 2012). Still, one key intuition of ANT increasingly resonates throughout the social sciences, as well as in the present work: that what we call social life is nothing but the socio-material product of heterogeneous arrays of relations, involving human as well as non-human agents.
According to ANT scholar John Law (1990: 8), a divide characterized sociological research at the beginning of the 1990s. On the one hand, the majority of researchers were concerned with ‘the social’, and thus studying canonical topics such as inequalities, culture and power by focusing exclusively on people. On the other hand, a minority of sociologists were studying the ‘merely technical’ level of machines, in fields like STS or ANT. They examined the micro-relations between scientists and laboratory equipment (Latour and Woolgar 1986), or the techno-social making of aeroplanes and gyroscopes (MacKenzie 1996), without taking part in the ‘old’ sociological debates about social structures and political struggles (MacKenzie and Wajcman 1999: 19). It can be argued that the divide described by Law still persists today in sociology, although it has become evident that ‘the social order is not a social order at all. Rather it is a sociotechnical order. What appears to be social is partly technical. What we usually call technical is partly social’ (Law 1990: 10).
With the recent emergence of multidisciplinary scholarship on bias and discrimination in algorithmic systems, the interplay between ‘the social’ and ‘the technical’ has become more visible than in the past. One example is the recent book by the information science scholar Safiya Umoja Noble, Algorithms of Oppression (2018), which illustrates how Google Search results tend to reproduce racial and gender stereotypes. Far from being ‘merely technical’ and, therefore, allegedly neutral, the unstable socio-technical arrangement of algorithmic systems, web content, content providers and crowds of googling users on the platform contributes to discriminatory social representations of African Americans. According to Noble, the (socio-)technical arrangement of Google Search does more than neutrally mirror the unequal culture of the United States as a historically divided country: it amplifies and reifies the commodification of black women’s bodies.