Other meetings in which I participated had the explicit aim of involving the general public in a discussion about the future of AI, such as the Nobel Week Dialogue 2015 in Gothenburg, or the Falling Walls Circle in Berlin in 2018. There were also visits to IT and robotics labs and workshops tasked with setting up various kinds of digital strategies. I gained much from ongoing discussions with colleagues at the Vienna Complexity Science Hub and members of their international network, allowing me glimpses into complexity science. By chance, I stumbled into an eye-opening conference on digital humanism, a trend that is gradually expanding to become a movement.
Scattered and inconclusive as these conversations mostly were, they nevertheless projected the image of a dynamic field rapidly moving forward. The main protagonists were eager to portray their work as embracing a responsibility to move towards ‘beneficial AI’ or similar initiatives. There was a notable impatience to demonstrate that AI researchers and promoters were aware of the risks involved, but the line between sincere concern and the insincere attempts of large corporations to claim ‘ethics ownership’ was often blurred. Human intelligence might indeed one day be outwitted by AI, but the discussants seldom dwelt on the difference between the two. Instead, they offered reassurances that the risks could be managed. Occasionally, the topic of human stupidity and the role played by ignorance were touched upon as well. And at times, a fascination with the ‘sweetness of technology’ shimmered through, similar to what J. Robert Oppenheimer described when he spoke about his infatuation with the atomic bomb.
At one of the many conferences I attended on the future of AI, the organizers had decided to use an algorithm in order to maximize diversity within each group. The AI was also tasked with composing four different haikus, one for each group. (Incidentally, the first time an AI succeeded in accomplishing such a ‘creative’ task was back in the 1960s.) The conference was a success and the discussions within each ‘haiku group’ were rewarding, but somehow I felt dissatisfied with the haiku the AI had produced for my group. So, on the plane on my way back I decided to write one myself – my first ever. With beginner’s luck the last line of my haiku read ‘future needs wisdom’.
A haiku is said to be about capturing a fleeting moment, a transient impression or an ephemeral sensation. My impressions were obviously connected to the theme of the conference, the future of AI. ‘Future needs wisdom’ – the phrase stuck with me. Which future was I so concerned about? Would it be dominated by predictive algorithms? And if so, how would this change human behaviour and our institutions? What could I do to bring some wisdom into the future? What I have learned on my journey in digi-land is to listen carefully to the dissonances and overtones and to plumb the nuances and halftones; to spot the ambiguities and ambivalences in our approaches to the problems we face, and to hone the ability to glide between our selective memories of the past, a present that overwhelms us and a future that remains uncertain, but open.
The maze and the labyrinth
None of these encounters and discussions prepared me for the surprise I got when I began to scan the available literature more systematically. There is a lot of it out there already, and a never-ending stream of updates keeps coming in. I concluded that much of it must have been written in haste, as if trying to catch up with the speed of actual developments. Sometimes it felt like being on an involuntary binge, overloaded with superfluous information while feeling intellectually undernourished. Most striking was the fact that the vast majority of books in this area espouse either an optimistic, techno-enthusiastic view or a dystopian one. They are often based on speculations or simply describe to a lay audience what AI nerds are up to and how digital technologies will change people’s lives. I came away with a profound dissatisfaction about how issues and topics that I considered important were being treated: the approach was largely short-term and ahistorical, superficial and mostly speculative, often espousing a narrow disciplinary perspective, unable to connect technological developments with societal processes in a meaningful way, and occasionally arrogant in dismissing ‘the social’ or misreading it as a mere appendix to ‘the technological’.
Plenty of books on AI and digitalization continue to flood the market. Most of the literature is written in an enthusiastic, technology-friendly voice, but there is also a sharp focus on the dark side of digital technologies. The former either provide a broad overview of the latest developments in AI and their economic benefits, or showcase some recently added features that are intended to alleviate fears that the machines will soon take over. The social impact of AI is acknowledged, as is the desirability of cross-disciplinary dialogue. A nod towards ethical considerations has by now become obligatory, but other problems are sidestepped and expected to be dealt with elsewhere. Only rarely, for instance, do we hear about topics like digital social justice. Finding my way through the copious literature on AI felt at times like moving through a maze, a deliberately confusing structure designed to prevent escape.
In this maze there are plenty of brightly lit pathways, their walls lined with the latest gadgetry, proudly displaying features designed to take the user into a virtual wonderland. The darker groves in the maze are filled with images and dire warnings of worse things to come, occasionally projecting a truly apocalyptic digital ending. Sci-fi occupies several specialized niches, often couched in an overload of technological imagination and an underexposed social side. In between there are a large number of mundane small pathways, some of which turn out to be blind alleys. One can also find useful advice on how to cope with the daily nitty-gritty annoyances caused by digital technologies or how to work around the system. Plenty of marketing pervades the maze, conveying a sense of short-lived excitement and a readiness to be pumped up again to deliver the next and higher dose of digital enhancement.
At times, I felt that I was no longer caught in a maze but in what had become a labyrinth. This was particularly the case when the themes of the books turned to ‘singularity’ and transhumanism, topics that can easily acquire cult status and are permeated by theories, fantasies and speculations that the human species will soon transcend its present cognitive and physical limitations. In contrast to a maze with its tangled and twisted features, dead ends and meandering pathways, a labyrinth is carefully designed to have a centre that can be reached by following a single, unicursal path. It is artfully, and often playfully, arranged around geometrical figures, such as a circle or a spiral. No wonder that labyrinths have inspired many writers and artists to play with these forms and with the meaning-laden concept of a journey. If the points of departure and arrival are the same, the journey between them is expected to have changed something during the course of it. Usually, this is the self. Hence the close association of the labyrinth with a higher state of awareness or spiritual enlightenment.
The labyrinth is an ancient cultic place, symbolizing a transformation, even if we know little about the rituals that were practised there. In the digital age, the imagined centre of the digital or computational labyrinth is the point where AI overtakes human intelligence, also called the singularity. At this point the human mind would be fused with an artificially created higher mind, and the frail and ageing human body could finally be left behind. The body and the material world are discarded as the newborn digital being is absorbed by the digital world or a higher digital order. Here we encounter an ancient fantasy, the recurring dream of immortality born from the desire to become like the gods, this time reimagined as mastery of the digital universe. I was struck by how closely the discussion of transcendental topics, like immortality or the search for the soul in technology, could combine with very technical matters and down-to-earth topics in informatics and computer science. It seemed that the maze could transform itself suddenly into a labyrinth, and vice versa.
In practice, however, gaps in communication prevail. Those who worry about the potential risks that digital technologies pose for liberal democracies discover that experts working on the risks have little interest in democracy or much understanding of politics. Those writing on the future of work rarely speak to those engaged in the actual design of the automated systems that will either put people out of work or create new jobs. Many computer scientists and IT experts are clearly aware of the biases and other flaws in their products, and they deplore the constraints that come from being