many uncertainties regarding how this would play out were recognized, but the solutions offered were few.

      Other meetings in which I participated had the explicit aim of involving the general public in a discussion about the future of AI, such as the Nobel Week Dialogue 2015 in Gothenburg, or the Falling Walls Circle in Berlin in 2018. There were also visits to IT and robotics labs and workshops tasked with setting up various kinds of digital strategies. I gained much from ongoing discussions with colleagues at the Vienna Complexity Science Hub and members of their international network, allowing me glimpses into complexity science. By chance, I stumbled into an eye-opening conference on digital humanism, a trend that is gradually expanding to become a movement.

      Scattered and inconclusive as these conversations mostly were, they nevertheless projected the image of a dynamic field rapidly moving forward. The main protagonists were eager to portray their work as embracing a responsibility to move towards ‘beneficial AI’ or similar initiatives. There was a notable impatience to demonstrate that AI researchers and promoters were aware of the risks involved, but the line between sincere concern and the insincere attempts of large corporations to claim ‘ethics ownership’ was often blurred. Human intelligence might indeed one day be outwitted by AI, but the discussants seldom dwelt on the difference between the two. Instead, they offered reassurances that the risks could be managed. Occasionally, the topic of human stupidity and the role played by ignorance were touched upon as well. And at times, a fascination with the ‘sweetness of technology’ shimmered through, similar to the one J. Robert Oppenheimer described when he spoke about his infatuation with the atomic bomb.

      A haiku is said to be about capturing a fleeting moment, a transient impression or an ephemeral sensation. My impressions were obviously connected to the theme of the conference, the future of AI. ‘Future needs wisdom’ – the phrase stuck with me. Which future was I so concerned about? Would it be dominated by predictive algorithms? And if so, how would this change human behaviour and our institutions? What could I do to bring some wisdom into the future? What I have learned on my journey in digi-land is to listen carefully to the dissonances and overtones and to plumb the nuances and halftones; to spot the ambiguities and ambivalences in our approaches to the problems we face, and to hone the ability to glide between our selective memories of the past, a present that overwhelms us and a future that remains uncertain, but open.

      Plenty of books on AI and digitalization continue to flood the market. Most of the literature is written in an enthusiastic, technology-friendly voice, but there is also a sharp focus on the dark side of digital technologies. The former either provide a broad overview of the latest developments in AI and their economic benefits, or showcase some recently added features that are intended to alleviate fears that the machines will soon take over. The social impact of AI is acknowledged, as is the desirability of cross-disciplinary dialogue. A nod towards ethical considerations has by now become obligatory, but other problems are sidestepped and expected to be dealt with elsewhere. Only rarely, for instance, do we hear about topics like digital social justice. Finding my way through the copious literature on AI felt at times like moving through a maze, a deliberately confusing structure designed to prevent escape.

      At times, I felt that I was no longer caught in a maze but in what had become a labyrinth. This was particularly the case when the themes of the books turned to ‘singularity’ and transhumanism, topics that can easily acquire cult status and are permeated by theories, fantasies and speculations that the human species will soon transcend its present cognitive and physical limitations. In contrast to a maze with its tangled and twisted features, dead ends and meandering pathways, a labyrinth is carefully designed to have a centre that can be reached by following a single, unicursal path. It is artfully, and often playfully, arranged around geometrical figures, such as a circle or a spiral. No wonder that labyrinths have inspired many writers and artists to play with these forms and with the meaning-laden concept of a journey. If the points of departure and arrival are the same, the journey between them is expected to have changed something during the course of it. Usually, this is the self. Hence the close association of the labyrinth with a higher state of awareness or spiritual enlightenment.

      The labyrinth is an ancient cultic place, symbolizing a transformation, even if we know little about the rituals that were practised there. In the digital age, the imagined centre of the digital or computational labyrinth is the point where AI overtakes human intelligence, also called the singularity. At this point the human mind would be fused with an artificially created higher mind, and the frail and ageing human body could finally be left behind. The body and the material world are discarded as the newborn digital being is absorbed by the digital world or a higher digital order. Here we encounter an ancient fantasy, the recurring dream of immortality born from the desire to become like the gods, this time reimagined as the masters of the digital universe. I was struck by how closely the discussion of transcendental topics, like immortality or the search for the soul in technology, could combine with very technical matters and down-to-earth topics in informatics and computer science. It seemed that the maze could transform itself suddenly into a labyrinth, and vice versa.