In AI We Trust
Helga Nowotny

acted again as my very special personal advisor, reading, commenting and encouraging me when I was doubtful whether I would ever finish. His unfailing humour provided me with the necessary distance from what I was doing. My thanks to him are as much for the ways in which he interrupted my writing as for everything else he gave me during this period.

      Origins: time and uncertainty; science, technology and society

      This book is the outcome of a long personal and professional journey. It brings together two strands of my previous work while confronting the major societal transformations that humanity is undergoing right now: the ongoing processes of digitalization and our arrival in the epoch of the Anthropocene. Digitalization moves us towards a co-evolutionary trajectory of humans and machines. It is accompanied by unprecedented technological feats and the trust we put in Artificial Intelligence. But there are also concerns about continuing losses of privacy, about what the future of work will be like, and about the risks AI may pose for liberal democracies. This creates widespread feelings of ambivalence: we trust in AI as a bet on our future, but we also realize that there are reasons for distrust. We are learning to live with the digital devices we cheerfully interact with as though they were our new relatives, our digital others, while retaining a profound ambivalence towards them and the techno-corporate complex that produces them.

      My journey leading up to this book was long and full of surprises. My previous work on time, especially the structure and experience of social time, led me to inquire how our daily exposure to and interaction with AI and the digital devices that have become our intimate companions alter our experience of time once again. How does the confrontation with geological timescales, long-term atmospheric processes or the half-life of the dissolution of microplastic and toxic waste affect the temporalities of our daily lives? How does AI impinge on the temporal dimension of our relationship with each other? Are we witnessing the emergence of something we can call ‘digital time’ that has now intruded into the familiar nested temporal hierarchy of physical, biological and social times? If so, how do we negotiate and coordinate these different kinds of time as our lives unfold?

      Not surprisingly, I encountered several hurdles on my way, but I also realized that my previous long-standing interest in the study of time and the cunning of uncertainty – which, I argued, we should embrace – allowed me to connect aspects of my personal experience and biographical incidents with empirical studies and scientific findings. Such personal links, however, no longer seemed available when confronting the likely consequences of climate change, loss of biodiversity and the acidification of oceans, or issues like the future of work when digitalization begins to affect middle-class professionals. Like many others confronted with media images of disastrous wildfires, floods and rapidly melting Arctic ice, I could see that the stakes had become very high. I kept reading scientific reports that put quantitative estimates on the timelines when we would reach several of the possible tipping points in further environmental degradation, leading to the collapse of the ecosystem. And, again like many others, I felt exposed to the worries and hopes, the opportunities and likely downsides, connected with the ongoing digitalization.

      Yet, despite all these observations and analyses, a gap remained between the global scale on which these processes unfolded and my personal life, which, fortunately, continued without major perturbations. Even the local impacts either played out in far-away places or remained local in the sense that they were soon to be overtaken by other local events. Most of us are cognizant that these major societal transformations will have huge impacts and numerous unintended consequences; and yet they remain at a level of abstraction so overwhelming that it is difficult to grasp them intellectually in all their complexity. The gap between knowing and acting, between personal insight and collective action, between thinking at the level of the individual and thinking institutions globally, appears to shield us from the immediate impact that these far-reaching changes will have.

      The behaviour of complex systems is difficult for us to grasp and often appears counter-intuitive. It is exemplified by the famous butterfly effect, where the sensitive dependence on initial conditions can result in large differences at a later stage, as when the flapping of a butterfly’s wings in the Amazon leads to a tornado making landfall in Texas. But such metaphors are not always at hand, and I began to wonder whether we are even able to think in non-linear ways. Predictions about the behaviour of dynamic complex systems often come in the garb of mathematical equations embedded in digital technologies. Simulation models do not speak directly to our senses. Their outcome and the options they produce need to be interpreted and explained. Since they are perceived as being scientifically objective, they are often not questioned any further. But then predictions assume the power of agency that we attribute to them. If blindly followed, the predictive power of algorithms turns into a self-fulfilling prophecy – a prediction becomes true simply because people believe in it and act accordingly.
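      The sensitive dependence on initial conditions that the butterfly metaphor captures can be made concrete. The short sketch below is purely illustrative and not drawn from the book: it iterates the chaotic logistic map from two starting values that differ by one part in a million and prints how quickly the two trajectories drift apart.

      # Illustrative sketch (not from the book): sensitive dependence on initial
      # conditions, the mathematical core of the "butterfly effect", shown with
      # the chaotic logistic map x_{n+1} = r * x_n * (1 - x_n) at r = 4.

      def logistic_trajectory(x0, r=4.0, steps=30):
          """Iterate the logistic map from x0 and return the full trajectory."""
          xs = [x0]
          for _ in range(steps):
              xs.append(r * xs[-1] * (1.0 - xs[-1]))
          return xs

      a = logistic_trajectory(0.200000)   # one initial condition
      b = logistic_trajectory(0.200001)   # a near-identical one (1e-6 apart)

      for n, (xa, xb) in enumerate(zip(a, b)):
          print(f"step {n:2d}: {xa:.6f} vs {xb:.6f}  (gap {abs(xa - xb):.6f})")
      # After a couple of dozen iterations the two trajectories bear no
      # resemblance to each other, although they started a millionth apart.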

      So, I set out to bridge the divide between the personal, in this case the predictions we experience as being addressed to us as individuals, and the collective as represented by complex systems. We are familiar and at ease with messages and forms of communication at the inter-personal level, while, unless we adopt a professional and scientific stance, we experience everything connected with a system as an external, impersonal force that impinges on us. Might it not be, I wondered, that we are so easily persuaded to trust a predictive algorithm because it reaches us on a personal level, while we distrust the digital system, whatever we mean by it or associate with it, because it is perceived as impersonal?