In AI We Trust. Helga Nowotny.

technological system. But at heart they are convinced that the solutions to many of the problems besetting society will arise from technology. Meanwhile, humanists either retreat to their historical niche or act in defence of humanistic values. The often-stated goal of interdisciplinarity, it seems, is not yet much advanced in practice.

      I came away from the maze largely feeling that it is an overrated marketplace where existing products are rapidly displaced by new ones selected primarily for their novelty value. Depending on the mood of potential buyers, utopian or dystopian visions would prevail, subject to market volatility. The labyrinth, of course, is a more intriguing and enchanting place where deep philosophical questions intersect with the wildest speculations. Here, at times, I felt like Ariadne, laying out the threads that would lead me out from the centre of the labyrinth. One of these threads is based on the idea of a digital humanism, a vision that human values and perspectives ought to be the starting point for the design of algorithms and AI systems that claim to serve humanity. It is based on the conviction that such an alternative is possible.

      Another thread is interwoven with the sense of direction that takes its inspiration from a remarkable human discovery: the idea of the future as an open horizon, full of as yet unimaginable possibilities and inherently uncertain. The open horizon extends into the vast space of what is yet unknown, pulsating with the dynamics of what is possible. Human creativity is ready to explore it, with science and art at the forefront. It is this conception of the future which is at stake when predictive algorithms threaten to fill the present with their apparent certainty, and when human behaviour begins to conform to these predictions.

      Scientific predictions are considered the hallmark of modern science. Notably, physics advances by inventing new theoretical concepts and the instruments to test predictions derived from them. The computational revolution that began in the middle of the last century has been boosted by vastly increased computational power and the Deep Learning methods that took off in the twenty-first century. Together with access to an unprecedented and still growing amount of data, these developments have extended the power of predictions and their applicability across an enormous range of natural and social phenomena. Scientific predictions are no longer confined to science.

      Much of their successful spread and eager adoption is due to the fact that the power of predictive algorithms is performative. An algorithm has the capability to make happen what it predicts when human behaviour follows the prediction. Performativity means that what is enacted, pronounced or performed can affect action, as shown in the pioneering work on the performativity of speech acts and non-verbal communication by J. L. Austin, Judith Butler and others. Another well-known social phenomenon is captured in the Thomas theorem – ‘If men define situations as real, they are real in their consequences’ – dating back to 1928 and later reformulated by Robert K. Merton in terms of self-fulfilling prophecy. The time has come to acknowledge what sociologists have long since known and apply it also to predictive algorithms.
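
      To make this feedback mechanism concrete, here is a minimal sketch in Python, not taken from the book: a toy loop in which behaviour partly conforms to a forecast and the forecast then updates on the very behaviour it has helped to shape. The function name, the conformity parameter and the update rates are hypothetical choices for illustration only.

```python
# Illustrative sketch (not from the book): a toy self-fulfilling prediction.
# An initially arbitrary forecast comes to look accurate partly because
# behaviour bends towards it. All names and parameters are hypothetical.
import random

def simulate(conformity: float, rounds: int = 20, seed: int = 1):
    """Return (apparent accuracy gap, how far behaviour drifted from its start)."""
    rng = random.Random(seed)
    behaviour = rng.uniform(0.0, 1.0)   # the population's initial tendency
    start = behaviour
    prediction = rng.uniform(0.0, 1.0)  # an initially arbitrary forecast
    for _ in range(rounds):
        # Performativity: people shift part of the way towards the forecast.
        behaviour += conformity * (prediction - behaviour)
        # The algorithm updates on the behaviour it has itself helped shape.
        prediction += 0.5 * (behaviour - prediction)
    return abs(prediction - behaviour), abs(behaviour - start)

for c in (0.0, 0.3, 0.8):
    gap, drift = simulate(c)
    print(f"conformity={c:.1f}  accuracy gap={gap:.4f}  behaviour drift={drift:.4f}")
```

      With conformity set to zero the forecast still converges through ordinary learning, and behaviour does not move; as conformity rises, the same small accuracy gap is increasingly manufactured by behaviour drifting towards the forecast, which is the performative effect described above.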

      The propensity of people to orient themselves in relation to what others do, especially in unexpected or threatening circumstances, enhances the power of predictive algorithms. It magnifies the illusion of being in control. But if the instrument gains the upper hand over understanding, we lose the capacity for critical thinking. We end up trusting the automatic pilot while flying blindly in the fog. There are, however, situations in which it is crucial to deactivate the automatic pilot and exercise our own judgement as to what to do.

      At the same time, distrust of AI creeps in and the concerns grow. Some of them, like the fears about surveillance or the future of work, are well known and widely discussed. Others are not so obvious. When self-fulfilling prophecies begin to proliferate, we risk returning to a deterministic worldview in which the future appears as predetermined and hence closed. The space vital to imagining what could be otherwise begins to shrink. The motivation as well as the ability to stretch the boundaries of imagination is curtailed. To rely purely on the efficiency of prediction obscures the need for understanding why and how. The risk is that everything we treasure about our culture and values will atrophy.

      Moreover, in a world governed by predictive analytics there is neither a place for accountability nor any longer a need for it. When political power becomes unaccountable to those over whom it is exercised, we risk the destruction of liberal democracy. Accountability rests on a basic understanding of cause and effect. In a democracy, this is framed in legal terms and is an integral part of democratically legitimated institutions. If this is no longer guaranteed, surveillance becomes ubiquitous. Big data gets even bigger and data is acquired without understanding or explanation. We become part of a fine-tuned and interconnected predictive system that is dynamically closed upon itself. The human ability to teach others what we know and have experienced begins to resemble that of a machine that can teach itself and invent the rules. Machines have neither empathy nor a sense of responsibility. Only humans can be held accountable and only humans have the freedom to take on responsibility.

      Obviously, my journey does not end there. ‘Life can only be understood backwards, but it must be lived forward.’ This quotation from Søren Kierkegaard awaits an interpretation in relation to our movements between online and offline worlds, between the virtual self, the imagined self and the ‘real’ self. How does one live forward under these conditions, given their opportunities and constraints? The quotation implies a disjunction between Life as an abstraction that transcends the personal,