I came away from the maze largely feeling that it is an overrated marketplace where existing products are rapidly displaced by new ones selected primarily for their novelty value. Depending on the mood of potential buyers, utopian or dystopian visions would prevail, subject to market volatility. The labyrinth, of course, is a more intriguing and enchanting place where deep philosophical questions intersect with the wildest speculations. Here, at times, I felt like Ariadne, laying out the threads that would lead me out from the centre of the labyrinth. One of these threads is based on the idea of a digital humanism, a vision that human values and perspectives ought to be the starting point for the design of algorithms and AI systems that claim to serve humanity. It is based on the conviction that such an alternative is possible.
Another thread is interwoven with the sense of direction that takes its inspiration from a remarkable human discovery: the idea of the future as an open horizon, full of as yet unimaginable possibilities and inherently uncertain. The open horizon extends into the vast space of what is yet unknown, pulsating with the dynamics of what is possible. Human creativity is ready to explore it, with science and art at the forefront. It is this conception of the future which is at stake when predictive algorithms threaten to fill the present with their apparent certainty, and when human behaviour begins to conform to these predictions.
The larger frame of this book is set by a co-evolutionary trajectory on which humankind has embarked together with the digital machines it has invented and deployed. Co-evolution means that a mutual interdependence is in the making, with flexible adaptations on both sides. Digital beings or entities like the robots created by us are mutating into our significant Others. We have no clue where this journey will lead or how it will end. However, in the long course of human evolution, it is possible that we have become something akin to a self-domesticating species that has learned to value cooperation and, at least to some extent, decrease its potential for aggression. That capacity for cooperation could now extend to digital machines. We have already reached the point of starting to believe that the algorithm knows us better than we know ourselves. It then comes to be seen as a new authority to guide the self, one that knows what is good for us and what the future holds.
The road ahead: how to live forward and understand life backwards
Scientific predictions are considered the hallmark of modern science. Notably, physics advances by inventing new theoretical concepts and the instruments to test predictions derived from them. The computational revolution that began in the middle of the last century has been boosted by vastly increased computational power and the Deep Learning methods that took off in the twenty-first century. Together with access to an unprecedented and still growing amount of data, these developments have extended the power of predictions and their applicability across an enormous range of natural and social phenomena. Scientific predictions are no longer confined to science.
Since then, predictive analytics has become highly profitable for the economy and has pervaded the entire social fabric. The operation of algorithms underlies the functioning of technological products that have disrupted business models and created new markets. Harnessed by the marketing and advertising industry, instrumentalized by politicians seeking to maximize votes, and quickly adopted by the shadowy world of secret services, hackers and fraudsters exploiting the anonymity of the internet, predictive analytics has convinced consumers, voters and health-conscious citizens that these powerful digital instruments are there to serve our needs and latent desires.
Much of their successful spread and eager adoption is due to the fact that the power of predictive algorithms is performative. An algorithm has the capability to make happen what it predicts when human behaviour follows the prediction. Performativity means that what is enacted, pronounced or performed can affect action, as shown in the pioneering work on the performativity of speech acts and non-verbal communication by J. L. Austin, Judith Butler and others. Another well-known social phenomenon is captured in the Thomas theorem – ‘If men define situations as real, they are real in their consequences’ – dating back to 1928 and later reformulated by Robert K. Merton in terms of self-fulfilling prophecy. The time has come to acknowledge what sociologists have long since known and apply it also to predictive algorithms.
The propensity of people to orient themselves in relation to what others do, especially in unexpected or threatening circumstances, enhances the power of predictive algorithms. It magnifies the illusion of being in control. But if the instrument gains the upper hand over understanding, we lose the capacity for critical thinking. We end up trusting the automatic pilot while flying blindly in the fog. There are, however, situations in which it is crucial to deactivate the automatic pilot and exercise our own judgement as to what to do.
When visualizing the road ahead, I see a situation where we have created a highly efficient instrument that allows us to follow and foresee the evolving dynamics of a wide range of phenomena and activities, but where we largely fail to understand the causal mechanisms that underlie them. We rely increasingly on what predictive algorithms tell us, especially when institutions begin to align with their predictions, often unaware of the unintended consequences that will follow. We trust not only the performative power of predictive analytics but also that it knows which options to lay out for us, again without considering who has designed these options and how, or that there might be other options equally worth considering.
At the same time, distrust of AI creeps in and concerns grow. Some of them, like fears about surveillance or the future of work, are well known and widely discussed. Others are less obvious. When self-fulfilling prophecies begin to proliferate, we risk returning to a deterministic worldview in which the future appears predetermined and hence closed. The space vital to imagining what could be otherwise begins to shrink. The motivation, as well as the ability, to stretch the boundaries of imagination is curtailed. Relying purely on the efficiency of prediction obscures the need to understand why and how. The risk is that everything we treasure about our culture and values will atrophy.
Moreover, in a world governed by predictive analytics there is neither a place nor any longer a need for accountability. When political power becomes unaccountable to those over whom it is exercised, we risk the destruction of liberal democracy. Accountability rests on a basic understanding of cause and effect. In a democracy, this is framed in legal terms and is an integral part of democratically legitimated institutions. If this is no longer guaranteed, surveillance becomes ubiquitous. Big data gets even bigger, and data is acquired without understanding or explanation. We become part of a fine-tuned and interconnected predictive system that is dynamically closed upon itself. The human ability to teach others what we know and have experienced begins to resemble that of a machine that can teach itself and invent the rules. Machines have neither empathy nor a sense of responsibility. Only humans can be held accountable, and only humans have the freedom to take on responsibility.
Luckily, we have not yet arrived at this point. We can still ask: Do we really want to live in an entirely predictable world in which predictive analytics invades and guides our innermost thoughts and desires? This would mean renouncing the inherent uncertainty of the future and replacing it with the dangerous illusion of being in control. Or are we ready to acknowledge that a fully predictable world is never achievable? Then we would have to muster the courage to face the danger that a falsely perceived deterministic world implies. This book has been written as an argument against the illusion of a wholly predictable world and for the courage, and wisdom, needed to live with uncertainty.
Obviously, my journey does not end there. ‘Life can only be understood backwards, but it must be lived forward.’ This quotation from Søren Kierkegaard awaits an interpretation in relation to our movements between online and offline worlds, between the virtual self, the imagined self and the ‘real’ self. How does one live forward under these conditions, given their opportunities and constraints? The quotation implies a disjunction between Life as an abstraction that transcends the personal,