Although most of this book was written before a new virus wreaked havoc around the globe, exacerbated by the uncoordinated and often irresponsible policy response that followed, it is still marked by the impact of the COVID-19 pandemic. Unexpectedly, the emergence of the coronavirus crisis revealed the limitations of predictions. A pandemic is one of those known unknowns that are expected to happen. It is known that more are likely to occur, but it is unknown when and where. In the case of the SARS-CoV-2 virus, the gap between prediction and preparedness soon became obvious. We are ready to blindly follow the predictions algorithms deliver about what we will consume, our future behaviour and even our emotional states of mind. We believe what they tell us about our health risks and that we should change our lifestyles. They are used for police profiling, court sentencing and much more. And yet we were unprepared for a pandemic that had long been predicted. How could this have gone so wrong?
Thus the COVID-19 crisis, itself likely to turn from an emergency into a more chronic condition, strengthened my conviction that the key to understanding the changes we are living through is linked to what I call the paradox of prediction. When human behaviour, flexible and adaptive as it is, begins to conform to what the predictions foretell, we risk returning to a deterministic world, one in which the future has already been set. The paradox is poised at the dynamic but volatile interface between present and future: predictions are obviously about the future, but they act directly on how we behave in the present.
The predictive power of algorithms enables us to see further and, through simulation models, to assess the various outcomes of emergent properties in complex systems. Backed by vast computational power and trained on enormous amounts of data extracted from the natural and social world, predictive algorithms can now be observed in action and their impact analysed. But the way we do this is paradoxical in itself: we crave to know the future, but largely ignore what predictions do to us. When do we believe them, and which ones do we discard? The paradox stems from the incompatibility between an algorithmic function, an abstract mathematical equation, and a human belief that may or may not be strong enough to propel us to action.
Predictive algorithms have acquired a rare power that unfolds in several dimensions. We have come to rely on them in ways that range from scientific predictions with their extensive applications, like improved weather forecasts, to the numerous technological products designed to create new markets. These products are based on techniques of predictive analytics and span a wide range of services: from the analysis of DNA samples to predict the risk of certain diseases, to applications in politics, where the targeting of specific groups whose voting profile has been established through data trails has become a regular feature of campaigning. Predictions have become ubiquitous in our daily lives. We trade our personal data for the convenience, efficiency and cost savings of the products that large corporations offer us in return. We feed their insatiable appetite for more data and entrust them with information about our most intimate feelings and behaviour. We seem to have embarked on an irreversible track of trusting them. Predictive analytics reigns supreme in financial markets, where automated trading and fintech risk assessments were installed long ago. It is also the backbone of the military's development of autonomous weapons, the actual deployment of which would be a nightmare scenario.
However, the COVID-19 pandemic has revealed that we are far less in control than we thought. This is not due to faulty algorithms or a lack of data, although the pandemic has revealed the extent to which the importance of access to quality data, and its interoperability, had been grossly underestimated. There was no need for predictive algorithms to warn of future epidemics; epidemiological models and Bayesian statistical reasoning were sufficient. But the warnings went unheard. The gap between knowing and doing persists if people do not want to know, or offer many reasons to justify their inaction. Thus, predictions must always be seen in context. They can fall on deaf ears or lure us into following them blindly. Predictive analytics, although couched in the probabilities of our ignorance, comes as a digital package that we gladly receive but rarely see a need to unpack. Its outputs appear as refined algorithmic products, generated by a system that seems impenetrable to most of us and is often jealously guarded by the large corporations that own it.
Thus, the observations made during my patchy journey began to converge on the power of prediction and especially the power exerted by predictive algorithms. This allowed me to ask questions such as ‘how does Artificial Intelligence change our conception of the future and our experience of time?’ I could return to my long-standing involvement with the study of social time, and in particular the concept of Eigenzeit, which was the subject of a book I wrote in the late 1980s. A few years ago I followed up with ‘Eigenzeit. Revisited’, in which I analysed the changes introduced through our interaction with digital media and devices that had by then become our daily companions (Nowotny 2017). New temporal relationships have emerged with those who are physically distant but digitally close, so that absence and presence as well as physical and digital location have converged in an altered experience of time.
Neither I nor others could have imagined the meaning that terms like physical and social distancing would acquire only a few years later. In the midst of the COVID-19 pandemic, I saw my earlier diagnosis of an extended present confirmed. My argument had been that the line separating the present from the future was dissolving as the dynamics of innovation, spearheaded by science and technology, opened up the present to the many new options that were becoming available. The present was being extended as novel technologies and their social selection and appropriation had to be accommodated. Much of what had seemed possible only in a far-away future now invaded the present. This altered the experience of time. The present was becoming both compressed and densified while extending into the immediate future (Nowotny 1989).
What I observe now is that the future has arrived. We are living not only in a digital age but in a digital time machine. A machine fuelled by predictive algorithms that produce the energy to thrust us beyond the future that has arrived into an unknown future that we desperately seek to unravel. Hence, we scramble to compile forecasts and engage in manifold foresight exercises, attempting to gain a measure of control over what appears otherwise uncontrollable because of its unpredictable complexity. Predictive algorithms and analytics offer us reassurance as they lay out the trajectories for future behaviour. We attribute agency to them and feel heartened by the messages they deliver on the predictions that concern us most. Such is our craving for certainty that even in cases when the forecast is negative, we feel relieved that we at least know what will happen. In offering such assurance, algorithmic predictions can help us to cope with uncertainty and, at least partly, give us back some control of the future.
My background in science and technology studies (STS) allowed me to bridge the gap between science and society and reach a better understanding of the frictions and mutual misunderstandings that beset this tenuous and tension-ridden relationship. STS opens up the possibility of observing how research is actually carried out in practice and allows us to analyse the social structures and processes that underpin how science works. The pandemic has merely added a new twist, albeit a largely unfortunate one. While at the beginning of the pandemic science took centre-stage, combined with the expectation that a vaccine could soon be developed and therapeutic cures were in the pipeline, science soon became mired in political opportunism. A nasty ‘vaccine nationalism’ arose, while science was sidestepped by COVID-19 deniers and conspiracy theories that began to flourish together with anti-vax and extreme-right political movements. After a brief and bright interlude, the interface between science, politics and the public became troubled again.
The pandemic offered an advanced testing ground, especially for the biomedical sciences, whose recourse to Artificial Intelligence and the most recent digital technologies proved to be a great asset. It allowed them to sequence the genomes of the virus and its subsequent mutations in record time, with researchers sharing samples around the world and repurposing equipment in their labs to provide added