And yet, here we are, facing a global sustainability crisis with many dire consequences and mounting geopolitical tensions. As I write, we are in the grip of a pandemic, with others to follow if the natural habitats of animals that carry zoonotic viruses capable of spreading to humans continue to be eroded. The deficiencies of our institutions, created in previous centuries and designed to meet challenges different from our own, stare us in the face. The spectre of social unrest and polarized societies has returned, when what is needed is greater social coherence, equality and social justice if we are to escape our current predicament.
We have embarked on a journey to live forward with predictive algorithms letting us see further ahead. Luckily, we have become increasingly aware of how crucial access to quality data of the right kind is. We are wary of the further erosion of our privacy and recognize that the circulation of wilful lies and hate speech on social media poses a threat to liberal democracy. We put our trust in AI while we also distrust it. This ambivalence is likely to last, for however smart the algorithms we entrust with agency when living forward in the digital age may be, they do not go beyond finding correlations.
Even the most sophisticated neural networks modelled on a simplified version of the brain can only detect regularities and identify patterns based on data that comes from the past. No causal reasoning is involved, nor does an AI pretend that it is. How can we live forward if we fail to understand Life as it has evolved in the past? Some computer scientists, Judea Pearl among them, deplore the absence of any search for cause–effect relationships. ‘Real intelligence’, they argue, involves causal understanding. If AI is to reach such a stage, it must be able to reason counterfactually. It is not sufficient merely to fit a curve along an indicated timeline. The past must be opened up in order to understand a sentence like ‘what would have happened if …’. Human agency consists in what we do, but understanding what we did in the past in order to make predictions about the future must always involve the counterfactual that we could have acted differently. In transferring human agency to an AI, we must ensure that it has the capacity to ‘know’ this distinction, which is basic to human reasoning and understanding (Pearl and Mackenzie 2018).
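To make the contrast concrete, here is a minimal sketch, not drawn from this book, of the gap between curve-fitting and causal reasoning. It simulates observational data in which a hidden confounder links a variable X to an outcome Y; a purely predictive fit recovers the observational slope, while the question ‘what would Y have been if we had set X ourselves?’ has a different answer. All variable names and coefficients are illustrative assumptions.

```python
# A minimal sketch (illustrative assumptions only): the correlation found by
# curve-fitting on observational data vs. the effect under an intervention.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder Z drives both the "cause" X and the outcome Y.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)             # X moves largely with Z
y = 1.0 * x + 3.0 * z + rng.normal(size=n)   # true causal effect of X on Y is 1.0

# What a purely predictive model learns: the slope of Y on X in the observed data.
observational_slope = np.cov(x, y)[0, 1] / np.var(x)

# The counterfactual/interventional question: if X had been *set* by us
# (Pearl's do-operator), it would no longer track Z, and the slope is ~1.0.
x_do = rng.normal(size=n)                    # X fixed by intervention, independent of Z
y_do = 1.0 * x_do + 3.0 * z + rng.normal(size=n)
interventional_slope = np.cov(x_do, y_do)[0, 1] / np.var(x_do)

print(f"slope learned from observation: {observational_slope:.2f}")   # roughly 2.2
print(f"slope under intervention do(X): {interventional_slope:.2f}")  # roughly 1.0
```

The two numbers diverge because the correlation in the observed data reflects the confounder as well as the causal link; no amount of curve-fitting on the observational dataset alone reveals the interventional answer.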
The power of algorithms to churn out practical and measurable predictions that are useful in our daily lives – whether in the management of health systems, in automated financial trading, in making businesses more profitable or in expanding the creative industries – is so great that we easily sidestep or even forget the importance of the link between understanding and prediction. But we must not yield to the convenience of efficiency and abandon the desire to understand, nor the curiosity and persistence that underpin it (Zurn and Shankar 2020).
Two different ways of thinking about how to advance have long existed. One line of thought traces its lineage to the ancient fascination with automata and, more generally, to the smooth functioning of the machines that have fuelled technological revolutions, with their automated production lines devoted to increasing efficiency and lowering costs. This is where all the promises of automation enter, couched in wild technological dreams and imaginaries. Deep Learning algorithms will continue to equip computers with a statistical ‘understanding’ of language and thus expand their ‘reasoning’ capacity. There is confidence among AI practitioners that work on ethical AI is progressing well. The tacit assumption is that the dark side of digital technologies and all the hitherto unresolved problems will also be sorted out by an ultimate problem-solving intelligence, a kind of far-sighted, benign Leviathan fit to manage our worries and steer us through the conflicts and challenges facing humanity in the twenty-first century.
The other line of thinking insists that theoretical understanding is necessary and urgent, not only for mathematicians and computational scientists, but also for developing tools to assess the performance and output quality of Deep Learning algorithms and to optimize their training. This requires the courage to approach the difficult questions of ‘why’ and ‘how’, and to acknowledge both the uses and the limitations of AI. Since algorithms have huge implications for humans, it will be important to make them fair and to align them with human values. Even if we can confidently predict that algorithms will shape the future, the question of which kinds of algorithms will do the shaping is still open (Wigderson 2019).
Understanding also includes the expectation that we can learn how things work. If an AI system claims to solve problems at least as well as a human, then there is no reason not to expect and demand transparency and accountability from it. In practice, we are far from receiving sufficiently detailed answers as to how the inner representations of an AI work, let alone an answer to the question of cause and effect. The awareness begins to sink in that we are about to lose something connected to what makes us human, however difficult it is to pin down. Maybe the time has come to admit that we are not in control of everything, to humbly concede that our tenuous and risky journey of co-evolution with the machines we have built will be more fecund if we renew our attempt to understand our shared humanity and how we might live together better. We have to continue our exploration of living forward while trying to understand Life backwards and linking the two. Prediction will then no longer only map the trajectories of living forward for us, but will become an integral part of understanding how to live forward. Rather than foretelling what will happen, it will help us understand why things happen.
After all, what makes us human is our unique ability to ask the question: why do things happen, and how?