That said, there’s still considerable skepticism around how effective a tool fMRI is and how robust some of its findings are. It’s also fair to say that some of these findings challenge deeply held beliefs about the nature of free will, moral choice, kindness, compassion, and empathy. These are all aspects of ourselves that help define who we are. Yet, with the advent of fMRI and other neuroscience-based tools, it sometimes feels like we’re teetering on the precipice of realizing that who we think we are—our sense of self, or our “soul” if you like—is merely an illusion of our biology.
This in itself raises questions about the degree to which neuroscience is racing ahead of our ability to cope with what it reveals. Yet the reality is that this science is progressing at breakneck speed, and that fMRI is allowing us to dive ever deeper behind our outward selves—our facial features and our easily observed behaviors—and into the very fabric of the organ that plays such a central role in defining us. And, just like phrenology and eugenics before it, it’s opening up the temptation to interpret how our brains operate as a way to predict what sort of person we are, and what we might do.
In 2010, researchers provided a group of subjects with advice on the importance of using sunscreen every day. At the same time, the subjects’ brain activity was monitored using fMRI. It’s just one of a growing number of studies attempting to use real-time brain-activity monitoring to predict behavior.
In the sunscreen study, the subjects were asked how likely they were to take the advice they were given. A week later, researchers checked in with them to see how they’d done. Using the fMRI scans, the researchers were able to predict which subjects were going to use sunscreen and which were not. More importantly, the scans predicted the subjects’ behavior more accurately than the subjects’ own stated intentions did. In other words, the researchers knew their subjects’ minds better than the subjects did themselves.39
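To make the logic concrete, here is a minimal sketch, in Python with entirely synthetic data, of how a brain-based predictor might be pitted against self-report. The feature names, sample size, and effect are illustrative assumptions, not the study’s actual analysis pipeline:

```python
# A minimal sketch (synthetic data, illustrative features) of comparing
# brain-activity-based prediction against self-report. Not the 2010 study's
# actual method; it simply shows the shape of such a comparison.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 40  # still small, as behavioral fMRI samples often are

# Hypothetical per-subject features: mean activation in a region of interest
# during the persuasive message, and the subject's stated intention (1-7).
roi_activation = rng.normal(size=n)
stated_intention = rng.integers(1, 8, size=n)

# Hypothetical outcome a week later, loosely coupled to the neural signal
# to mimic the reported effect.
used_sunscreen = (roi_activation + rng.normal(scale=0.8, size=n) > 0).astype(int)

def cv_accuracy(features):
    """Cross-validated accuracy of a simple classifier on one feature set."""
    X = np.asarray(features, dtype=float).reshape(n, -1)
    return cross_val_score(LogisticRegression(), X, used_sunscreen, cv=5).mean()

print("brain-based prediction:", cv_accuracy(roi_activation))
print("self-report prediction:", cv_accuracy(stated_intention))
```

In this toy setup, the brain-based feature outperforms self-report by construction; the point is only that the comparison itself is a straightforward exercise once you decide what to measure.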
Research like this suggests that our behavior is determined by measurable biological traits as much as by our free will, and it’s pushing the boundaries of how we understand ourselves and how we behave, both as individuals and as a society. And, while science will never enable us to predict the future in the same way as Minority Report’s precogs, it’s not too much of a stretch to imagine that fMRI and similar techniques may one day be used to predict the likelihood of someone engaging in antisocial and morally questionable behavior.
But even if predicting behavior based on what we can measure is potentially possible, is this a responsible direction to be heading in?
The problem is, just as with research that tries to tie facial features, head shape, or genetic heritage to a propensity for criminal behavior, fMRI research is equally susceptible to human biases. It’s not so much the collecting of data on brain activity that’s problematic; it’s how we decide what data to collect, and how we end up interpreting and using it.
A large part of the challenge here is understanding the motivations behind the research questions being asked, and the subtle underlying assumptions that nudge a complex series of scientific decisions toward results that seem to support those assumptions.
Here, there’s a danger of being caught up in the misapprehension that the scientific method is pure and unbiased, and that it’s solely about the pursuit of truth. To be sure, science is indeed one of the best tools we have to understand the reality of how the world around us and within us works. And it is self-correcting—ultimately, errors in scientific thinking cannot stand up to the scrutiny the scientific method exposes them to. Yet this self-correcting nature of science takes time, sometimes decades or centuries. And until it self-corrects, science is deeply susceptible to human foibles, as phrenology, eugenics, and other misguided ideas have all too disturbingly shown.
This susceptibility to human bias is greatly amplified in areas where the scientific evidence at our disposal is far from certain, and where complex statistics are needed to tease out what we think is useful information from the surrounding noise. This is very much the case with behavioral studies and fMRI research. Here, limited studies on small numbers of people, carried out under constrained conditions, can yield data that seem to support new ideas. But we’re increasingly finding that many such studies aren’t reproducible, or aren’t as generalizable as we first thought. As a result, even if a study does one day suggest that a brain scan can tell if you’re likely to steal the office paper clips, or murder your boss, the prediction is likely to be extremely suspect, and certainly not one that has any place in informing legal action—or any form of discriminatory action—before any crime has been committed.
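A toy simulation makes the statistical hazard plain. Assume, purely for illustration, a small study correlating many measured brain regions against a behavioral score when there is in fact no relationship at all; with no correction for multiple comparisons, “significant” findings appear by chance alone:

```python
# A toy simulation (pure noise, illustrative sample sizes) of why small
# studies that measure many brain regions invite spurious findings.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_subjects, n_regions = 16, 100  # small sample, many regions

behavior = rng.normal(size=n_subjects)                # random "behavioral score"
activity = rng.normal(size=(n_subjects, n_regions))   # random "activation"

# Count regions crossing the conventional p < 0.05 threshold by chance.
false_hits = sum(
    pearsonr(activity[:, r], behavior)[1] < 0.05 for r in range(n_regions)
)
print(f"{false_hits} of {n_regions} regions look 'significant' in pure noise")
# Expect roughly five: without correcting for multiple comparisons, noise
# alone generates publishable-looking correlations that later fail to replicate.
```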
Machine Learning-Based Precognition
Just as in Minority Report, the science and speculation around behavior prediction challenges our ideas of free will and justice. Is it just to restrict and restrain people based on what someone’s science predicts they might do? Probably not, because embedded in the “science” are value judgments about what sort of behavior is unwanted, and what sort of person might engage in such behavior. More than this, though, the notion of pre-justice challenges the very idea that we have some degree of control over our destiny. And this in turn raises deep questions about determinism versus free will. Can we, in principle, know enough to fully determine someone’s actions and behavior ahead of time, or is there sufficient uncertainty and unpredictability in the world to make free will and choice valid ideas?
In Chapter Two and Jurassic Park, we were introduced to the ideas of chaos and complexity, and these, it turns out, are just as relevant here. Even before we have the science pinned down, it’s likely that the complexities of the human mind, together with the incredibly broad and often unusual panoply of things we all experience, will make predicting what we do all but impossible. As with Mandelbrot’s fractal, we will undoubtedly be able to draw boundaries around more or less likely behaviors. But within these boundaries, even with the most exhaustive measurements and the most powerful computers, I doubt we will ever be able to predict with absolute certainty what someone will do in the future. There will always be an element of chance and choice that determines our actions.
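A classic worked example of this kind of bounded unpredictability, assuming nothing about brains and offered purely as an illustration, is the logistic map, a textbook chaotic system. Two starting points differing by one part in a billion remain inside the same bounds, yet quickly end up in completely different places:

```python
# The logistic map at r = 4: fully deterministic, yet two nearly identical
# starting points diverge beyond recognition within a few dozen steps.
def logistic_map(x0, r=4.0, steps=50):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a, b = 0.2, 0.2 + 1e-9  # initial conditions differing by one part in a billion
print(logistic_map(a), logistic_map(b))
# After 50 iterations the trajectories bear no resemblance to each other, even
# though every step is fully determined. The values stay bounded in [0, 1]
# (the boundary we can draw), but where they land inside it is effectively
# unpredictable.
```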
Despite this, the idea that we can predict whether someone is going to behave in a way that we consider “good” or “bad” remains a seductive one, and one that is increasingly being fed by technologies that go beyond fMRI.
In 2016, two scientists released the results of a study in which they used machine learning to train an algorithm to identify criminals based on headshots alone.40 The study was highly contentious and resulted in a significant public and academic backlash, leading the paper’s authors to state in an addendum to the paper, “Our work is only intended for pure academic discussions; how it has become a media consumption is a total surprise to us.”41
Their work struck a nerve with many people because it seemed to reinforce the idea that criminal behavior can be predicted from measurable physiological traits. But more than this, it suggested that a computer could be trained to read these traits and classify people as criminal or non-criminal, even before they’ve committed a crime.
The authors vehemently resisted suggestions that their work was biased or inappropriate, and took pains to point out that others were misinterpreting it. In fact, in their addendum, they pointed out: “Nowhere in our paper advocated the use of our method as a tool of law enforcement, nor did our discussions advance from correlation to causality.”
Nevertheless, in the original paper, they concluded: “After controlled for race, gender and age, the general law-biding [sic] public have facial appearances that vary in a significantly lesser degree than criminals.” It’s hard to interpret this as anything other than a claim that machines and artificial intelligence could be developed to distinguish between people who have criminal tendencies and those who do not.
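One widely cited critique of the study suggested how such a result can arise: the “criminal” images were official ID photos, while the “non-criminal” images were gathered from the web, where people are far more likely to be smiling. The following sketch, using synthetic data and hypothetical features, shows how a classifier can achieve impressive-looking accuracy by learning a data-collection artifact rather than anything about the people themselves:

```python
# A toy illustration (synthetic data, hypothetical features) of a classifier
# "predicting criminality" by latching onto a confound in how photos were
# collected, not onto any trait of the people in them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000

# The label records which dataset a photo came from, not behavior.
is_id_photo = rng.integers(0, 2, size=n)           # 1 = ID photo, labeled "criminal"

# Confounded feature: smiling is rare in ID photos, common in web photos.
smile_prob = np.where(is_id_photo == 1, 0.1, 0.7)
smiling = (rng.random(n) < smile_prob).astype(float)

# An irrelevant facial measurement, identically distributed in both groups.
jaw_width = rng.normal(size=n)

X = np.column_stack([smiling, jaw_width])
model = LogisticRegression().fit(X, is_id_photo)
print("training accuracy:", model.score(X, is_id_photo))
print("weights (smiling, jaw width):", model.coef_[0])
# The accuracy looks impressive (around 80%), but the large weight on
# "smiling" and the near-zero weight on "jaw width" reveal that the model
# has learned a data-collection artifact, not a physiological criminal trait.
```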
Part of why this is deeply disturbing is that it taps into the issue of “algorithmic bias”—our ability to create artificial-intelligence-based apps and machines that reflect the unconscious (and sometimes conscious) biases of those who develop them. Because of this, there’s a very real possibility that an artificial judge and