surface to face enduring invisibility. In the end, however, a face without a butt cannot sit. It has to take a stand. And a butt without a face needs a stand-in for most kinds of communication. Proxy politics happens between taking a stand and using a stand-in. It is in the territory of displacement, stacking, subterfuge, and montage that both the worst and the best things happen.

       5

       A Sea of Data: Apophenia and Pattern (Mis-)Recognition

[Image]

      This is an image from the Snowden files. It is labeled “secret.”1 Yet one cannot see anything on it. This is exactly why it is symptomatic.

      Not seeing anything intelligible is the new normal. Information is passed on as a set of signals that cannot be picked up by human senses. Contemporary perception is machinic to a large degree. The spectrum of human vision only covers a tiny part of it. Electric charges, radio waves, light pulses encoded by machines for machines are zipping by at slightly subluminal speed. Seeing is superseded by calculating probabilities. Vision loses importance and is replaced by filtering, decrypting, and pattern recognition. Snowden’s image of noise could stand in for a more general human inability to perceive technical signals unless they are processed and translated accordingly.

      But noise is not nothing. On the contrary, noise is a huge issue, not only for the NSA but for machinic modes of perception as a whole.

      Signal v. Noise was the title of a column on the internal NSA website running from 2011 to 2012. It succinctly frames the NSA’s main problem: how to extract “information from the truckloads of data”: “It’s not about the data or even access to the data. It’s about getting information from the truck-loads of data … Developers, please help! We’re drowning (not waving) in a sea of data—with data, data everywhere, but not a drop of information.”2

      Analysts are choking on intercepted communication. They need to unscramble, filter, decrypt, refine, and process “truckloads of data.” The focus moves from acquisition to discerning, from scarcity to overabundance, from adding on to filtering, from research to pattern recognition. This problem is not restricted to secret services. Even WikiLeaks’s Julian Assange states: “We are drowning in material.”3

      Apophenia

      But let’s return to the initial image. The noise on it was actually decrypted by GCHQ technicians to reveal a picture of clouds in the sky. British analysts have been hacking video feeds from Israeli drones since at least 2008, a period which includes the recent IDF aerial campaigns against Gaza.4 But no images of these attacks exist in Snowden’s archive. Instead, there are all sorts of abstract renderings of intercepted broadcasts. Noise. Lines. Color patterns.5 According to leaked training manuals, one needs to apply all sorts of massively secret operations to produce these kinds of images.6

      But let me tell you something. I will decrypt this image for you without any secret algorithm. I will use a secret ninja technique instead. And I will even teach you how to do it for free. Please focus very strongly on this image right now.

[Image]

      Doesn’t it look like a shimmering surface of water in the evening sun? Is this perhaps the “sea of data” itself? An overwhelming body of water, which one could drown in? Can you see the waves moving ever so slightly?

      I am using a good old method called apophenia.

      Apophenia is defined as the perception of patterns within random data.7 The most common examples are people seeing faces in clouds or on the moon. Apophenia is about “drawing connections and conclusions from sources with no direct connection other than their indissoluble perceptual simultaneity,” as Benjamin Bratton recently argued.8

      One has to assume that, sometimes, analysts also use apophenia.

      Someone must have seen the face of Amani al-Nasasra in a cloud. The forty-three-year-old was blinded by an aerial strike in Gaza in 2012 while sitting in front of her TV:

      “We were in the house watching the news on TV. My husband said he wanted to go to sleep, but I wanted to stay up and watch Al Jazeera to see if there was any news of a ceasefire. The last thing I remember, my husband asked if I changed the channel and I said yes. I didn’t feel anything when the bomb hit—I was unconscious. I didn’t wake up again until I was in the ambulance.” Amani suffered second-degree burns and was largely blinded.9

      What kind of “signal” was extracted from what kind of “noise” to suggest that al-Nasasra was a legitimate target? Which faces appear on which screens, and why? Or to put it differently: Who is “signal,” and who disposable “noise”?

      Pattern Recognition

      Jacques Rancière tells a mythical story about how the separation of signal and noise might have been accomplished in Ancient Greece. Sounds produced by affluent male locals were defined as speech, whereas women, children, slaves, and foreigners were assumed to produce garbled noise.10 The distinction between speech and noise served as a kind of political spam filter. Those identified as speaking were labeled citizens and the rest as irrelevant, irrational, and potentially dangerous nuisances. Similarly, today, the question of separating signal and noise has a fundamental political dimension. Pattern recognition resonates with the wider question of political recognition. Who is recognized on a political level and as what? As a subject? A person? A legitimate category of the population? Or perhaps as “dirty data”?

      What is dirty data? Here is one example:

      Sullivan, from Booz Allen, gave the example of the time his team was analyzing demographic information about customers for a luxury hotel chain and came across data showing that teens from a wealthy Middle Eastern country were frequent guests.

      “There were a whole group of 17-year-olds staying at the properties worldwide,” Sullivan said. “We thought, ‘That can’t be true.’”11

      The data was dismissed as dirty data—messed up and worthless sets of information—before someone found out that, actually, it was true.

      Brown teenagers, in this worldview, are likely to exist. Dead brown teenagers? Why not? But rich brown teenagers? This is so improbable that they must be dirty data, to be cleansed from your system! The pattern emerging from this operation to separate noise and signal is not very different from Rancière’s political noise filter for allocating citizenship, rationality, and privilege. Affluent brown teenagers seem just as unlikely as speaking slaves and women in the Greek polis.

      On the other hand, dirty data is also something like a cache of surreptitious refusal; it expresses a refusal to be counted and measured:

      A study of more than 2,400 UK consumers by research company Verve found that 60% intentionally provided wrong information when submitting personal details online. Almost one quarter (23%) said they sometimes gave out incorrect dates of birth, for example, while 9% said they did this most of the time and 5% always did it.12

      Dirty data is where all of our refusals to fill out the constant onslaught of online forms accumulate. Everyone is lying all the time, whenever possible, or at least cutting corners. Not surprisingly, the “dirtiest” area of data collection is consistently pointed out to be the health sector, especially in the US. Doctors and nurses are singled out for filling out forms incorrectly. It seems that health professionals are just as unenthusiastic about filling out forms for systems designed to replace them as consumers are about performing clerical work for corporations that will spam them in return.

      In his book The Utopia of Rules, David Graeber gives a profoundly moving example of the forced extraction of data. After his mom suffered a stroke, he went through the ordeal of having to apply for Medicaid on her behalf:

      I