In his insightful book The Master Switch, Tim Wu, originator of the term “Net neutrality,” explains why this may be the biggest media and communications policy battle ever waged. “While there were once distinct channels of telephony, television, radio, and film,” Wu writes, “all information forms are now destined to make their way increasingly along the master network that can support virtually any kind of data traffic.” Convergence has raised the stakes. “With every sort of political, social, cultural, and economic transaction having to one degree or another now gone digital, this proposes an awesome dependence on a single network, and no less vital need to preserve its openness from imperial designs,” Wu warns. “This time is different: with everything on one network, the potential power to control is so much greater.”
While we like to imagine the Internet as a radical, uncontrollable force—it’s often said the system was designed to survive a nuclear attack—it is in fact vulnerable to capture by the private interests we depend on for access. In 2010, rulings by the FCC based on a controversial proposal put forth by Verizon and Google established network neutrality on wired broadband but failed to extend the common carrier principle to wireless connections; in other words, network neutrality rules apply to the cable or DSL service you use at home but not to your cell phone. In 2013, Google showed further signs of weakening its resolve on the issue when it began to offer fiber broadband with advantageous terms of service that many observers found to violate the spirit of Net neutrality.40
Given the steady shift to mobile computing, including smartphones, tablets, and the emerging Internet-of-things (the fact that more and more objects, from buildings to cars to clothing, will be networked in coming years), the FCC’s 2010 ruling was already alarmingly insufficient when it was made. Nevertheless, telecommunications companies went on offense, with Verizon successfully challenging the FCC’s authority to regulate Internet access in federal appeals court in early 2014. But even as the rules were struck down, the judges acknowledged concerns that broadband providers represent a real threat, describing the kind of discriminatory behavior they were declaring lawful: companies might restrict “end-user subscribers’ ability to access the New York Times website” in order to “spike traffic” to their own news sources or “degrade the quality of the connection to a search website like Bing if a competitor like Google paid for prioritized access.”41
Proponents of Net neutrality maintain that the FCC rules were in any case riddled with loopholes and that the goal now is to ground open Internet rules and the FCC’s authority on firmer legal footing (namely by reclassifying broadband as a “telecommunications” and not an “information” service under Title II of the Communications Act, thereby automatically subjecting ISPs to common carrier obligations). Opponents contend that Net neutrality would unduly burden telecom companies, which should have the right to dictate what travels through their pipes and charge accordingly, while paving the way for government control of the Internet. As a consequence of the high stakes, Net neutrality—a fight for the Internet as an open platform—has become a cause célèbre, and rightly so. However arcane the discussion may sometimes appear, the outcome of this battle will profoundly affect us all, and it is one worth fighting for.
Yet openness at the physical layer is not enough. While an open network ensures the equal treatment of all data—something undoubtedly essential for a democratic networked society—it does not sweep away all the problems of the old-media model, failing to adequately address the commercialization and consolidation of the digital sphere. We need to find other principles that can guide us, principles that better equip us to comprehend and confront the market’s role in shaping our media system, principles that help us rise to the unique challenge of bolstering cultural democracy in a digital era. Openness cannot protect us from, and can even perpetuate, the perils of a peasant’s kingdom.
Not that many years ago, Laura Poitras was living in Yemen, alone, waiting. She had rented a house close to the home of Abu Jandal, Osama bin Laden’s former bodyguard and the man she hoped would be the subject of her next documentary. He put her off when she asked to film him, remaining frustratingly elusive. Next week, he’d tell her, next week, hoping the persistent American would just go away.
“I was going through hell,” Poitras said, sitting in her office a few months after the premiere of her movie The Oath, the second in her trilogy of documentaries about foreign policy and national security after September 11. “I just didn’t know if it was going to be two years, ten years, you know?” She waited, sure there was a story to be told and that it was extraordinary, but not sure if she’d be allowed to tell it. As those agonizing months dragged on, she did her best to be productive and pursued other leads. During Ramadan Poitras was invited to the house of a man just released from Guantánamo, whom she hoped to interview. “People almost had a heart attack that I was there,” Poitras recounts. “I didn’t film. I was shut down, and I was sat with the women. They were like, ‘Aren’t you afraid that they’re going to cut your head off?’”
Bit by bit Abu Jandal opened up. Poitras would go home with only three or four hours of footage, but what she caught on tape was good enough to keep her coming back, a dozen times in all. “I think it probably wasn’t until a year into it that I felt that I was going to get a film,” Poitras said. A year of waiting, patience, uprootedness, and uncertainty before she knew that her work would come to anything.
With the support of PBS and a variety of grants, The Oath took almost three years to make, including a solid year in the editing room. The film’s title speaks of two pledges: one made by Jandal and others in al-Qaeda’s inner circle promising loyalty to bin Laden and another made by an FBI agent named Ali Soufan, who interrogated Abu Jandal when he was captured by U.S. forces. “Soufan was able to extract information without using violence,” Poitras has said, and he testified to Congress against violent interrogation tactics. “One of his reasons is because he took an oath to the Constitution. In a broad sense, the film is about whether these men betrayed their loyalties to their oaths.”1
“I always think, whenever I finish a film, that I would never have done that if I had known what it would cost emotionally, personally.” The emotional repercussions of disturbing encounters can be felt long after the danger has passed; romantic relationships are severed by distance; the future is perpetually uncertain. Poitras, however, wasn’t complaining. She experiences her work as a gift, a difficult process but a deeply satisfying one, and was already busy planning her next project, about the erosion of civil liberties in the wake of the war on terror.
In January 2013 she was contacted by an anonymous source who turned out to be Edward Snowden, the whistle-blower preparing to make public a trove of documents revealing the National Security Agency’s massive secret digital surveillance program. He had sought Poitras out, certain that she was someone who would understand the scope of the revelations and the need to proceed cautiously. Soon she was on a plane to Hong Kong to shoot an interview that would shake the world and in the middle of another film that would take her places she never could have predicted at the outset.2
No simple formula explains the relationship between creative effort and output, nor does the quantity of time invested in a project correlate in any clear way to quality—quality being, of course, a slippery and subjective measure in itself. We can appreciate obvious skill, such as the labor of musicians who have devoted decades to becoming masters of their form, but it’s harder to assess work that is more subjective, more oblique, or less polished.
Complex