Trivers went on to suggest that a large proportion of human emotion and experience—such as gratitude, sympathy, guilt, trust, friendship, and moral outrage—grew out of the same sort of simple reciprocal tit-for-tat logic that governed the daily interactions between big fish and the smaller marine life that scrubbed their gills. These efforts built on earlier attempts to explain how reciprocity drives social behavior. In the Nicomachean Ethics, Aristotle discusses how the best form of friendship involves a relationship between equals—one in which a genuinely reciprocal relationship is possible. In Plato’s Crito, Socrates considers whether citizens might have a duty of gratitude to obey the laws of the state, in much the way they have duties of gratitude to their parents for their existence, sustenance, and education. Overall, one fact shines through: reciprocity rules.

      THE ITERATED DILEMMA

      Since the Prisoner’s Dilemma was first formulated in 1950, it has been expressed in many shapes, forms, and guises. The game had been played in a repeated form before, but Trivers made a new advance when he introduced the repeated game to an analysis of animal behavior. This iterated Prisoner’s Dilemma is possible in a colony of vampire bats and at the cleaning stations used by fish on a reef, which were the subject of Trivers’s paper.

      However, the implications of what happens when the Prisoner’s Dilemma is played over and over again were first described in 1965, before Trivers’s analysis, by a smart double act: Albert Chammah, who had emigrated from Syria to the United States to study industrial engineering, and Anatol Rapoport, a remarkable Russian-born mathematician-psychologist who used game theory to explore the limits of purely rational thinking and would come to dedicate himself to the cause of global peace. In their book, Prisoner’s Dilemma, they gave an account of the many experiments in which the game had been played.

      Around the time that Trivers made his contribution, another key insight into the game had come from the Israeli mathematician Robert J. Aumann, who had advised on cold war arms control negotiations in the 1960s and would go on to share the Nobel Prize in Economics in 2005. Aumann had analyzed the outcome of repeated encounters and demonstrated the prerequisites for cooperation in various situations—for instance, where there are many participants, when there is infrequent interaction, and when participants’ actions lack transparency.

      In the single-shot game, the one that I analyzed earlier in the discussion of the payoff matrix of the Prisoner’s Dilemma, it was logical to defect. But Aumann showed that peaceful cooperation can emerge in a repeated game, even when the players have strong short-term conflicting interests. One player will collaborate with another because he knows that if he is cheated today, he can go on to punish the cheat tomorrow. It seems that the prospect of vengeful retaliation paves the way for amicable cooperation. By this view, cooperation can emerge out of nothing more than the rational calculation of self-interest. Aumann named this insight the “folk theorem”—one that had circulated by word of mouth and, like so many folk songs, had no original author and had been embellished by many people. In 1959, he generalized it to games between many players, some of whom might gang up on the rest.
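      To make the one-shot logic concrete, here is a minimal sketch in Python. The payoff values are the conventional ones (T = 5, R = 3, P = 1, S = 0), an illustrative assumption rather than numbers taken from the text; whatever the opponent plays, defecting earns strictly more in a single round.

      # Conventional Prisoner's Dilemma payoffs (illustrative values only):
      # my score for (my_move, their_move); "C" = cooperate, "D" = defect.
      PAYOFF = {
          ("C", "C"): 3,  # reward for mutual cooperation (R)
          ("C", "D"): 0,  # sucker's payoff (S)
          ("D", "C"): 5,  # temptation to defect (T)
          ("D", "D"): 1,  # punishment for mutual defection (P)
      }

      # Defection dominates a single round: whichever move the opponent
      # makes, defecting scores strictly more than cooperating would.
      for their_move in ("C", "D"):
          assert PAYOFF[("D", their_move)] > PAYOFF[("C", their_move)]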

      The folk theorem, though powerful, does not tell you how to play the game when it is repeated. It says there is a strategy that can induce a rational opponent to cooperate, but it does not say what is a good strategy and what is a bad one. So, for example, it could show that cooperation is a good response to the Grim strategy. That strategy says that I will cooperate as long as you cooperate, but if you defect once then I will permanently switch to defection. In reality, such strategies are far from being the best way to stimulate cooperation in long-drawn-out games.
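      Written in the same sketch style, Grim takes only a few lines. The interface, in which each strategy receives both players’ past moves and returns its next move, is an illustrative convention of these sketches, not the formalism of the original analyses.

      def grim(my_history, their_history):
          # Grim: cooperate until the opponent defects once,
          # then switch to defection permanently.
          return "D" if "D" in their_history else "C"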

      To find out how to play the game, thinkers in the field had to wait for a novel kind of tournament, one that would shed light on all the nuances of the repeated Prisoner’s Dilemma. This was developed by Robert Axelrod, a political scientist at the University of Michigan, who turned the results into a remarkable book, The Evolution of Cooperation, which opens with the arresting line “Under what conditions will cooperation emerge in a world of egoists without central authority?” In his direct prose, Axelrod clearly described how he had devised a brilliant new way to tease out the intricacies of the Dilemma.

      He organized an unusual experiment, a virtual tournament in a computer. The “contestants” were programs submitted by scientists so they could be pitted against each other in repeated round-robin Prisoner’s Dilemma tournaments. This was the late 1970s, and at that time the idea was breathtakingly novel. To put his tournaments in context: commercial, coin-operated video games had only appeared that same decade. But Axelrod’s idea was no arcade gimmick. Unlike humans, who get bored, computers can tirelessly play these strategies against each other and unbendingly stick to the rules.

      Researchers around the world mailed Axelrod fourteen different programs. He added one of his own—one that randomly cooperates and defects—and pitted all of them against each other in a round-robin tournament. Success was easy to measure. The winner would be the strategy that received the highest number of points after having played all other strategies in the computer over two hundred moves. During the entire tournament, Axelrod explored 120,000 moves and 240,000 choices.
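      In the same sketch notation, the heart of such a tournament is small. What follows is a simplified outline that reuses the PAYOFF table and the strategy convention above; it leaves out details of Axelrod’s actual event, such as pairing each program against its own twin and averaging over repeated runs.

      import itertools

      def play_match(strat_a, strat_b, rounds=200):
          # Play one repeated game of the given length and
          # return both players' total scores.
          hist_a, hist_b = [], []
          score_a = score_b = 0
          for _ in range(rounds):
              move_a = strat_a(hist_a, hist_b)
              move_b = strat_b(hist_b, hist_a)
              score_a += PAYOFF[(move_a, move_b)]
              score_b += PAYOFF[(move_b, move_a)]
              hist_a.append(move_a)
              hist_b.append(move_b)
          return score_a, score_b

      def round_robin(strategies, rounds=200):
          # Every strategy meets every other once; the winner is the
          # strategy with the highest total score across all matches.
          totals = {name: 0 for name in strategies}
          for (name_a, a), (name_b, b) in itertools.combinations(strategies.items(), 2):
              score_a, score_b = play_match(a, b, rounds)
              totals[name_a] += score_a
              totals[name_b] += score_b
          return totals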

      Because the computers allowed for limitless complexity of the programs entered into the tournament, one might expect that the biggest—and thus “smartest”—program would win. But size is not everything. In fact, the simplest contestant won hands down, much to the surprise of the theorists. The champion turned out to be a measly little four-line computer program devised by none other than Anatol Rapoport.

      Called Tit for Tat, the strategy starts with a cooperative move and then always repeats the co-player’s previous move. A player always starts by keeping faith with his partner but from then on mimics the last move of his opponent, betraying only when his partner betrays him. This is more forgiving than Grim, where a single defection triggers an eternity of defection.
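      In the sketch notation used above, Tit for Tat is barely longer than its description, and it can be pitted against the earlier sketches directly:

      def tit_for_tat(my_history, their_history):
          # Tit for Tat: open with cooperation, then simply
          # mirror the opponent's move from the previous round.
          return "C" if not their_history else their_history[-1]

      # Example: a miniature round robin of the sketched strategies.
      always_defect = lambda mine, theirs: "D"
      print(round_robin({"TitForTat": tit_for_tat,
                         "Grim": grim,
                         "AllD": always_defect}))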

      Standing back from the Prisoner’s Dilemma, it is easy to see the advantage of adopting a simple strategy. If you are too clever, your opponent may find it hard to read your intentions. If you appear too unresponsive or obscure or enigmatic, your adversary has no incentive to cooperate with you. Equally, if a program (or a person for that matter) acts clearly and sends out a signal that it cannot be pushed around, it makes sense for others to cooperate with it.

      What was also striking was that this discovery was old hat. The contestants in the computer Prisoner’s Dilemma tournament already knew about this powerful strategy. Work published at the start of that decade had shown that Tit for Tat does well. Indeed, the strategy carries echoes of the one that the nuclear powers had adopted during the cold war, each promising not to use its stockpiles of A- and H-bombs so long as the other side also refrained. Many of the contestants tried to improve on this basic recipe. “The striking fact is that none of the more complex programs submitted was able to perform as well as the original, simple Tit for Tat,” observed Axelrod.

      When he looked in detail at the high-ranking and low-ranking strategies to tease out the secret of success, Axelrod found that one property in particular appeared to be important. “This is the property of being nice, which is to say never being the first to defect.” Tit for Tat is also interesting because it does not bear a grudge beyond the immediate retaliation, thereby perpetually furnishing the opportunity of establishing “trust” between opponents: if the opponent is conciliatory, both reap the rewards of cooperation.

      Axelrod went on to organize a second tournament, this time attracting sixty-three entries from six countries, ranging from a ten-year-old computer obsessive to a gaggle of professors of various persuasions. One entry arrived from the British biologist John Maynard Smith, whom we will learn much more about later. Maynard Smith submitted Tit for Two Tats, a strategy that cooperates unless the opponent has defected twice in a row. Maynard Smith, a revered figure in his field, limped in at twenty-fourth place.
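      In the notation of the earlier sketches, Maynard Smith’s entry waits for two consecutive betrayals before striking back:

      def tit_for_two_tats(my_history, their_history):
          # Tit for Two Tats: defect only if the opponent defected on
          # each of the last two rounds; otherwise keep cooperating.
          if their_history[-2:] == ["D", "D"]:
              return "D"
          return "C"

      Being slower to retaliate, it is more forgiving than Tit for Tat, and correspondingly easier for exploitative strategies to take advantage of.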

      Rapoport, however, followed the maxim of British soccer leagues: “Never change a winning team.” Once more, he fielded the Tit-for-Tat strategy, and once again it won: it really did pay to follow this simple strategy. This was the very tournament that had inspired Karl Sigmund to focus on the Dilemma and that, in turn, inspired me when he gave me that sermon on the mountain.