NONZERO: THE LOGIC OF HUMAN DESTINY, by ROBERT WRIGHT

PART I: A BRIEF HISTORY OF HUMANKIND

PART II: A BRIEF HISTORY OF ORGANIC LIFE

PART III: FROM HERE TO ETERNITY


In zero-sum games, the fortunes of the players are inversely related. In tennis, in chess, in boxing, one contestant's gain is the other's loss. In non-zero-sum games, one player's gain needn't be bad news for the other(s). Indeed, in highly non-zero-sum games the players' interests overlap entirely. In 1970, when the three Apollo 13 astronauts were trying to figure out how to get their stranded spaceship back to earth, they were playing an utterly non-zero-sum game, because the outcome would be either equally good for all of them or equally bad. (It was equally good.)

Back in the real world, things are usually not so clear-cut. A merchant and a customer, two members of a legislature, two childhood friends sometimes—but not always—find their interests overlapping. To the extent that their interests do overlap, their relationship is non-zero-sum; the outcome can be win-win or lose-lose, depending on how they play the game.

[SNIP]

...The seminal exercise in the computer simulation of evolving non-zero-sum relationships is described in Robert Axelrod's classic book The Evolution of Cooperation. It involved that most famous of non-zero-sum games, the prisoner's dilemma.

Actually, the fame is in some ways unfortunate, because the prisoner's dilemma has a couple of quirks that impede intuitive comprehension. For one thing, in this game the object is to get the lowest score, since the score represents how many years each player has to spend in prison. The second counterintuitive feature is that in this game to "cheat" is to tell the truth and to "cooperate" is to not tell the truth. But, for better or worse, the prisoner's dilemma is the textbook non-zero-sum game, so we'll here explore it by way of showing how, with the help of game theory, evolution can be simulated on a computer.

In the prisoner's dilemma, two partners in crime are being interrogated separately. The state lacks the evidence to convict them of the crime they committed but does have enough evidence to convict both on a lesser charge bringing, say, a one-year prison term for each. The prosecutor wants conviction on the more serious charge, and pressures each man individually to confess and implicate the other. She says: "If you confess but your partner doesn't, I'll let you off free and use your testimony to lock him up for ten years. And if you don't confess, yet your partner does, you go to prison for ten years. If you confess and your partner does too, I'll put you both away, but only for three years." The question is: Will the two prisoners cooperate with each other, both refusing to confess? Or will one or both of them "defect" ("cheat")?

As noted in chapter 8, the prisoner's dilemma has some key features that have shaped the unfolding of non-zero-sumness during human history.

First, there's the importance of communication. If the two prisoners can't communicate with each other, and both behave logically, they will almost certainly both suffer as a result. To see this, just pretend you're one of the prisoners and run through your options one by one. Suppose, first of all, that your partner cheats on you by copping a plea. Then you're better off cheating on him and confessing: you get three years in jail, as opposed to the ten you'd get by staying silent. (In the payoff matrix shown below, this fact is signified by comparing the first number in the upper-left and lower-left quadrants.) Now suppose your partner doesn't cheat—doesn't confess. You're still better off cheating, because then you get out of jail, whereas if you stayed mum like your partner, you'd each get a one-year jail term. So the logic seems irresistible: don't cooperate with your partner; cheat on him.

                                     HIS STRATEGY
                                     Confess (cheat)               Stay mum (cooperate)

YOUR       Confess (cheat)           You: 3 years, Him: 3 years    You: 0 years, Him: 10 years
STRATEGY   Stay mum (cooperate)      You: 10 years, Him: 0 years   You: 1 year, Him: 1 year

But if both of you follow this logic, and both cheat, then you'll both get three years in jail. (That is, you both wind up in the upper-left-hand quadrant.) And if both of you hadn't cheated—if both had stayed mum—you would have just gotten one year in jail. So mutual mumness is, relatively speaking, the win-win outcome. But it makes no sense for either of you to stay mum unless you've both been assured by the other that he will stay mum, too. That is why communication is vital.
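
To make that logic concrete, here is a minimal sketch in Python; the constant names and dictionary layout are illustrative choices, not anything from the book. It encodes the matrix above in prison years and checks that confessing leaves you with fewer years no matter what your partner does, even though mutual silence beats mutual confession.

```python
# Payoff matrix from the text, in prison years (lower is better).
# Keys are (your_move, his_move); values are (your_years, his_years).
CONFESS, STAY_MUM = "confess", "stay mum"

PAYOFF = {
    (CONFESS,  CONFESS):  (3, 3),    # both cheat: upper-left quadrant
    (CONFESS,  STAY_MUM): (0, 10),   # you cheat, he cooperates: upper-right
    (STAY_MUM, CONFESS):  (10, 0),   # you cooperate, he cheats: lower-left
    (STAY_MUM, STAY_MUM): (1, 1),    # both cooperate: lower-right
}

# Whatever your partner does, confessing costs you fewer years...
for his_move in (CONFESS, STAY_MUM):
    assert PAYOFF[(CONFESS, his_move)][0] < PAYOFF[(STAY_MUM, his_move)][0]

# ...yet mutual silence (1 year each) beats mutual confession (3 years each).
assert PAYOFF[(STAY_MUM, STAY_MUM)][0] < PAYOFF[(CONFESS, CONFESS)][0]
print("Confessing dominates, but both would rather both stay mum.")
```

Both assertions hold, which is the dilemma in miniature: the individually rational move is collectively the worse one.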

A second key feature of the prisoner's dilemma is the importance of trust. It is crucial that when your partner and you assure each other you will stay mum, you believe each other. After all, if you suspect that your partner may renege on the deal, then you're better off repaying his cheating with cheating, copping a plea yourself: a three-year prison sentence as opposed to a ten-year sentence. What's more, your suspicions are hardly irrational, since your partner does have a temptation to cheat: If he confesses while you honor your deal and stay mum, he gets to walk. (What are you going to do? Sue him?)

Axelrod organized a tournament that amounted to a simulation of biological evolution. Several dozen people submitted computer programs that embodied particular strategies for playing the prisoner's dilemma. The programs were then allowed to interact with each other—as if they constituted a kind of society. Upon each interaction, the two programs involved would "decide"—on the basis of their algorithms—whether to cheat or cooperate. (Often, in making this decision, they would draw on their memory of how the other program had behaved in past encounters.) Depending on what each had decided, both would receive a score representing the outcome of that encounter.

Then each would move on to the next encounter, with another program. In each round, there would be enough encounters so that every program interacted with every other program 200 times. At the end of each round, the scores for each program, each "player," were added up. Programs were then allowed to "replicate" in proportion to their score. So the better your program did in one round—one "generation"—the more copies of it there would be in the next generation.
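
A bare-bones version of that setup might look like the following Python sketch, which reuses the CONFESS, STAY_MUM, and PAYOFF definitions from the earlier sketch. The strategy signature (a function that sees only the opponent's past moves), the two placeholder entries, and the inverse-years fitness rule are illustrative assumptions, not Axelrod's actual code: every pair of programs meets for 200 encounters, and a program's share of the next generation grows as its accumulated prison time shrinks.

```python
import itertools
from collections import Counter

ROUNDS = 200  # encounters per pairing, as in the tournament described above

def always_cheat(opponent_history):
    """Illustrative entry: confess no matter what."""
    return CONFESS

def always_cooperate(opponent_history):
    """Illustrative entry: stay mum no matter what."""
    return STAY_MUM

def play_match(strategy_a, strategy_b, rounds=ROUNDS):
    """Total prison years (a_years, b_years) over one iterated match."""
    history_a, history_b = [], []
    years_a = years_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each side sees the *other's* past moves
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        years_a, years_b = years_a + pay_a, years_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return years_a, years_b

def one_generation(strategies, population_size=100):
    """Round-robin every pair, then hand out next-generation copies by fitness.

    Fitness here is simply the inverse of accumulated prison years, an
    illustrative stand-in for "replicate in proportion to how well you did."
    """
    total_years = Counter()
    for (name_a, strat_a), (name_b, strat_b) in itertools.combinations(strategies.items(), 2):
        years_a, years_b = play_match(strat_a, strat_b)
        total_years[name_a] += years_a
        total_years[name_b] += years_b
    fitness = {name: 1.0 / (1 + total_years[name]) for name in strategies}
    scale = population_size / sum(fitness.values())
    return {name: round(f * scale) for name, f in fitness.items()}
```

With only always_cheat and always_cooperate in the pool, the cheater racks up far fewer years and claims nearly the whole next generation; the dynamics only get interesting once conditional strategies like the one described next join in.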

The winning program was called "Tit for Tat," submitted by Anatol Rapoport (whose 1960 book Fights, Games, and Debates is a very nice introduction to game theory). Tit for Tat's strategy was very simple. On its first encounter with any given program, it would cooperate. On subsequent encounters, it would do whatever that program had done on the previous occasion. In short, Tit for Tat would reward past cooperation with present cooperation and would punish past cheating with present cheating. Generation by generation, Tit for Tat came to dominate the population, so that, more and more, Tit for Tats spent their time interacting with other Tit for Tats. Such interactions invariably blossomed into stable, cooperative relationships. As the game wore on, the "society" of players in Axelrod's computer exhibited more and more amity and order.
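
In the framework sketched above, Tit for Tat itself takes only a few lines (the function below follows that illustrative signature, not Rapoport's original entry): cooperate on the first encounter, then echo whatever the other program did last time.

```python
def tit_for_tat(opponent_history):
    """Open by cooperating; thereafter mirror the opponent's previous move."""
    if not opponent_history:
        return STAY_MUM            # first encounter: cooperate
    return opponent_history[-1]    # afterwards: repay like with like
```

With the payoffs above, two Tit for Tats run through play_match cooperate on all 200 encounters and accumulate just 200 years each, while Tit for Tat against always_cheat absorbs the ten-year hit once and then settles into mutual defection; that is roughly why, as Tit for Tats become common, they flourish in one another's company.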

One striking thing about this evolution of cooperation is that it transpired without the players being allowed to communicate with each other—even though communication, in the generic non-zero-sum situation, is considered a prerequisite for a reliably positive outcome. The reason this could happen is that players would encounter the same players again and again (making this an "iterated" prisoner's dilemma). Thus, by observing what a given player had done on the last occasion, another player could, in effect, gather information about the player's likely future behavior. (This is, in a sense, a kind of de facto communication—and certainly a form of information transmission.) What's more, players could punish each other for past cheating and reward each other for past cooperation.

By showing how cooperation could evolve without formal communication, Axelrod had shown how reciprocal altruism [see chapters 2 and 20] could evolve in animals that don't do much talking—including chimpanzees and vampire bats. He had also shown how stable, cooperative relationships could form in a very small society of humans without much explicit discussion; so long as the same players encounter each other day after day—as in a small hunter-gatherer society—trust could develop even with little explicit commitment.

Of course, through cultural evolution, the settings for non-zero-sum games have gotten much less intimate than a hunter-gatherer society. Chances are you've never met the person who made your shoes. In fact, chances are that any one person who had a hand in making your shoes has never met all the other people who had a hand in it. A key feature of cultural evolution has been to make it possible for such non-zero-sum games to get played over great distances, among a large number of players. And in these kinds of situations, typically, there does need to be explicit communication (however circuitous), and there do need to be explicit means of sustaining trust. Hence the importance of evolving information technology in expanding the scope and complexity of social organization. Hence, too, the importance of evolving "technologies of trust" (often, though not always, in the form of laws enforced by a government) in helping to realize the non-zero-sum potential that new information technologies (and other technologies) create.

Axelrod later used the computer and game theory to simulate the cultural evolution of norms. I suspect that, with computer power now dirt cheap, and graduate students always hungry for a fresh angle on old subjects, we will begin to see a lot more computer simulations of cultural evolution (not involving the prisoner's dilemma necessarily, but involving zero-sum and non-zero-sum games). In fact, it would surprise me if such efforts aren't already under way. If these efforts prove fruitful, that will be a kind of vindication of the use of game-theory terminology in arguments about the dynamics of cultural evolution.

An excerpt from the Introduction and Appendix I of Nonzero: The Logic of Human Destiny, By Robert Wright, published by Pantheon Books. Copyright 2000 by Robert Wright. www.nonzero.org