A heretical view of the prisoners' dilemma
come_to_think
Reading:  Brian Hayes, "New Dilemmas for the Prisoner", American Scientist 101(6), 422--424 (2013); http://www.americanscientist.org/issues/pub/2013/6/new-dilemmas-for-the-prisoner

This article gives further weird news about an old paradox in game theory called the prisoners' dilemma.  Rid of the fancy story that gave it its name, the game is as follows:  Two players, who cannot communicate except thru their moves, must each simultaneously choose either to cooperate or to defect.  If both cooperate, each receives a modest reward.  If both defect, each receives a booby prize.  If one cooperates & the other defects, the cooperator receives nothing, and the defector receives a large reward.  Evidently, the sensible thing is for both players to cooperate & rake in the modest reward.  Each, however, is diabolically tempted to reason as follows: whether the other player cooperates or defects, I am better off defecting (better the booby prize than nothing; better the large reward than the modest one), and so I should defect.  If both fall for that (and if I do, why shouldn't you?), each ends up with the booby prize.
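The diabolical argument can be checked mechanically. Here is a minimal sketch of the payoff structure in Python, using illustrative dollar amounts ($0 booby-case, $100 booby prize, $300 modest reward, $500 large reward; the figures match the thought experiment later in this post):

```python
C, D = "cooperate", "defect"

# Payoffs as (my reward, your reward), indexed by (my move, your move).
payoff = {
    (C, C): (300, 300),  # both cooperate: modest reward each
    (C, D): (0, 500),    # I cooperate, you defect: I get nothing
    (D, C): (500, 0),    # I defect, you cooperate: I get the large reward
    (D, D): (100, 100),  # both defect: booby prize each
}

# The "doleful logic": whatever the other player does, defection pays me more...
for other in (C, D):
    assert payoff[(D, other)][0] > payoff[(C, other)][0]

# ...and yet mutual defection leaves both of us worse off than mutual cooperation.
assert payoff[(D, D)][0] < payoff[(C, C)][0]
```

The two assertions together are the whole paradox: defection strictly dominates for each player separately, while mutual cooperation dominates for the pair.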

A vast amount of experimental & theoretical research, some of it rather funny, has been done on iterations of this game, either between the same two players, or between various pairs of players in a group who can remember each other's past behavior.  Thus, defectors can be chastened by the mistrust of others, and it turns out that, in various models, a stable pattern of cooperation may emerge.  This line of investigation is interesting in that it suggests how cooperation of various kinds might have evolved within the Darwinian struggle for life.  The article mentioned contains some odd surprises.
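To get a feel for how cooperation stabilizes under iteration, here is a toy simulation in the style of Axelrod's famous tournaments; the particular strategies, payoff units, and round count are illustrative choices of mine, not taken from Hayes's article:

```python
def tit_for_tat(my_hist, their_hist):
    """Cooperate on the first round, then copy the opponent's last move."""
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    """The 'rational' player of Luce & Raiffa."""
    return "D"

# Standard unit payoffs: (my score, your score) by (my move, your move).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    """Run an iterated game and return the two total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Two reciprocators settle into steady cooperation...
assert play(tit_for_tat, tit_for_tat) == (300, 300)
# ...while two unconditional defectors grind out the booby prize.
assert play(always_defect, always_defect) == (100, 100)
```

Memory of past behavior is what does the work here: the reciprocating strategy makes defection unprofitable after the first round, which is precisely the mechanism unavailable in the single anonymous game discussed next.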

The case of a single play between strangers, however, remains an embarrassment.  Hayes says


In a single game against a player you'll never meet again, there's no escape from this doleful logic.

and that seems to have been the consensus among game theorists going back to Luce & Raiffa's classical textbook (1957).  They say

Of course, it is slightly uncomfortable that two so-called irrational players will both fare much better than two so-called rational ones.  Nevertheless, it remains true that a rational player . . . is always better off than an irrational player. . . .
. . . No, there appears to be no way around this dilemma.  We do not believe there is anything irrational or perverse about the choice of [defection on both sides], and we must admit that if we were actually in this position we would make these choices.

Experiments, however, show that quite a lot of people know better.  And indeed, I believe that the remarks I have quoted, which amount to saying that the Golden Rule is contrary to reason, constitute a reductio ad absurdum of existing game theory as a model of human rationality.  If I were a mathematical logician and came up with an axiomatization of arithmetic that looked plausible on its face, but turned out to allow for a hitherto unsuspected integer >0 and <1, I would not publish it as a warning to schoolchildren & accountants; I would look for the mistake.

My suspicion is that the mistake here lies in supposing that the modeling of other players as rational analogs to oneself --- as maximizers of some value function --- can be entirely free of caring for those others: that empathy (theory of mind) & sympathy are entirely independent notions.  Clearly, they are independent to some extent:  In general, if I am wicked, I can use my insight into your state of mind to torment you; and if I believe that you are wicked, I can use it to frustrate you.  But it ought to be possible to put something in the formalism to force the players' utility functions to stick to each other in cases of common interest like the prisoners' dilemma.

In the meantime, I suggest the following thought experiment (to perform it in actuality would be expensive):  Let a couple of hundred naive subjects be recruited and assigned to random pairs to play the game once, anonymously at computer terminals, say for monetary rewards of 0, $100, $300, & $500.  After the rewards are distributed, let the company be given a free lunch, at little square tables accommodating four each.  There are place cards generated by the computer assigning to each table two of the pairs from the game, so that if I played you we are sitting across from each other.  There are three kinds of pairs (cooperators, defectors, and mixed), and so there are six kinds of tables (cooperators + cooperators, cooperators + mixed, cooperators + defectors, mixed + mixed, mixed + defectors, and defectors + defectors); the computer makes the assignment so that the six kinds are present in as nearly equal numbers as possible.  The lunch is a good one, with a choice of food & drink to encourage conviviality.  In one version, the participants might be tagged C & D; in another, they might be allowed to divulge their action or conceal it or lie about it as they chose.  A pleasant floral arrangement at the center of each table contains an omnidirectional microphone, and all conversations are recorded for subsequent study.
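The seating rule ("as nearly equal numbers as possible") is left unspecified above; one way the computer might do it is a greedy balancing pass. The function below is my own sketch of such a rule, not a prescription:

```python
from itertools import combinations_with_replacement

def assign_tables(pairs):
    """Greedy sketch of the seating rule.

    `pairs` is a list of game outcomes ('C' or 'D' for each player).
    Each table seats two pairs; of the six possible table kinds, the
    least-used feasible kind is filled first, keeping the kinds in as
    nearly equal numbers as possible."""
    kind = lambda p: "".join(sorted(p))  # 'CC', 'CD' (mixed), or 'DD'
    buckets = {"CC": [], "CD": [], "DD": []}
    for p in pairs:
        buckets[kind(p)].append(p)

    # The six table kinds: CC+CC, CC+CD, CC+DD, CD+CD, CD+DD, DD+DD.
    table_kinds = list(combinations_with_replacement(["CC", "CD", "DD"], 2))
    counts = dict.fromkeys(table_kinds, 0)
    tables = []
    while True:
        # Which table kinds can still be formed from the remaining pairs?
        feasible = [tk for tk in table_kinds
                    if (len(buckets[tk[0]]) >= 2 if tk[0] == tk[1]
                        else buckets[tk[0]] and buckets[tk[1]])]
        if not feasible:
            break
        tk = min(feasible, key=counts.get)  # least-used kind first
        tables.append([buckets[tk[0]].pop(), buckets[tk[1]].pop()])
        counts[tk] += 1
    return tables, counts
```

With equal numbers of the three pair kinds, say a dozen each, the rule yields a perfectly balanced lunch: eighteen tables, three of each of the six kinds.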

Exercise:  Imagine the conversation at one or another table.  A table with four cooperators, of course, has an easy time.  Four defectors, one may suppose, grimly congratulate each other on their rationality.  A pair of defectors lectures a pair of cooperators on how irrational they have been, and one of the latter says "Thank you for your advice; we'll be happy to rave all the way to the bank".  The mixed cases might result in some profanity.

Well this is just observing that you can never rule out the possibility of cooperating again in the future. The presumption of being able to play the game only once is historically unusual and contrary to instinct. But at the same time urban life presents many encounters of that sort.

I don't know why people are at all surprised at the rational and practical amorality of encounters with strangers; never seeing each other again seems like the very core of amorality, the condition in which morality can never work in the first place.

The presumption of being able to play the game only once may have been unusual during the first million years of the human race, but by historical times it was common enough to have attracted the attention of moralists -- e.g., in the parable of the good Samaritan. "Which, now, of these three, thinkest thou, was neighbor unto him that fell among the thieves?" (Luke 10:36). People you are unlikely to see again are to be treated like neighbors all the same! Of course, in the N.T. that advice is contaminated by millenarianism -- explicitly so in the parable of the sheep & the goats: "Inasmuch as ye have done it unto one of the least of these my brethren, ye have done it unto me" (Matt. 25:40) -- you will be rewarded right soon in heaven. But many people follow it every day without the help of any such vulgar hope. I'm sure you can supply your own examples.

Strangers cooperate in trivial cases because of instinct, but I don't get the fixation on the "paradox": yes, instinct is contraindicated by rationality, but so what? Nobody expects instincts to be rational, and having a desirable second-order effect does not make those instincts rational either. If I were a defector at a mixed table, I'd be telling each of the cooperators "hey, you could have got more money". And that's true, because their goals are individual, not collective, so the advice applies to the individual, not to the pair.

The idea that there is some escape from the tragedy of the one-shot game, other than happy accident, actually undermines morality, because the whole lesson of the one-shot game is "don't play one-shot games!". Utilitarianism as a foundation for morality is a lie; it's really just a heuristic for groups who have achieved cooperation already. So the idea that the outcome of the one-shot game is undesirable in the first place is on very shaky ground.

The foundation of ethics is not about fixing the dilemma, which is a lost cause, but about avoiding it entirely through the building of durable relationships. Utilitarianism pretends relationships don't matter, which is why promoting it as a foundation guarantees failure.

"Strangers cooperate in trivial cases": They also cooperate in nontrivial cases. The case of the good Samaritan was not trivial from the point of view of the victim, who was in bad trouble, or from that of the Samaritan, who went quite a bit out of his way to help.

One need not have recourse to antique propaganda for examples. In 1971, driving from Eau Claire, WI, to Denver, I picked up a hitchhiker somewhere in Iowa who was also headed for Denver. When the gas gauge got down to 1/4, I started looking for a filling station, but that was insufficient prudence in Nebraska on I-80 at night. One after another was closed, and I ran out of gas. I emptied my camping stove into the tank, and that got us a little way, but a couple of miles short of the next exit, at which there was an all-night truck stop. So, leaving the hitchhiker with the car, I set off on foot on the shoulder with my headlamp. Some trucks passed, but did not stop for me. At length, a car stopped & picked me up. It was my car. A driver _going east_ had seen my car on the other side, guessed what had happened, gone to the next exit, turned around, gone to the truck stop, bought a gallon of gas, made the circuit again, delivered it to the hitchhiker, and sped off without introducing himself or accepting payment. (We got to Denver at 3 a.m. He put me up at his place so I would not have to wake up the people I would be joining. He was gay, and it would perfect the story if I had gotten laid, but that would be a lie, right up against a discourse on morality %^].)

Maybe our benefactor was a Christian & remembered the parable, or maybe he had had a similar experience himself & followed Heinlein's advice: If you can't pay it back, pay it forward. Very likely, in any case, he had a more generous conception of rationality than comes naturally to decision theorists.

"Because of instinct": It seems to me that instinct pushes the other way ("I'm all right, Jack"). "In righteous defiance of instinct" is more like it. I will grant, however, that it may happen because of _habit_: unconscious generalization from situations in which reciprocation is possible.

"I'd be telling each of the cooperators 'hey, you could have got more money'": I'm afraid we would have spoiled each other's lunch.

"The foundation of ethics": I don't believe that ethics has a foundation; it is a network, not a hierarchy, except locally for expository purposes.

"Utilitarianism pretends that relationships don't matter": That is true of some extreme versions; I remember reading one such theorist, I forget who, who condemned the notion of special duties to one's family, friends, country, etc., and asked rhetorically: What is it about the mere word "my" that can set aside the dictates of ethics? But sensible people take that question seriously and try to answer it, tho it indeed raises many difficulties. (Personal) relationships do matter, but they are not all that needs to matter. Duties arise on all scales & on all levels of abstraction.

I don't think that either of your examples really qualifies as a one-shot game from the game-theory perspective. The people involved have a pretty decent chance of having some future relationship, or at least an instinct that suggests that scenario to them. So I see both the good Samaritan and your gas delivery as people acting in a plausibly rationally self-interested way. It's just that the self-interest sometimes takes some examination to discover.

Not every altruistic decision is currently explicable through selfish motives, but selfish motives are explaining seemingly altruistic behaviour in surprising ways. You seem to think there is something transcendent in a moral act, something that can't be reduced to science. I probably would make the same decisions you would in most situations, but still assume it's all reducible to entirely cold-blooded evolutionary strategy even if I didn't know the details.

It occurs to me that the one-shot dilemma is something like the frictionless plane of Newtonian physics. You are puzzled because you've never seen anything like it, and indeed, a physics based on an impossibility does seem suspect. But that doesn't make the theory wrong, merely counterintuitive.
