Traveler’s Dilemma: When it’s smart to be dumb

Some game theory paradoxes can be resolved by assuming that people adopt multiple personae and aren’t perfectly rational.

Now that the airlines are done with it, your suitcase looks like a gorilla stomped on it. The antique vase you’d packed so carefully is smashed.

GAME THEORY MATRIX: Game theory typically describes games using a matrix like the one above. In this game, you and your opponent each choose to say “yes” or “no.” The first number shows the amount you’d win and the second number shows the amount your opponent would win. If you are both playing rationally, game theory would recommend that you say “yes” (and get 6 points) and your opponent say “no” (and get 1 point), because neither of you can get more points by changing your own move while the other player’s move stays fixed. If you alone switched, your payoff would go from 6 to 4, and if your opponent alone switched, his payoff would go from 1 to 0. A play like this is called a “Nash equilibrium.” Adapted from a graphic by David H. Wolpert

IRRATIONALITY PAYS OFF: Your opponent will make more money in this game if he chooses to behave irrationally and pick “yes” or “no” at random. If you know that’s his strategy, you’re better off changing your answer to “no.” In that case, half the time you’ll get 4 points and half the time you’ll get 5, for an expected payoff of 4.5. If you said “yes,” your expected payoff would be only 3 (the average of 6 and 0). And because your move is now fixed at “no,” his expected payoff is the average of 5 and 6, or 5.5. If he had played rationally, on the other hand, you would have gotten 6 and he would have gotten 1. So you’ll get less money, and he’ll get more. Adapted from a graphic by David H. Wolpert
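For readers who want to check the captions’ arithmetic, here is a small sketch in Python. The payoff numbers come straight from the two captions; the one detail they leave ambiguous (how the opponent’s 5 and 6 split between his two moves when you play “no”) is filled in by assumption, chosen so that the equilibrium described above is the only one.

```python
# Payoff matrix reconstructed from the captions:
# payoffs[(your_move, his_move)] = (your_points, his_points)
payoffs = {
    ("yes", "yes"): (0, 0),
    ("yes", "no"):  (6, 1),
    ("no",  "yes"): (5, 5),   # his 5 here (vs 6 below) is the assumed detail
    ("no",  "no"):  (4, 6),
}
moves = ("yes", "no")

def is_nash(you, him):
    """Neither player can gain by changing only their own move."""
    mine, his = payoffs[(you, him)]
    return (mine == max(payoffs[(m, him)][0] for m in moves) and
            his  == max(payoffs[(you, m)][1] for m in moves))

print([cell for cell in payoffs if is_nash(*cell)])   # [('yes', 'no')]

# Now suppose he commits to choosing at random, and you know it.
for you in moves:
    exp_you = sum(payoffs[(you, him)][0] for him in moves) / 2
    exp_him = sum(payoffs[(you, him)][1] for him in moves) / 2
    print(you, exp_you, exp_him)
# "no" gives you 4.5 (vs 3.0 for "yes") and lifts his average to 5.5;
# better for him than the 1 point he gets at the rational equilibrium.
```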

Never fear, the airline representative reassures you. The airline will reimburse you for the value of the vase. You just have to agree on what it’s worth, and the representative has a scheme to figure it out. One of your fellow passengers, it turns out, had an identical vase, which is now identically smashed. Each of you will say what you think the vase is worth, between $2 and $100.  If the two prices are the same, the representative will assume you’re telling the truth. If the values differ, he’ll figure the lower value is accurate and reimburse each of you that amount — with a $2 bonus for “honesty” for the lower bidder and a $2 penalty for the higher one.
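If it helps to see the representative’s rule spelled out, here is a minimal sketch in Python of the payout as described above (whole-dollar bids between $2 and $100, with the $2 bonus and penalty):

```python
def payout(my_bid: int, her_bid: int, reward: int = 2) -> int:
    """My reimbursement under the representative's scheme."""
    if my_bid == her_bid:
        return my_bid                # matching claims are taken at face value
    low = min(my_bid, her_bid)
    if my_bid == low:
        return low + reward          # honesty bonus for the lower bidder
    return low - reward              # penalty for the higher bidder

print(payout(100, 100))  # 100
print(payout(99, 100))   # 101: undercutting her by a dollar pays off
print(payout(100, 99))   # 97:  being the higher bidder costs you
```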

What will you say the vase is worth?

Being the greedy type, you first figure you’ll bid $100. But then a sneaky thought occurs to you. If you bid $99 instead, you’ll come out at least as well, no matter what your fellow passenger does. Here’s your reasoning: First suppose she bids $100. Then you’ll get $99 plus the $2 bonus — better than the $100 you’d get by bidding $100. If she bids $99, you’ll get $99 rather than the $97 (her $99 bid less the $2 penalty for bidding higher than she did). And if she bids anything less than $99, you’re stuck with her bid less the penalty, which is just the same as you’d have gotten for bidding $100.
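A quick check of that argument, using the same payout rule sketched above: against every bid the other traveler could make, $99 earns at least as much as $100, and strictly more when she bids $99 or $100.

```python
def payout(mine, hers, reward=2):
    if mine == hers:
        return mine
    low = min(mine, hers)
    return low + reward if mine == low else low - reward

# $99 never does worse than $100, whatever she bids...
assert all(payout(99, hers) >= payout(100, hers) for hers in range(2, 101))
# ...and does strictly better when she bids $99 or $100.
print(payout(99, 100), payout(100, 100))   # 101 vs 100
print(payout(99, 99), payout(100, 99))     # 99 vs 97
```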

You open your mouth to say $99… and then close it. You know who your fellow passenger is — you sat next to her on the flight. She is both wickedly smart and ruthless, and this analysis won’t have escaped her. She’s got to know that a $99 bid will get her at least as much money as a $100 one, so she definitely won’t bid $100. In that case, you’re better off bidding $98, for the same reason that $99 was better than $100.

But wait! She’ll know that too. Better bid $97… No, $96… No, $95…

Follow this reasoning, and your bid will spiral down to $2. And according to game theory, which assumes that all the players in a game are both perfectly rational and perfectly selfish, that’s just what you should do.
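The spiral is easy to reproduce. In the sketch below, your best reply to an opponent you expect to bid some amount is found by brute force; iterating that reply from $100 walks straight down to $2.

```python
def payout(mine, hers, reward=2):
    if mine == hers:
        return mine
    low = min(mine, hers)
    return low + reward if mine == low else low - reward

def best_reply(hers):
    """The bid that maximizes your payout if you knew she would bid `hers`."""
    return max(range(2, 101), key=lambda mine: payout(mine, hers))

bid, path = 100, [100]
while best_reply(bid) != bid:
    bid = best_reply(bid)
    path.append(bid)
print(path[:4], "...", path[-3:])   # [100, 99, 98, 97] ... [4, 3, 2]
```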

But that’s crazy!

Indeed, when economists have tested this scenario, called the Traveler’s Dilemma, on real people in the lab, the players hardly ever follow the game theory prescription. The average payment turns out to be not far from $100. Most people choose the maximum bid — even though $99 always brings in as much money, and sometimes more. The only situation in which people played the “rational” strategy that game theory predicts was when the penalty and reward were made very large.

To find out whether that’s just because people don’t do the analysis, a research team repeated the experiment using professional game theorists playing for real money. But even among game theorists, game theory failed: Nearly a fifth chose $100, and two-thirds chose $95 or higher. “Game theorists don’t believe their own theory,” says David Wolpert of NASA Ames Research Center, one of the researchers on the study.

It’s hardly surprising that people sometimes behave irrationally. The odd thing, though, is that in this case, irrationality makes people richer. So sometimes, it’s smart to be dumb.

Since Kaushik Basu of Cornell University created the Traveler’s Dilemma in 1994, economists have posited that people play the game as they do because they’re altruistic, or well-socialized, or just haven’t reasoned it out. The harder question, though, is how to fix up game theory so that it can explain and predict how people find and agree upon these better solutions.

Now Wolpert and his colleagues have come up with a solution, one that may have the power to resolve many other game theory paradoxes. They figure that people adopt different personae for different circumstances.

Here’s one example. Have you ever known anyone who you just knew was smart, but who consistently acted ditzy? As a result, you have no choice but to treat the person like a ditz. And here’s the aggravating part: sometimes, precisely because you’ve adjusted your behavior to compensate for that person’s foolishness, the dingbat comes out better than you do! Wolpert says he can explain that using game theory. For certain games, ignoring what’s in your own apparent self-interest and choosing a play at random can increase your own expected payoff and decrease your opponent’s. The player is adopting a “persona.”

The key thing for this to work is that you need to know which persona your opponent is using and know that your opponent is going to stick with it no matter what. If you know that your opponent really will choose randomly, then it may be in your interest to change your play in response.

When Wolpert and his colleagues applied this theory to the Traveler’s Dilemma, it perfectly explained the data on how real people play the game. If you know your opponent is going to be irrational and choose randomly, you’re best off playing rationally and bidding around $97 or $98. Most likely, you’ll bid higher than she does, so you’ll pay the penalty. But that’s better than bidding less than she does and losing the entire difference between your bid and hers.
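Here is a toy version of that calculation. It is not Wolpert’s actual persona model: it simply assumes the opponent picks any whole-dollar bid from $2 to $100 uniformly at random and asks which bid maximizes your expected payout.

```python
def payout(mine, hers, reward=2):
    if mine == hers:
        return mine
    low = min(mine, hers)
    return low + reward if mine == low else low - reward

bids = range(2, 101)

def expected(mine):
    """Average payout if her bid is drawn uniformly from $2 to $100."""
    return sum(payout(mine, hers) for hers in bids) / len(bids)

best = max(bids, key=expected)
print(best, round(expected(best), 2))
# Under this assumption the expected payout peaks around $96-$97, roughly
# the neighborhood of the $97 or $98 described above.
```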

Furthermore, as soon as you admit the possibility that your opponent might not play rationally, the $2 play becomes silly. Bidding $2 only makes sense when you and your opponent are certain that you’re each perfectly rational and selfish. If you’re not certain of that, Wolpert and his colleagues calculate that players will bid in the high 90s about three-quarters of the time—which is very similar to what experiments show.

The researchers then tried making the penalty and reward bigger than $2. As the penalty grew, the optimal bid against an irrational opponent shrank. Eventually, it dropped all the way down to $2, just as in the experiments.
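Repeating the same toy calculation while sweeping the size of the bonus and penalty (again assuming a uniformly random opponent, a simplification of the researchers’ model) shows the same pattern: the best bid slides down and eventually hits the $2 floor.

```python
def payout(mine, hers, reward):
    if mine == hers:
        return mine
    low = min(mine, hers)
    return low + reward if mine == low else low - reward

bids = range(2, 101)
for reward in (2, 5, 10, 20, 40, 60):
    best = max(bids, key=lambda mine: sum(payout(mine, hers, reward) for hers in bids))
    print(reward, best)
# The optimal bid falls from the mid-90s at a $2 reward toward $2 itself
# once the reward gets large enough.
```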

Wolpert admits that their model is crude in that it adopts only two rigid personae (random and perfectly rational), and he doesn’t claim that the model captures our conscious reasoning process as we play the game. But the model does manage to encapsulate the combination of rationality and irrationality that we expect in other humans. “Your experience with other human beings is that they’re not perfectly rational and, intuitively, this means they don’t go down 100 steps of ratcheting,” Wolpert says. “The average of being rational and irrational is to go down only a couple of steps.”

“It’s a very exciting new direction,” says Basu. Personae, he says, have the potential to shed light on many of the famous paradoxes in game theory. Wolpert and his colleagues have already successfully applied their theory to two other games, the Ultimatum Game and a generalized version of the Prisoner’s Dilemma.
