Early on, Andrew treats arrival time as randomly distributed: your mean arrival time is 1:00 pm, with some chance of arriving early and some chance of arriving late. But later, Andrew treats your arrival time as a matter of choice: you choose to arrive at 1:10 pm because that maximizes your expected utility. The problem is that if I choose *my* own arrival time, then I can also predict that you will choose *your* own arrival time. It's therefore incorrect for me to base my expected utility calculations on the assumption that your arrival time is randomly distributed with a mean of 1:00 pm. And the same goes for your calculations.

To make the model consistent, we should assume that what each individual chooses is the time to *aim* at. The chosen time then becomes the mean of a random distribution. Thus, if you choose to aim for 1:10 pm, there's some chance you'll arrive "early" at 1:00 pm (the actual time we agreed upon), and some chance you'll arrive late at 1:20 (which is *especially* late relative to the agreed-upon time). Performing the very same calculations that Andrew used to show that my utility is maximized by arriving (technically, aiming to arrive) at 1:10 pm, I can show that my utility is actually maximized by aiming to arrive at 1:20 pm. Why do our calculations differ? In Andrew's approach, each chooser naively assumes the other guy is aiming for 1:00 pm, with any deviation resulting only from randomness. In my approach, you assume the other guy is doing the same self-interested calculation you are.

But the process doesn’t stop there. The other guy, predicting that I will aim for 1:20 pm, will rationally aim for 1:30 pm. And I can predict this, so I will rationally aim for 1:40 pm… Where does the process stop? The answer depends on how many rounds of double think we are willing to permit in our model. If we allow infinitely many iterations, there is no equilibrium to this game. Neither of us will ever show up. On the other hand, if we stop at (say) three iterations (each of us does the calculation three times), then we both arrive at 1:30 pm.
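The iteration above is easy to sketch in code. This is only an illustration of the "double think" regress, under the assumption (taken from the text) that each added layer of anticipation shifts the best-response arrival time 10 minutes later; the function name and parameters are mine, not Andrew's.

```python
def best_response_aim(start_minutes=60, shift=10, rounds=3):
    """Aim time (in minutes past noon) after `rounds` layers of anticipation.

    start_minutes: the agreed time, 1:00 pm = 60 minutes past noon.
    shift: how far each layer of anticipation pushes the best response.
    """
    aim = start_minutes
    for _ in range(rounds):
        aim += shift  # predict the other guy's aim, then aim one shift later
    return aim

# With three iterations each, both players aim for 1:30 pm.
print(best_response_aim(rounds=3))  # 90 minutes past noon = 1:30 pm
```

With no cap on `rounds`, the aim time grows without bound, which is just the "neither of us will ever show up" non-equilibrium in loop form.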

It might seem logical to limit the number of iterations, since human beings have finite cognitive resources. But if you just read the previous two paragraphs and understood them, then you’ve effectively just done an infinite number of iterations in shorthand form. So we’re back to no equilibrium (except, perhaps, an equilibrium that involves deliberate randomization). Alternatively, we might suppose that the utility figures change as the time gets later and later – I get hungrier and hungrier, and care less and less about companionship. As a result, there might be some time late enough that both players will aim for it, since the gains from eating sooner just barely outweigh the expected gain from avoiding any possible wait time. (In Andrew’s payoff matrix, that would mean the payoffs on the main diagonal are not all the same.) Making this conclusion rigorous, however, would require a more complex game theoretic model than I’m willing to devise right now.
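The hunger story can be made concrete with a toy model. The numbers below are hypothetical (they are not Andrew's payoffs): I assume a constant expected benefit from delaying another 10 minutes (avoiding a possible wait) and a hunger cost that grows linearly with how late the aim time is. The iteration then stops at the first time where delaying no longer pays.

```python
def equilibrium_aim(start=60, shift=10, wait_benefit=5.0, hunger_rate=0.1):
    """Iterate best responses until delaying another `shift` minutes no longer pays.

    start:        agreed time in minutes past noon (1:00 pm = 60).
    wait_benefit: assumed constant expected gain from aiming `shift` min later.
    hunger_rate:  assumed per-minute growth in the cost of eating later.
    """
    aim = start
    # Delay only while the benefit of waiting exceeds the hunger cost of
    # eating (aim + shift - start) minutes after the agreed time.
    while wait_benefit > hunger_rate * (aim + shift - start):
        aim += shift
    return aim

print(equilibrium_aim())  # 100 minutes past noon = 1:40 pm
```

With these made-up numbers the regress halts at 1:40 pm; with a steep enough `hunger_rate`, neither player delays at all and both aim for the agreed 1:00 pm.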

## 5 comments:

This article is a very clear example of what I find disturbing about this type of analysis. In short, it misses the big picture. This big picture is called "ethics".

I arrive on time because I SAID I WOULD ARRIVE ON TIME. When I say lunch is at 12:30, I arrive at 12:30, barring some great catastrophe. If my partner is half an hour late, I leave. If they are chronically late, I don't have lunch with them anymore.

The result of this, over the long term, is that none of my lunch partners are ever late. I implicitly uphold standards of accountability in my friends, and as a result I am rarely disappointed. This approach - which is obviously better than the non-equilibrium gamesmanship that results in nobody ever having lunch - apparently transcends the power of game theory to analyze. And this is not a small point; when game theory is used to decide issues of economics or military strategy, too often the baby is being thrown out with the bathwater.

I believe that, in a sufficiently complete analysis, all the ethical principles we hold dear would be upheld. But it is not, in practice, possible to conduct such an analysis. Fortunately, we are endowed with an ethical instinct that allows us to proceed in social situations without the help of mathematicians, and to do so more effectively than their analyses would suggest.

-Tony

Tony -- You're right that ethics make a difference. But to me, the interesting question is why we have the ethics we do. Mathematical/game-theoretic analysis like this can help explain why ethical rules (i.e., social enforcement) are necessary in the first place, and why such rules are needed in some situations but not in others.

Glen: I think we are more or less on the same page. But with Rand's 100th birthday just past, I suppose I'm overly sensitive to arguments along the lines of "ethics doesn't match my rationalistic analysis, therefore ethics is wrong." Which, IMHO, is why Rand's philosophy is pretty much useless.

I think it's true that game theory plus evolutionary psychology is the "correct" foundation for ethics, insofar as they describe reasonable ground rules for what unfolds in actual societies. But, just as first-principles QM analyses of even simple systems (like the bulk properties of water) are fiercely difficult, if not impossible, it may be that game theory cannot bridge the gap between first principles and actual human behavior.

So long as one keeps this principle in mind, it is an interesting and often enlightening exercise to try to bring the two together. But the complexity of the situation goes far, far beyond the "more complex game theoretic model" you allude to in the last paragraph, and should not be underestimated!

I think you are absolutely right. I made a similar comment, but not as elegantly as you.

I've posted an update that attempts to address some of Glen's objections:

http://the-idea-shop.com/article/75/punctuality-is-inefficient-qed
