Tuesday, September 21, 2004

Newcomb's Paradox: Being and Doing

I just encountered a decision problem that I’d never heard before, though I’m guessing philosophers know all about it. I found it in a book by psychologist George Ainslie, who quotes it from philosopher Robert Nozick, who credits it to physicist William Newcomb. Here is my paraphrase of the problem:
A Powerful Being with the perfect ability to predict your choices sets two boxes in front of you. In Box A, there is $1000. In Box B, there is either $1 million or nothing. You have two options: either (1) collect the contents of both boxes, or (2) collect the contents of Box B only. If the Being predicts that you’ll choose (1), he puts nothing in Box B. If he predicts you’ll choose (2), he puts $1 million in Box B. The Being fills the boxes before you make your choice, but again, he can perfectly predict what you’ll do. So what do you do?
I won’t try to discuss everything that’s brain-twisting about this story, such as its implications for free will and determinism. I’ll just point out the paradoxical part. In choosing, you have to ask yourself, “What kind of person am I?” And no matter what you answer, you’ll want the answer to be something else at some point in the process. If you are a Box-B-Chooser, then the Being will put money in both boxes, and then you’ll wish you could become a Both-Boxes-Chooser after the Being fills the boxes (because $1 million + $1000 is better than $1 million). If you are a Both-Boxes-Chooser, then the Being will put money only in Box A, and then you’ll wish you could become a Box-B-Chooser before the Being fills the boxes (because that will cause the Being to put $1 million in Box B after all).
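To tabulate the only two consistent outcomes (taking the Being’s prediction as always correct):

Box-B-Chooser: Box B holds $1,000,000, so you collect $1,000,000 -- and wish, too late, that you could grab both boxes ($1,001,000).
Both-Boxes-Chooser: Box B is empty, so you collect $1,000 -- and wish you had been a Box-B-Chooser all along ($1,000,000).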

As fun as that problem is, I also like this simpler variation, which is actually my original (mistaken) interpretation of the story. (I don’t know if Nozick or Newcomb considered it; Ainslie did not.) In this version, the Being is somewhat more malevolent:
A Powerful Being with the perfect ability to predict your choices sets two boxes in front of you. In Box A, there is $1000. In Box B, there is either $1 million or nothing. You can choose Box A or Box B, but not both. If he predicts you’ll choose Box A, he puts $1 million in Box B. If he predicts you’ll choose Box B, he puts nothing in Box B. The Being fills the boxes before you make your choice, but again, he can perfectly predict what you’ll do. So what do you do?
Here, the paradox is more straightforward, because there’s no difference in when (before or after the Being fills the boxes) you want to change who you are. If you are a Box-A-Chooser, the Being puts $1 million in Box B, and you’ll therefore want to become a Box-B-Chooser after the boxes are filled (to get $1 million instead of $1000). If you are a Box-B-Chooser, the Being puts nothing in Box B, and you’ll want to become a Box-A-Chooser after the boxes are filled (to get $1000 instead of nothing).

My brain hurts.

8 comments:

Anonymous said...

So what does the Being do if it predicts that you'll make your choice based on a flipped coin?

On an unrelated note, is there some way to disable or eliminate the whole "sign in to post a comment" thing? Signing in is a pain (hence this 'anonymous' post).

Jason B.

KipEsquire said...

Maybe I'm missing something, but how is this different from any plain vanilla prisoners' dilemma problem? Just seems like you're tweaking the nomenclature (i.e., going from a powerful being manipulating two players to a powerful being becoming one of the players). But the game-theoretic implication is the same, no?

"Thank God I'm only watching the game...controlling it!" --One Night in Bangkok

Glen Whitman said...

It's not the same as the PD, for a couple of reasons. First, one of the players is a perfect predictor of the other's action, and that player plays by a fixed rule. Second, the game is sequential (the Being acts first, and then you choose), whereas the PD is simultaneous. Third, there's no dominant strategy as in the PD game. Fourth, the PD game involves no paradox -- an unfortunate conflict between individual rationality and social efficiency, but no paradox. Here, there is a paradox of choice -- whatever you do, you'll wish you could have done (or were the type of person to do) the other thing.

If I were going to pick a canonical game this one is most similar to, I'd pick Matching Pennies or One-Two-Three Shoot. But even with those, the implications are not the same, because the perfect-predictor Being mucks up the usual logic.

Jason -- even if I flip a coin, I'll have to decide whether to follow its outcome. The Being can perfectly predict whether I'll do so!

Anonymous said...

Can't you avoid the metaphysical muckymuck if you pose the problem as having to write a (deterministic) computer program that makes the choice, and the Powerful Being is allowed to read the program before you run it? Although the program would be trivial in this case, it seems to avoid all the complications of "choice" and connects the game to plausible, real situations.
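Something like this toy Python sketch, say (purely illustrative; the function names and the string labels are arbitrary, and the "Being" here just runs the submitted program in advance as its way of reading it):

```python
# Toy version of the framing: the chooser is a deterministic
# program, and the Being gets to inspect it before filling the boxes.
def being_fills_boxes(strategy):
    # The Being "reads" the program by simply running it in advance.
    prediction = strategy()
    box_b = 0 if prediction == "both" else 1_000_000
    return 1_000, box_b

def play(strategy):
    box_a, box_b = being_fills_boxes(strategy)
    choice = strategy()  # deterministic, so it always matches the prediction
    return box_a + box_b if choice == "both" else box_b

print(play(lambda: "both"))    # 1000
print(play(lambda: "b_only"))  # 1000000
```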

I guess I don't see why this game is interesting. It's simply a game where you always lose, because in effect you can't conceal your choice. Isn't it equivalent to the situation where you have to make and reveal your choice before the Powerful Being makes his? What does the mind-reading angle add to the problem?

Anonymous said...

There is a tension.

The tension is between two thoughts:

1) The Powerful Being's (PB) action lies in the past, and therefore does not depend on what I actually do. It is worth noting that this is how the world really works, and therefore the intuition is rightly strong.

2) The PB knows perfectly what I will do, and therefore what he has done indeed depends (if not causally, then logically) on what I will do. If I will do A, then PB has already done X. If I will do B, then PB has already done Y. This is introduced into the story by authorial fiat and the world does not work this way.

So there is a strong tension between what we know from all our life experience to be the case, and what we know from life to be not the case but is declared to be the case in the story.

In real life there are only two ways to know the future:

1) To observe it (in which case it must by then be the past)

2) To calculate it separately (i.e., to model it). But since calculations depend on knowing all the inputs, which can't be known in normal cases like predicting a person's decisions, there is no way in this universe to calculate the ordinary future perfectly. The only way really to know the ordinary future, i.e., the future outside of strictly controlled experiments where unknowns are kept to a minimum, is to wait around for it to happen and observe it.

The thought experiment is physically impossible even if not logically contradictory. But if it is physically impossible, then it should be allowable to question the physical declarations of the story, in particular, the declaration that the PB's actions really do lie in the past. The author declares that they lie in the past, but since we have abandoned physical reality, what is the past? The story abandons physical reality and declares that the PB's actions depend perfectly on our own actions, and moreover are unknowable to us until we act. Huh - just like the future. The only thing about the PB's action that lies in the past is the author's declaration that it lies in the past. But the key elements of the action (that it is caused by our action and is unknowable to us until we have already acted) point to the conclusion that the PB's action really lies in the future.

But once we allow that the PB's action really lies in the future despite what the author says, the paradox disappears. The paradox therefore depends on the authorial declaration that the PB's action lies in the past. Its being in the past plays no real role in building up the story. The PB's action doesn't affect our choice about which box to open, since we're unaware of it, so as far as the story is concerned, it could equally well lie in the past or in the future of our decision. The only role played by the authorial declaration is to create the paradox in the reader's head. And it does this by playing on our intuitions of what the past is, on what it means for something to lie in the past. One thing it means for something to lie in the past is that it does not depend on what we do now. That is a fundamental intuition. And that intuition blatantly contradicts what is explicitly declared in the story, i.e., that the PB's action does indeed depend on what we do now.

So we might view the story as surreptitiously slipping in a blatant contradiction, one idea stated explicitly, and the other, contradictory idea, embedded in the detail of chronology.

The dilemma that the hypothetical chooser is faced with simply does not and cannot arise in the real world, because it depends on the idea that what we choose now will affect what some omniscient PB will have done in the past, and that therefore we need to choose carefully in order that the PB will have done what it is that we want him to have done. That occasion simply never arises.

Anonymous said...

I must be missing something in reading the problem.

I see two situations: either you know how the Powerful Being operates, or you do not.

If you know how the Powerful Being operates you'll always choose Box B for the $1 million.

If you don't know how the Powerful Being operates, you'll always choose Box A+B (why lose the $ in Box A?).

Walter

Anonymous said...

If the higher being is in fact a PERFECT predictor of what you will choose, and there is only one action he will take for each choice you make, there is one and only one outcome for each choice you make.

In other words, the problem can be simplified as follows:

You have a choice between two boxes, Box C and Box D. Box C always contains $1000, Box D always contains $1 million. Which should you pick?

And the solution is then extremely obvious...

Blar said...

Here's one variation on the problem that might drive home the paradox for some people: suppose that this whole choice scenario is happening on stage in front of a live studio audience. And each box has one transparent side, which is facing the audience, so that they can all see how much money is in each box. However, the person making the choice cannot see the audience or receive any communication from them - he's still in ignorance.

Now, if you're in the audience looking at the money and the Chooser, it would be reasonable for you to think "Well, I see how much money is there. He might as well take all of it." This corresponds with a very sensible and intuitive way of thinking about decisions. The world is how it is, and my decisions can't change the past, so if I know that some course of action (choosing both boxes) will have the best result for me, regardless of which state the world is currently in (Box B empty or with $1 million), then I should choose that action. Choosing both boxes is a dominant response, because it'll net you the most money possible given the current state of the world, regardless of how the Being has distributed the cash.

The paradox is that there's another very sensible and intuitive way of thinking about decisions. You look at each possible course of action that you could take, you estimate what the likely results of each action will be, and you choose the action with the best likely consequences (like the one that maximizes expected utility). In this case, you're pretty confident that if you choose both boxes, you'll end up with $1000, and if you choose only Box B, you'll end up with a cool million. This is true even if the Being is only a very good predictor, and not perfect. So take Box B and join the millionaires' club.
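(Here's a minimal Python sketch of that expected-value comparison, assuming the Being predicts your choice correctly with some probability p; the story's perfect predictor is just the p = 1 case. The function name is mine, not anything from the original problem:)

```python
# Expected payoff of each choice when the Being predicts your
# choice correctly with probability p.
def expected_values(p, box_a=1_000, box_b=1_000_000):
    # One-boxers get Box B's million only when predicted correctly.
    ev_one_box = p * box_b
    # Two-boxers always get Box A, plus the million in the cases
    # where the Being wrongly predicted one-boxing.
    ev_two_box = box_a + (1 - p) * box_b
    return ev_one_box, ev_two_box

print(expected_values(1.0))   # (1000000.0, 1000.0) -- the story's perfect predictor
print(expected_values(0.75))  # (750000.0, 251000.0) -- one-boxing still wins easily
print(expected_values(0.5))   # (500000.0, 501000.0) -- only a coin-flip predictor favors two boxes
```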

Even though a lot of people seem to have a firm intuition about this problem, one way or the other, their beliefs generally aren't as fixed as they think. If you're a two-box chooser in the original scenario, what would you do if Box A had $2 and Box B either $0 or $1,000,000? If you're a Box B Only kind of person, what would you do if Box A had $999,998 and Box B either $0 or $1,000,000? Are you really so confident in your reasoning that you'd stick with it when it will only net you two bucks if it's right and it could cost you a million dollars if you're wrong?
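(To put numbers on those variants with the same expected-value sketch as above: with a predictor of accuracy p, taking only Box B wins exactly when p * B > A + (1 - p) * B, i.e. when p > 1/2 + A/(2B), where A is the sure amount in Box A and B is the $1,000,000 possibly in Box B. A quick check, with a function name of my own choosing:)

```python
# Minimum predictor accuracy at which one-boxing beats two-boxing:
#   p * B > A + (1 - p) * B   =>   p > 1/2 + A / (2 * B)
def breakeven_accuracy(box_a, box_b=1_000_000):
    return 0.5 + box_a / (2 * box_b)

print(breakeven_accuracy(1_000))    # 0.5005    (the original $1000 in Box A)
print(breakeven_accuracy(2))        # 0.500001  (the $2 variant)
print(breakeven_accuracy(999_998))  # 0.999999  (the $999,998 variant)
```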