For a start, take Alex’s hypothetical, posed to him (and Robin Hanson) by a philosopher:
Suppose that you had a million children and you could give each of them a better life, but only if one of them had a very, very terrible life. Would you do it?

Alex and Robin, both economists, said "yes" without hesitation. But to the philosopher who posed the question, the intuitively correct answer was clearly "no." Who's right? Alex suggests that we should simply overcome our gut intuitions and think logically here, and the logical answer is to favor the greatest good. Will and Carina both correctly reply that Alex, too, is relying on intuition – the utilitarian intuition that we can (in some rough-and-ready way) compare satisfaction across persons. So we can never fully escape the appeal to intuition.
But does that observation dispose of the matter? Can we just pick our favorite intuition and run with it? Alex is still right that our intuitions are inconsistent. One intuition tells us that individuals have personal claims that should never be violated. Fine. But another intuition tells us it's absurd to impose monstrous losses for minuscule gains. The lifeboat situation is a classic example of where this alternative intuition kicks in. See my post on lexicographic preferences for more on this point.
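To make the contrast concrete, here is a minimal sketch in Python of what a strictly lexicographic "rights first" preference looks like. The outcomes and numbers below are entirely hypothetical – the linked post contains no code – but the structure is the point: any rights violation, however small, outweighs any utility gain, however large. Python's built-in tuple comparison happens to be lexicographic, which makes the ordering easy to exhibit.

```python
# A strictly lexicographic "rights first" preference: rank outcomes first by
# number of rights violations, and consult total utility only to break ties.
# All values below are hypothetical, chosen purely for illustration.

def rights_first_key(outcome):
    violations, total_utility = outcome
    # Python compares tuples lexicographically, so this key encodes the
    # ordering directly (utility is negated because more utility is better).
    return (violations, -total_utility)

# Each outcome is (rights violations, total utility).
status_quo = (0, 100)          # no violations, modest aggregate welfare
huge_gain  = (1, 1_000_000)    # one violation buys enormous welfare

best = min([status_quo, huge_gain], key=rights_first_key)
print(best)  # (0, 100) -- the lexicographic chooser refuses the trade,
             # no matter how large we make the utility gain.
```

The competing intuition in the text is the mirror image: it balks when avoiding an arbitrarily small cost in the first coordinate means forgoing an arbitrarily large gain in the second – which is just what the lifeboat case dramatizes.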
Moreover, it turns out the intuition that answers “no” to Alex’s hypothetical is highly dependent on the frame of reference. Consider a variant on the story:
Suppose that you could give one child a substantially better life, but only if a million other children had a worse life. Would you do it?

I'm guessing most people's intuition says "no" to this question as well – thereby dictating the opposite result. The hypotheticals differ only because the language implies actively changing a status quo that – for some reason – we assume creates a rightful claim by one or more children. But in either hypothetical, you're really being asked to choose between two situations:
A. One million children live relatively good lives, and one child lives a terrible life.
B. One million-and-one children live so-so lives.

Now what does your intuition tell you? Without a frame implying that A or B is the status quo, the answer doesn't seem so obvious. Maybe I'm projecting, but I suspect most people would appeal to some form of utilitarian reasoning here (possibly asking questions about just how bad "terrible" is, what "so-so" is like, etc.).
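To see how that utilitarian reasoning would actually run, here is a bare-bones sketch. The per-child utility levels are placeholders I've simply assumed; the hypothetical itself supplies no numbers, and the verdict depends entirely on them.

```python
# A bare-bones utilitarian comparison of situations A and B. The per-child
# utility levels are hypothetical placeholders, not anything from the
# original hypothetical; once the framing is stripped away, the choice
# reduces to a sum on each side.

GOOD, SO_SO, TERRIBLE = 80, 50, 5   # assumed utility per child

situation_a = 1_000_000 * GOOD + 1 * TERRIBLE   # a million good lives + one terrible life
situation_b = 1_000_001 * SO_SO                 # a million and one so-so lives

print("A:", situation_a)   # 80000005
print("B:", situation_b)   # 50000050
print("Utilitarian verdict:", "A" if situation_a > situation_b else "B")  # A
```

Drop TERRIBLE low enough, or close the gap between GOOD and SO_SO, and the verdict flips – which is exactly why asking how bad "terrible" is, and what "so-so" is like, does real work in the comparison.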
So apparently the frame matters to us. But where does this frame come from? Why do we privilege the presumptive status quo? Take Alex’s original hypothetical, but add in some extra context:
The million children who stand to gain are the starving and oppressed people of an African nation. The one child who stands to lose is the son of the local tyrant.

Feel free to modify the context to explain why the tyrant's son's life will suck after his dad is dethroned. With the context filled in, I suspect most who said "no" before will say "yes" now. Why? Because we no longer think the status quo creates any rightful claims to its continuation.
Now, none of my reasoning thus far “proves” utilitarianism is the right moral theory. But the discovery of anti-utilitarian intuitions doesn’t disprove it, either. The question to ask now is, why do we care about the frame of reference? Why does it matter whether the society in question is a tyranny or something else? Once our intuitive commitment to particular frames is challenged, the hypothetical begins to look a lot like the A vs. B choice I posed above.
And just to be clear, we must challenge the frames, because that is the central question of ethics and political philosophy. What rights do people have, if any? What governmental and institutional structures are best? How should initial endowments be determined? Appealing to gut intuition is insufficient here, because the whole issue is which of our conflicting intuitions ought to be trusted, and under what circumstances. In this process, it’s useful to realize that not all intuitions are fundamental or axiomatic. Some have their origin in the interaction between more basic intuitions and facts about the world, such as the degree of scarcity or the typical costs and benefits of alternate courses of action. This is where utilitarian reasoning proves most useful. Now, that doesn’t mean that only utilitarian intuitions matter, but assuredly they ought to play a role – I’d recommend a relatively large one – in reaching our reflective equilibrium.
2 comments:
Will says:
"But in either hypothetical, you’re really being asked to choose between two situations:
"A. One million children live relatively good lives, and one child lives a terrible life.
"B. One million-and-one children live so-so lives."
I think this is a poor gloss of the earlier hypothetical. The example was that one state of affairs is causally connected to another. Giving the one child a terrible life is what CAUSES the million good lives. The one thing is the COST of the other.
This example raises an important issue: ridiculous thought experiments like this one are where our intuitions are almost sure to be weakest and most fallible. Our ability to dynamically determine, say, when it would be unfair to keep talking rather than letting someone else have a turn is generally a matter of intuition. And our intuitions about this sort of thing tend to be so reliable that the quality of our judgments and the way they contribute to smooth social coordination is almost invisible to us. (Or, it is glaringly obvious when someone has poor judgment and insists on unfairly hogging the floor.) But it should not be surprising that our capacities of moral judgment often fail us when considering counterfactuals that we will never face.
Part of a moral theory in wide reflective equilibrium is a theory of the conditions for the reliability of the moral capacity, so that moral judgments can be properly weighted and prioritized in cases of incompatibility.
The disagreement between Temkin on one side and Robin and Alex on the other simply shows that intuition can be conditioned by training. My own view is that economists' training tends to screw up their intuition in certain kinds of cases, which is not the same thing as transcending intuition. And philosophers who attempt simply to systematize "natural" moral intuition almost always go awry when applying moral judgment to phenomena of large-scale social interaction that our untutored capacities are ill-equipped to handle. Economists are good at this stuff, but then inappropriately generalize and/or reify their methodologically useful tricks and heuristics.
Glen says:
Will: There's cause-and-effect even in my gloss of the situation. If the only options are A and B, and you choose A, then you cause not-B. (If A would have happened by itself, then your inaction causes not-B.)
I presume the reason you think cause-and-effect matters is that you care about the frame of reference – that is, you assume the status quo ante has some preferred position that you're causing to be altered. But as I said in the original post, you can't just take the status quo (or frame of reference) as given – first, because additional context may clarify that the status quo has no special claim to correctness after all; and second, because the big ethical/political question is what sort of frames we ought to accept as valid in the first place.
I do agree, however, that weird hypotheticals like these are likely to be where our intuitions are weakest.