For a start, take Alex’s hypothetical, posed to him (and Robin Hanson) by a philosopher:
Suppose that you had a million children and you could give each of them a better life but only if one of them had a very, very terrible life. Would you do it?

Alex and Robin, both economists, said “yes” without hesitation. But to the philosopher who posed the question, the intuitively correct answer was clearly “no.” Who’s right? Alex suggests that we should simply overcome our gut intuitions and think logically here, and the logical answer is to favor the greatest good. Will and Carina both correctly reply that Alex, too, is relying on intuition – the utilitarian intuition that we can (in some rough-and-ready way) compare satisfaction across persons. So we can never fully escape the appeal to intuition.
But does that observation dispose of the matter? Can we just pick our favorite intuition and run with it? Alex is still right that our intuitions are inconsistent. One intuition tells us that individuals have personal claims that should never be violated. Fine. But another intuition tells us it’s absurd to impose monstrous losses for minuscule gains. The lifeboat situation is a classic example of where this alternative intuition kicks in. See my post on lexicographic preferences for more on this point.
Moreover, it turns out the intuition that answers “no” to Alex’s hypothetical is highly dependent on the frame of reference. Consider a variant on the story:
Suppose that you could give one child a substantially better life, but only if a million other children had a worse life. Would you do it?

I’m guessing most people’s intuition says “no” on this question as well – thereby dictating the opposite result. The hypotheticals differ only because the language implies actively changing a status quo that – for some reason – we assume creates a rightful claim by one or more children. But in either hypothetical, you’re really being asked to choose between two situations:
A. One million children live relatively good lives, and one child lives a terrible life.

B. One million-and-one children live so-so lives.

Now what does your intuition tell you? Without a frame implying that A or B is the status quo, the answer doesn’t seem so obvious. Maybe I’m projecting, but I suspect most people would appeal to some form of utilitarian reasoning here (possibly asking questions about just how bad “terrible” is, what “so-so” is like, etc.).
So apparently the frame matters to us. But where does this frame come from? Why do we privilege the presumptive status quo? Take Alex’s original hypothetical, but add in some extra context:
The million children who stand to gain are the starving and oppressed people of an African nation. The one child who stands to lose is the son of the local tyrant.

Feel free to modify the context to explain why the tyrant’s son’s life will suck after his dad is dethroned. With the context filled in, I suspect most who said “no” before will say “yes” now. Why? Because we no longer think the status quo creates any rightful claims to its continuation.
Now, none of my reasoning thus far “proves” utilitarianism is the right moral theory. But the discovery of anti-utilitarian intuitions doesn’t disprove it, either. The question to ask now is, why do we care about the frame of reference? Why does it matter whether the society in question is a tyranny or something else? Once our intuitive commitment to particular frames is challenged, the hypothetical begins to look a lot like the A vs. B choice I posed above.
And just to be clear, we must challenge the frames, because that is the central question of ethics and political philosophy. What rights do people have, if any? What governmental and institutional structures are best? How should initial endowments be determined? Appealing to gut intuition is insufficient here, because the whole issue is which of our conflicting intuitions ought to be trusted, and under what circumstances. In this process, it’s useful to realize that not all intuitions are fundamental or axiomatic. Some originate in the interaction between more basic intuitions and facts about the world, such as the degree of scarcity or the typical costs and benefits of alternative courses of action. This is where utilitarian reasoning proves most useful. That doesn’t mean only utilitarian intuitions matter, but assuredly they ought to play a role – I’d recommend a relatively large one – in reaching our reflective equilibrium.