Let one person die a terrible and tortured death, but alleviate the headaches of billions of others by one second. ... If the billions are large enough in number, is this worth it? Or does the suffering of the lone individual hold special status?

And Alex replies:
The clearest reason to think that we should trade the terrible and tortured death of one person in order to alleviate the headaches of billions is that we do this every day. Coal miners, for example, risk their lives to heat our homes and to generate the electricity that drives this blog. We know that some of them will die horrible deaths, but few of us think that we are morally required to give up electricity.

As several commenters (including me) point out, the analogy doesn't really work. Unlike the (presumably unwilling) victim in Tyler's hypothetical, coal miners voluntarily accept the risks and get compensated for doing so. Our willingness to allow miners to take such risks is not necessarily evidence that we're all really utilitarians.
Alex's coal miner story is still useful, however, because it highlights the way in which markets and voluntary exchange allow us to avoid unpleasant ethical dilemmas. Tyler's hypothetical is a truly vexing problem: one person's uncompensable suffering is required to produce great benefits. Fortunately, we don't often encounter situations like that. Instead, we face situations like Alex's, in which we can (at least ex ante) give the victims something in return for their sacrifice. Requiring their consent helps to ensure that the benefits really are large enough to justify the costs; otherwise, those making the sacrifice would demand so much compensation that the rest of us would be unwilling to pay.
Even in those social situations where we can't get true consent, as in the tort law governing genuine accidents between strangers, we at least try to pay the victims enough to make them "whole" (and simultaneously give potential tortfeasors an incentive to weigh the benefits of their activities against the costs).
None of this means that we do, or should, reject utilitarian ethics. It just means that we don’t always have to make tough utilitarian trade-offs, because it’s often possible to convert utilitarian gains into Pareto improvements.