Thursday, January 02, 2003
Adverb to the Wise
Posted by Glen Whitman at 12:01 PM
I asked my brother Neal (the linguist) if he knew of any nouns that have been turned into adverbs aside from shotgun and bitch, as discussed in a previous post. After pondering for a while, he observed that many nouns have become adjective-modifying adverbs in phrases like "*ice* cold," "*bone* dry," and "*rock* hard." But he couldn't think of any other nouns that are verb-modifying adverbs, as in the phrases "ride *shotgun*" and "ride *bitch*."
Then I posed the question to my sister Ellen, who came up with commando, as in the phrase "go *commando*" (i.e., wear pants without underwear). This definitely seemed to fall into the same class as shotgun and bitch: verb-modifying adverbs. But that got Neal thinking, and he said there's a good argument that *none* of these are really adverbs, because their seemingly adverbial uses are too situation-specific. There's nothing you can do bitch or shotgun except ride (or sometimes sit); there's nothing you can do commando except go; and so on. Arguably, then, we don't really have verbs with adverbs here -- we just have verbs that happen to have spaces in them, in a manner similar to the verb "put up with."
That sounds right to me. But then again, there seem to be other adverbs that are at least awkward when paired with anything other than a specific verb. There are few things one would do fitfully except sleep, for example. So how many different verbs must a new prospective adverb be used with before it becomes a real adverb?
Sunday, December 29, 2002
Rawls and the Precautionary Principle
Posted by Glen Whitman at 4:34 PM
Sasha Volokh, Mark Kleiman, and Kieran Healy have had an interesting exchange about the justification, if any, for the so-called precautionary principle. In a nutshell, this principle says that if there is uncertainty about the possible negative effects of a new technology (or the equivalent), society ought to resist using it until its negative effects are known. (Sasha defines it as “the principle that we should avoid new technologies unless they're proven harmless,” but that may be a little overstated – it’s not that any harm at all is unacceptable, but that unknown risks of harm are unacceptable.) I’ve always found this principle problematic, in part because I think it bears a strong resemblance to John Rawls’s maximin principle, and the two share a common error: relying on an absurdly high degree of risk aversion.
If the analogy seems strained, bear with me. Rawls’s political theory (and I should admit at this point that what I know of Rawls is largely second-hand, as I’ve only read bits and pieces of his actual writing) asks us to imagine ourselves behind a “veil of ignorance,” where each person lacks knowledge about what position he will occupy in society. You don’t know, yet, whether you’ll be the best-off person, or the worst-off person, or someone in between. Behind the veil, the future members of society will negotiate to an agreement about what sort of society they will live in. Okay, so far so good; although some philosophical objections could be made to this construct, it makes enough sense for me to accept it for the sake of argument. The problem is Rawls’s conclusion: that people behind the veil of ignorance will accept the “maximin principle” (not Rawls’s term, but a commonly used name for it), which says to choose the form of society that maximizes the position of the worst-off person.
It should be apparent that this is an incredibly risk-averse approach, because it focuses exclusively on the worst-case scenario. No increase in the wealth or happiness of any or all of the other positions in society can compensate for the slightest loss of wealth or happiness for the person at the bottom. If this principle were applied to investment decisions, investors would never make high-risk, high-return investments, or even medium-risk, medium-return investments, when low- or zero-risk investments were available. To me, it is far more plausible that people behind the veil of ignorance would be either risk neutral (assigning equal weight to all possible outcomes) or moderately risk averse (giving somewhat more weight to the worse outcomes).
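To make the contrast concrete, here's a minimal sketch (with entirely invented payoff numbers) of how the maximin rule and a risk-neutral expected-value rule can rank the same pair of societies differently:

```python
# Toy comparison of decision rules over hypothetical "societies," each
# represented by the payoffs to its N positions. All numbers invented.

societies = {
    "egalitarian": [10, 10, 10, 10],
    "growth":      [9, 20, 40, 80],  # worst-off slightly poorer, others far richer
}

def maximin(payoffs):
    """Rawls-style rule: judge a society solely by its worst-off position."""
    return min(payoffs)

def expected_value(payoffs):
    """Risk-neutral rule: assume a 1/N chance of occupying each position."""
    return sum(payoffs) / len(payoffs)

for name, payoffs in societies.items():
    print(f"{name:12s} maximin={maximin(payoffs):5.1f} "
          f"expected={expected_value(payoffs):6.2f}")

# Maximin picks "egalitarian" (10 > 9) even though "growth" offers a far
# higher expected payoff (37.25 vs. 10) -- the extreme risk aversion
# described above.
```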
My philosopher friends tell me Rawls has responded to this objection by saying that, behind the veil of ignorance, you cannot assume you have an equal likelihood of being each person in a society. In other words, if there are N people in a society, you can’t assume there’s a 1/N chance of being each one. Now, I have no idea how Rawls justifies this additional assumption, but it makes zero sense to me. I can’t imagine how he would justify any alternative probability distribution (e.g., double the weight on the worst-off, half the weight on the best-off) unless he wants the “probabilities” to represent a value-based weighting of outcomes instead of a likelihood-based weighting. If so, then he’s building into the construct some of the conclusions that people behind the veil were supposed to arrive at themselves through negotiation. But my understanding is that Rawls is not positing some other distribution -- he’s rejecting the notion of people behind the veil of ignorance employing any kind of probability distribution *at all*. This, however, is impossible; as one of my grad school professors always put it, “You cannot have no beliefs.” You may have more or less justified beliefs, but lacking any beliefs at all leads to paradoxes like the letter-switching paradox.
(Note: On this page, the author gives a pretty good explanation of how to resolve the paradox, except that he gets the “moral of the story” wrong. You must have beliefs about the decision procedure of Johnny Moss’s ghost, even if you don’t know what that procedure is. Otherwise, the mathematical argument for always switching still works.)
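For what it's worth, a quick simulation makes the same point, assuming the letter-switching setup is the familiar two-envelope one in which one letter holds twice the amount of the other. The uniform distribution below is just an invented stand-in for whatever procedure the ghost actually uses:

```python
import random

# Two-envelope ("letter-switching") simulation. The point is only that
# once SOME concrete filling procedure is assumed -- any belief at all --
# always switching gains nothing over always keeping.

def trial():
    x = random.uniform(1, 100)        # assumed filling procedure (hypothetical)
    envelopes = [x, 2 * x]
    random.shuffle(envelopes)
    return envelopes[0], envelopes[1]  # (kept, other)

N = 200_000
keep = switch = 0.0
for _ in range(N):
    k, o = trial()
    keep += k
    switch += o

print(f"average if you always keep:   {keep / N:.2f}")
print(f"average if you always switch: {switch / N:.2f}")
# Both converge to the same value (about 75.75 here), so the
# "switching yields 1.25x on average" argument cannot survive any
# actual belief about how the amounts are generated.
```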
So how does this all relate to the precautionary principle? In the Rawlsian scenario, the uncertainty is about “who you will be” in a future society. In the new technology scenario, the uncertainty is about which conceivable effects (positive and negative) the technology will actually have. In both scenarios, a sensible approach is to take into account all the possible outcomes with their associated probabilities, but the maximin and precautionary principles ask us instead to focus on the worst conceivable outcome.
In addition, proponents of both principles try to justify their extreme risk aversion by saying that we lack a probability distribution. Rawls did this by simply asserting that we don’t have one behind the veil. The defenders of the precautionary principle do this by saying that so little is known about some new technologies that we don’t know what probabilities to attach to different outcomes. While it’s true that we sometimes lack good information, that doesn’t mean that we “have no beliefs” – it just means that we have less confidence in them.
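One way to see the difference: model “less confidence” as a wider distribution rather than a missing one. In this sketch (all outcomes and probabilities invented), two technologies have the same expected effect, and only a rule keyed to the worst case treats them differently:

```python
# "Less confidence" as a wider probability distribution over a
# technology's net effect, not the absence of one. Numbers invented.

beliefs = {
    "well-studied tech": {-1: 0.1, 2: 0.8, 5: 0.1},    # tight around +2
    "novel tech":        {-10: 0.2, 2: 0.5, 10: 0.3},  # same mean, wide spread
}

for name, dist in beliefs.items():
    mean = sum(outcome * p for outcome, p in dist.items())
    worst = min(dist)
    print(f"{name:17s} expected effect={mean:+.2f}  worst case={worst:+d}")

# A risk-neutral evaluator treats the two alike (both have expected
# effect +2.00); a precautionary rule keyed to the worst case rejects
# the novel tech outright. A moderately risk-averse weighting falls in
# between, discounting but not ignoring the better outcomes.
```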
My comments here are probably pretty unhelpful when it comes to the much more difficult question of how we *should* weight good and bad possible outcomes, given that most of us are somewhat (but not ridiculously) risk averse. My point is merely that the uncomfortable in-between space is where our actual decisions do, and should, take place.