Friday, April 23, 2004

Black Is White, Up Is Down...

As a result of the California state budget crunch, the CSU system has taken some hits. Its funding from the state has been reduced, and fees – i.e., tuition – have had to increase. Naturally, the California Faculty Association (the teachers’ union of which I am an unwilling member) vigorously opposes all this. I received a CFA flyer in my mailbox this week announcing a rally against cuts, and one of its slogans has been sticking in my craw for a couple of days now: “Fee Increases ARE Tax Increases.”

That’s not just an overstatement; it’s about as close to the opposite of the truth as I can imagine.

First, taxes are involuntary payments, usually made irrespective of whether the payer receives any service in return. Fees, or at least these fees, are voluntary payments (you only pay them if you attend a CSU institution), and the payer is the direct recipient of the educational benefits.

Second, CSU students receive a massive subsidy. According to some estimates, it costs around $10,000 per academic year to enroll one student full time, whereas the fee is about $2400. So the typical student receives a subsidy covering about 75% of the cost of his education. A fee increase is not a tax; it’s a reduction in the size of the subsidy. When people say student fees have increased by 58% in recent years, they’re referring to the fact that fees used to be around $1500 and have risen to about $2400. To put that in perspective, the size of the subsidy has dropped from about 85% to about 75% of the cost. (Incidentally, these figures don’t take inflation into account. In real dollars, the percentage increase would be a good bit less than 58%.)
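
For concreteness, here’s the arithmetic with the round figures above (a quick sketch in Python, using the post’s approximate dollar amounts, which is why the computed increase lands near, rather than exactly at, the quoted 58%):

    cost = 10000     # rough estimated cost per student per academic year
    old_fee = 1500   # approximate fee a few years ago
    new_fee = 2400   # approximate current fee

    print((new_fee - old_fee) / old_fee)  # fee increase: 0.60, near the quoted 58%
    print(1 - old_fee / cost)             # old subsidy share: 0.85
    print(1 - new_fee / cost)             # new subsidy share: 0.76, i.e., about 75%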

Third, for the actual taxpayers, a fee increase is an alternative to a tax increase. If the fees did not increase, the only way CSU could keep its funding would be through increased taxes or borrowing (and of course, borrowing just means more taxes from future taxpayers). Raising fees is thus a way to avoid taxing the rest of the public – you know, all those people who are not getting a heavily subsidized education – even more.


Thursday, April 22, 2004

Are You Looking for Purpose in Your Life?

Then look no further. (Link courtesy of Bryan Westhoff.)


Brain Politics

Tyler Cowen links to a fascinating article about how the brain reacts to stimuli of a political or ideological nature.

At the start of the session, when they look at photographs of Mr. Bush, Mr. Kerry and Ralph Nader, subjects from both parties tend to show emotional reactions to all the candidates, indicated in the ventromedial prefrontal cortex, an area of the brain associated with reflexive reactions.

But then, after the Bush campaign commercial is shown, the subjects respond in a partisan fashion when the photographs are shown again. They still respond emotionally to the candidate of their party, but when they see the other party's candidate, there is more activity in the rational part of the brain, the dorsolateral prefrontal cortex. "It seems as if they're really identifying with their own candidate, whereas when they see the opponent, they're using their rational apparatus to argue against him," Professor Iacoboni said. [emphasis added]

The research is really too preliminary to reach any political conclusions; as Tyler notes, there were only 11 data points. But what the heck – what’s a blog for, if not for throwing out half-baked ideas? Here’s what I’m thinking: this is yet another argument for divided government. When one party controls both the executive and legislative branches, the members of the party respond emotionally to most policy proposals, and nobody with power is thinking rationally about them. With divided government, there will always be someone in power who will think rationally about the other side’s proposals. This is, perhaps, why the Republicans only discover their limited government principles when there’s a Democrat in the White House.


Wednesday, April 21, 2004

If You Turn the Gun on Yourself, Do You Have to Shoot It?

I always pause and think for a second when I read or hear news stories that contain sentences like this one:

(1) He killed 3 people before turning the gun on himself.
Those of you who know me personally may be guessing that my reaction is, “I didn’t know you could turn a gun on! I’ve never seen a power switch on one!” Well, you’re right, but that’s not what I want to talk about here. Another reaction, of course, is disgust with the actions of the killer, but I don’t want to talk about that either.

I’m wondering why it is that the expression “turn a gun on oneself” always seems to imply that the gun wielder also (a) fires it, (b) hits the target, and (c) dies from the wound. I mean, Dad always told me, “Don’t point a gun at anyone unless you are planning to kill him,” but not everyone follows this rule. Plenty of people turn guns on people (including themselves) and only threaten to fire them. And even those who do follow the rule, and turn guns only on those they mean to kill, can still miss. Furthermore, even those who hit their target (self or otherwise) might fail to deliver a fatal shot. I know that “turned the gun on himself” doesn’t have to mean “killed himself,” because I can say this:
(2) He turned the gun on himself, but decided not to fire it.
I did a Google search for “turn the gun on”, “turned the gun on”, and “turning the gun on” and examined the first 3 pages of hits for each search. Of the 60 relevant examples I found, 53 of them (88%) were followed by a reflexive pronoun (himself, herself, etc. – mostly himself), and of these 53, only two were not clear cases where turning the gun on oneself meant killing oneself. Even in these two cases, though, the gunmen actually did kill themselves, but that detail was made explicit later on, instead of being implied by the “turn the gun” phrase. Here they are:
(3) Gonzalez then turned the gun on himself and committed suicide.
(4) Deculit turned the gun on himself. "He blew his brains out," Witkowski said.
Of the 7 examples that mention turning the gun on someone other than oneself, 5 of them meant killing that someone. And as with the turn-gun-on-self cases, in the remaining two turn-gun-on-someone-else examples, the gunmen actually did kill their victims, but that fact was made explicit elsewhere.
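
Just to keep the bookkeeping straight, here’s the same tally as a quick Python sketch (the categories and counts are my hand-collected numbers from the informal Google survey above, not output from any real corpus tool):

    # Hand-collected counts from the informal Google survey above.
    counts = {
        ("self", "killing implied"): 51,
        ("self", "death stated separately"): 2,
        ("other", "killing implied"): 5,
        ("other", "death stated separately"): 2,
    }
    total = sum(counts.values())  # 60 relevant examples
    on_self = sum(n for (target, _), n in counts.items() if target == "self")
    print(on_self, "of", total, "followed by a reflexive pronoun")  # 53 of 60, about 88%
    # Note that in all 60 examples, the person the gun was turned on
    # ended up dead, whether the phrase implied it or stated it separately.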

So overall, then, if you read that someone turned a gun on someone, chances are near 100% that the second someone got killed. OK, so far so good. Now how about this:
(5) He killed 3 people before pointing the gun at himself.
Now the killer probably survived, maybe surrendering or being disarmed by the police. Why doesn’t this sentence imply the same thing as (1)? Well, I have one pretty good explanation. Larry Horn talks about this kind of situation as the Division of Pragmatic Labor. As it applies here, we’re used to hearing turn the gun on as a synonym for “shoot and kill,” so if someone deliberately chooses a less common phrasing, the audience infers that the speaker is trying to convey some message that would not be conveyed simply by using the more common phrasing. In this case, the message is, “He only pointed the gun; he didn’t actually fire it.” But now my question is: why did turn the gun on become the more common way of expressing the idea of “shoot and kill”, while point the gun at did not? I’m assuming it was just a random thing, but if someone knows different, I’d be interested in hearing about it.

Oh, by the way: I also learned during the Google search that you can turn a gun on, as demonstrated in examples like this:
(6) If you simply turn the gun on, it will automatically default to standard semiautomatic mode.


Tuesday, April 20, 2004

The Trade-Offs of Style

Many of the rules of grammar and style you learned from your high school English teachers are pointless and stupid – a point the folks at Language Log have made repeatedly. I don’t oppose all linguistic prescriptivism (see here and here), but there’s no doubt that many of the most common prescriptions are ridiculous. One example came to mind while I was grading some student essays. I noticed that a significant fraction of students insisted on using constructions such as “A regression analysis was performed…” and “Conclusions were reached that….” Why not just say, “I performed a regression,” and “I conclude that…”? Because somewhere along the line, they had English teachers who told them never, ever to use the first person. As these examples show, following that admonition strictly means violating another English teacher’s admonition, which says never to use the passive voice.

Both prescriptions serve a legitimate goal. Students permitted to use the first person will often litter their writing with I’s, even where they’re totally unnecessary – e.g., “I found an article by John Smith arguing that…” instead of “John Smith argues that….” And students permitted to use the passive voice will use it to avoid stating who did what – e.g., “Mistakes were made” instead of “Administration officials made mistakes.” But it’s nearly impossible to follow both rules strictly without writing some incredibly awkward sentences, which is why both should be regarded as rules of thumb. I don’t recall a single one of my English teachers admitting that. I suspect the problem stems from one or more of the following: (1) Some English teachers don’t understand the rationales for the rules. (2) English teachers who understand the rationales don’t think students will understand them. (3) English teachers need hard-and-fast grading rules, lest their marks appear arbitrary. I’m sympathetic to this last reason, which is one reason I’m much happier grading math than writing. But if you’re going to teach English properly, there’s no avoiding the burden of subjectivity in grading.

Addendum: The previous sentence was supposed to be the last, but then I noticed it began with the word “but.” That’s another English-class no-no that I break without remorse. If I didn’t break it, a lot more of my sentences would be run-ons. See? Another trade-off. D’oh! That was a sentence fragment…


Sunday, April 18, 2004

Game Show Theory

A couple of years ago I was a contestant on Win Ben Stein’s Money. I didn’t win, but my name got put into the producer’s file of “good game-show contestant types.” As a result, I’ve been asked a couple of times since then to audition for other game shows. A couple of weeks ago, I auditioned for an upcoming show called “On the Cover,” and in so doing I figured out some interesting things about the selection process.

In both shows’ auditions, the first step was a 30-question quiz. After the quiz had been graded, the producer read the names of those who had “passed,” and everyone else was dismissed. But neither time did he reveal how many correct answers were required to pass. During the “Ben Stein” audition, I wondered why not. During the “On the Cover” audition, the reason became clear. Before giving the quiz, the producer explained to the participants why cheating on the quiz (say, by looking at your neighbor’s answer sheet) would be a bad idea. Show contestants, he said, are matched against others with similar scores on the quiz. If you cheat, you might increase your chance of being on the show, but only by increasing your chance of getting trounced in front of a national audience.

And that’s when I realized the reason for the secret pass-bar: If people knew the pass-bar, they might deliberately miss questions in order to get matched against inferior contestants. If the pass-bar were 20, for instance, and if you knew the answers to at least 25 questions, you might deliberately miss four (or if you're really confident, five) of them. The secret pass-bar makes it more difficult to game the system that way. (Though not impossible; if you knew the answers to all 30, you might guess you could safely give wrong answers to two or three, because it’s hard to imagine the pass-bar being set that high.)
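
To see the trade-off concretely, here’s a toy calculation (the numbers are invented: suppose you can answer 25 of the 30 questions and believe the secret pass-bar is equally likely to sit anywhere from 15 to 22):

    # Invented prior: the pass-bar is uniform over 15..22.
    pass_bar_prior = range(15, 23)

    # For each number of questions you choose to answer correctly,
    # compute the probability of clearing the unknown bar.
    for submitted in range(15, 26):
        p_pass = sum(bar <= submitted for bar in pass_bar_prior) / len(pass_bar_prior)
        print(f"answer {submitted} correctly -> pass with probability {p_pass:.2f}")

    # Sandbagging down toward the unknown bar buys you a weaker bracket,
    # but every answer you deliberately miss raises the risk of failing
    # the quiz outright; that risk is exactly the deterrent a secret bar creates.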

But why should the producers care whether people end up mismatched? The obvious reason is that a close game is more interesting to watch. The more important reason, I suspect, is monetary. In a game where a single person has a chance to win a large prize while the runners-up get a pittance (as was the case on “Ben Stein” and will be the case on “On the Cover”), matching the best contestants against each other reduces the expected cost of prizes, because only one of them can go home with the big prize. Matching the worst contestants against each other increases the number of episodes in which the big prize is not awarded.
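
My hunch about the prize budget is easy to check with a toy simulation. Everything here is assumed rather than taken from either show: contestants have skill levels drawn at random, the higher-skilled contestant wins each episode, and the winner claims the big prize with probability equal to his or her skill (a stand-in for a final bonus round):

    import random

    def expected_prizes(pairs):
        """Expected number of big prizes paid out, summed over episodes."""
        # The higher-skilled contestant wins, then claims the big prize
        # with probability equal to that skill.
        return sum(max(a, b) for a, b in pairs)

    random.seed(0)
    skills = [random.random() for _ in range(1000)]  # hypothetical contestant pool

    # Random matching: shuffle the pool and pair contestants off.
    pool = skills[:]
    random.shuffle(pool)
    random_pairs = list(zip(pool[::2], pool[1::2]))

    # Quiz-score matching: sort by skill and pair neighbors.
    ranked = sorted(skills)
    matched_pairs = list(zip(ranked[::2], ranked[1::2]))

    print(f"random matching: {expected_prizes(random_pairs):.0f} expected big prizes")
    print(f"score matching:  {expected_prizes(matched_pairs):.0f} expected big prizes")

Under these assumptions, score matching pays out noticeably fewer big prizes than random matching, because pairing the strongest contestants against each other guarantees that at most one of them can cash in.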

As an added bonus, “departing contestants” like me can console ourselves with the knowledge that we were pitted against known equals (or near-equals), not just a random draw from the contestant pool.
