Friday, March 12, 2010

Peer Production and Transaction Costs

The concept of peer production is hardly new to me. Wikipedia and Linux, its most visible examples, long ago convinced me that intellectual products can be created by a distributed network of individuals working largely without supervision or central control.

But until a couple of weeks ago, when I finally listened to Clay Shirky’s interview on EconTalk (recorded over a year ago), I had a difficult time fitting peer production into my mental models. It’s not that I didn’t understand peer production; it’s that I couldn’t quite integrate it with what I already knew about typical modes of production. Shirky, by invoking Ronald Coase’s theory of the firm -- the insight that firms exist because organizing production internally can be cheaper than contracting for it in the market -- finally let me put it together.

(Disclaimer: The following doesn’t necessarily represent Shirky’s view; it’s just my take on what he said. Also, I have not read Yochai Benkler's work on this subject. Finally, my conclusions will probably be obvious to some readers -- but to compensate, I promise an example drawn from my work on Fringe!)

The first step is to uncover an assumption hidden in most models of labor supply. The “typical” labor supply curve looks something like Figure 1. This curve represents an individual’s willingness to supply labor at each wage. As the figure shows, the individual won’t supply any labor until offered a wage that exceeds some minimum, denoted w₀ and dubbed the “reservation wage.” This reflects the assumption that exerting effort is inherently unpleasant, so some positive wage is required to induce any work at all.

But what if labor is not inherently unpleasant? Then the labor supply curve could look something like Figure 2. Here, we see that even at a wage of zero, the individual willingly supplies some positive amount of labor. This is obviously true for a variety of activities; I blog for free, for instance. Denote this zero-wage quantity L₀ and dub it the “free labor supply.”
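To make the contrast concrete, here is one way to write the two curves as piecewise functions; the notation and the linear shape above the reservation wage are mine, chosen purely for illustration:

$$
L_{\text{typical}}(w) =
\begin{cases}
0, & w < w_0 \\
s\,(w - w_0), & w \ge w_0
\end{cases}
\qquad
L_{\text{free}}(w) = L_0 + s\,w
$$

with some slope $s > 0$. In Figure 1 nothing is supplied below the reservation wage $w_0$; in Figure 2 the quantity $L_0 > 0$ is supplied even at $w = 0$.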

But that’s not the end of the story. Most worthwhile projects will require more labor than any one individual will provide for free. To complete these projects without paying wages, you need to assemble the free supply of many, many individuals. Prior to the modern information age, this certainly happened; think barn-raisings and charity projects. To do it, however, you usually had to get people together in the same place, requiring both transportation and physical space. Information technology has dramatically reduced these transaction costs -- specifically, the costs of coordinating team production. It’s now possible to assemble the free labor supply of thousands of people at much lower cost than before.

Thinking of peer production in this way helps clarify its limits. What kinds of projects can be done by this method, and which can’t? First, the project must be one for which people have labor supply curves like Figure 2 -- that is, one for which people willingly supply some free labor.

Second, the project’s other transaction costs must be sufficiently low. Coordination costs include not just drawing laborers together (physically or virtually), but also making sure their separate efforts mesh properly. The pieces have to fit, so to speak. And this puts a premium on modularity -- the capacity of a task to be broken up into pieces that can function, at least to some degree, on their own. Wikipedia is a nice example: an error or conflict within a single entry (say, Walmart) does not inhibit ongoing work on another entry (say, Target or quantum field theory). I don’t know enough about software programming to give examples there, but my understanding is that it has similarly modular features.
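As I understand it, the point is that separate pieces hide behind narrow interfaces, so one contributor’s work (or mistake) doesn’t block another’s. A minimal sketch in Python, with hypothetical function names invented purely for illustration:

```python
# Two modules that communicate only through a narrow interface
# (a plain string): each can be written, tested, and fixed
# independently, just as separate Wikipedia entries can.

def spell_check(text: str) -> str:
    """Contributor A's module: fix common misspellings.
    A bug or dispute here never blocks work on word_count below."""
    corrections = {"teh": "the", "recieve": "receive"}
    return " ".join(corrections.get(word, word) for word in text.split())

def word_count(text: str) -> int:
    """Contributor B's module: count words.
    Depends only on the shared string, not on spell_check's internals."""
    return len(text.split())

if __name__ == "__main__":
    draft = "teh quick brown fox"
    print(spell_check(draft))   # "the quick brown fox"
    print(word_count(draft))    # 4
```

The narrow interface is what keeps the coordination cost low: neither contributor needs to know, or agree with, how the other’s piece works inside.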

Not every project is of this nature. To take an example from my current livelihood: it’s awfully difficult to write a script in a distributed fashion. Anyone who’s read the results of a tandem writing assignment knows this. Every now and then, time constraints in the Fringe writing office will require us to “gang-bang” a script (yes, that’s actually what we call it): acts and scenes must be divvied up among all the writers to get the script written faster. But this only works because a detailed overall outline has already been written, either by an individual or by a group working in concert.

Moreover, once the individual pieces have been stitched together, the combined script is typically a hairy mess. The tone is inconsistent; some bits of necessary information have been duplicated across scenes; other necessary components have fallen through the cracks. To make the script coherent, a single writer or pair of writers (usually the writer(s) of record on the episode) must go through the script revising, reworking, and rewriting substantially. As Warren Buffett wryly notes, you can’t make a baby in one month by getting nine women pregnant; the same is true of a script. In some cases, I’ve seen a gang-banged script take longer to write than a regular script.

To summarize: peer production works because information technology has reduced the transaction costs that had previously prevented the coordination of large amounts of free labor from many individuals. Low transaction costs, then, are the key to peer production. But some kinds of transaction costs remain high, especially for projects that cannot easily be made modular; for those projects, peer production is still not a viable option.

1 comment:

Ran said...

As a programmer, I feel the need to clarify that the original Linux kernel was created by a single programmer (Linus Torvalds), and most of the common shells and utilities you'll see on Linux machines are actually GNU utilities created in a more cathedral-like fashion. Likewise, individual flavors of Linux were usually spun off by (initially) very small groups.

I think this is true of most software that has many contributors: one person, or a very small team, started it and created the framework for later contributions. After a while, you can look back and see that the original programmer's (or programmers') contribution is a small part of the current whole, but the project could never have gotten off the ground if it hadn't been made coherent first.

If you look at Wikipedia from a certain standpoint, you see a somewhat similar thing there: Jimmy Wales and his collaborators put functional wiki software in place (since improved by many other developers) that made it possible for all these other contributors to add and edit articles. Without that key initial step, none of the rest could have followed.

Your "gang-banging" approach does something similar — it starts with the writing of a detailed outline, and then many hands flesh out the outline. So, why doesn't it work? I'm just speculating here, but maybe the initial contribution isn't enough to start a coherent project. It might work better if the initial creator also wrote a few of the scenes — maybe the first non-throwaway scene, a scene about one-third of the way through, a scene about two-thirds of the way through, and the last non-throwaway scene. These scenes don't have to be perfect, but if they're enough to make clear the intended tone of the episode, then they might help.

There may also be ways to make the final revision pass a bit more collaborative; for example, specific changes (e.g., scene X needs to mention, or not mention, thing Y) can be re-delegated.

… none of which is to suggest that this is necessarily worthwhile. Part of the beauty of Linux and Wikipedia is that they're still growing, still improving, with no end in sight. Obviously a Fringe episode can never make use of long-term, wide-scale collaboration, what with its fixed size and deadline; and it's not obvious to me that short-term, narrow-scale collaboration would have any of the same benefits.