When the Internet first arose, it served primarily as a medium for person-to-person communications, such as e-mail and file transfers. Although some forms of mass communications, such as newsgroups and electronic bulletin boards, did exist on the early Internet, they represented a relatively small proportion of overall Internet traffic. The nature of the Internet underwent a fundamental change during the mid-1990s. The privatization of the Internet backbone and the concomitant elimination of the commercialization restrictions triggered an explosion of mass media web content.
The emergence of the Internet as an important medium for mass communications effected an equally important shift in the importance of Internet intermediaries, both in terms of helping end users filter out bad content and in helping them identify and obtain access to good content. In addition, the literature on the economics of intermediation underscores how intermediaries can play key roles in helping end users obtain access to the content they desire. Together these insights demonstrate that intermediation should not be regarded as a necessary evil, as some commentators have suggested. On the contrary, intermediation can play a key role in helping end users obtain access to the content and applications they desire.
This passage, and the rest of the paper, elides one of the most fundamental distinctions in the Internet’s architecture: that between routers and servers. To recap, a router is a computer that transmits other computers’ packets from one place on the network to another—it generally is not the destination of packets transmitted by network endpoints. In contrast, servers are network endpoints that provide services to end users. These can include web servers, mail servers, instant messaging servers, gaming servers, and so forth.
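The router/server distinction can be sketched in a few lines of Python. This is a toy model, not a real network stack, and all class and variable names here are illustrative: the point is simply that a router forwards packets based on their destination without reading the payload, while a server is the endpoint that reads the payload and answers.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    dst: str      # destination address
    payload: str  # application data that only the endpoints care about

class Server:
    """A network endpoint: terminates the connection and answers the request."""
    def __init__(self, name: str):
        self.name = name

    def deliver(self, pkt: Packet) -> str:
        # Only the endpoint reads the payload and generates a response.
        return f"{self.name} handled: {pkt.payload}"

class Router:
    """Forwards packets toward their destination; never reads the payload."""
    def __init__(self, routes: dict):
        self.routes = routes  # destination -> next hop (another Router, or a Server)

    def deliver(self, pkt: Packet) -> str:
        return self.routes[pkt.dst].deliver(pkt)

# A two-hop path: edge router -> core router -> servers.
web, mail = Server("webserver"), Server("mailserver")
core = Router({"web": web, "mail": mail})
edge = Router({"web": core, "mail": core})

print(edge.deliver(Packet("web", "GET /index.html")))
```

In this sketch the routers are interchangeable conduits: swapping one out changes nothing the user sees, whereas each server embodies its own application-level behavior.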
Yoo lumps these two very different types of computers together under the category of “intermediaries,” and then proceeds to argue, in essence, that because servers often exercise an editorial role, packet filtering by routers cannot be censorship. This is roughly equivalent to arguing that because Newsweek prints some letters to the editor and declines to print others, it isn’t censorship if the US Postal Service refuses to deliver magazines it doesn’t approve of.
To get more concrete, it’s obviously true that Google exercises a certain amount of editorial discretion when it develops its search algorithms. But it’s a huge leap to equate Google’s editorial function with packet filtering by a large ISP such as Comcast. The reason, again, is the difference between servers and routers, which play fundamentally different roles in the Internet’s architecture. If you read Yoo’s paper carefully, you’ll find that virtually all of the examples he cites of “good” content filtering are performed by servers, while most of the commonly cited examples of potentially harmful behavior by service providers involve routers. Take spam, which Yoo mentions. While a few ISPs take limited anti-spam measures at the network level (such as blocking outbound port 25, the SMTP port), the vast majority of spam filtering takes place at network endpoints: either on mail servers or on end users’ computers. And this is a good thing! I get my email service from parties other than my ISP, and I already have a spam filter I’m happy with. It would make me angry if my ISP started intercepting my communications with my email provider in a misguided effort to “help” me control my spam.
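The endpoint-filtering point can be illustrated with a toy spam filter. This is a hedged sketch: real filters such as SpamAssassin score many weighted features, and the keyword list below is purely illustrative. What matters is that everything here runs at a network endpoint, so the user (not the ISP) chooses and tunes the policy.

```python
# Toy endpoint spam filter. All of this runs on the user's own machine or
# mail server -- a network endpoint -- not inside the network, so the user
# controls the filtering policy.

SPAM_KEYWORDS = {"viagra", "lottery", "prince"}  # illustrative, not a real corpus

def is_spam(message: str, keywords: set = SPAM_KEYWORDS) -> bool:
    # Naive check: does any (punctuation-stripped, lowercased) word match?
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & keywords)

inbox, spam_folder = [], []
for msg in ["Meeting moved to 3pm", "You have won the LOTTERY!!!"]:
    (spam_folder if is_spam(msg) else inbox).append(msg)

print(inbox)        # legitimate mail
print(spam_folder)  # filtered mail
```

Because the filter is just code on the endpoint, a user who dislikes its behavior can change the keyword set, swap in a different filter, or turn it off entirely; none of those choices are available when filtering happens inside the ISP's routers.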
The distinction between routers and servers matters because the costs of switching ISPs (routers) are much higher than the costs of switching online applications (servers). Most users will be customers of just one ISP at any given time, but they’ll interact with dozens, if not hundreds, of different network endpoints. Right now, for example, I’m a Comcast subscriber. Using my Comcast subscription, I access about 150 RSS feeds, 25 podcasts, four email accounts, two instant messaging accounts, a Twitter feed, and dozens of websites on any given day. Because the cost of choosing online services is so low, I’ve been able to assemble a group of online services whose editorial decisions collectively satisfy my own idiosyncratic preferences for online content.
This would be much harder to do if filtering were commonly done at the ISP level. I have only a handful of ISPs to choose from, and I’m only going to want to be a customer of one ISP at a time. If ISPs commonly filtered their customers’ traffic, it’s extremely unlikely that I could find an ISP whose filtering strategy exactly matched my preferences. So it’s in my interest, and in the interest of every other Comcast customer, for Comcast to pursue a lowest-common-denominator routing strategy: delivering all packets and letting end users decide what to do with them. When an ISP fails to deliver a packet a user has requested, I don’t think it’s at all unreasonable to describe that as “censorship” rather than “editorial discretion.”
Yeah, this is a tough one. My natural inclination is to say that ISPs should be able to run their businesses however they please; if customers want open network access, companies that provide it will gain market share and everything will work out.
But the ISP market isn’t exactly an openly competitive one. In many areas there is essentially a single option for high-speed internet. So it isn’t as though customers who are displeased with packet filtering could drop their ISP and sign up with a company that provides an open net. They’re stuck choosing between a packet-filtered service and no service at all.
I’m not sure what the answer is. I love me some interwebs, and absolutely hate the idea of filtering. But I’m loath to support the idea of requiring ISPs to manage their pipes in any specific way. So what do we do?
I’ve been meaning to read Tim’s Cato paper on forming an open internet without “neutrality” laws, but I haven’t gotten to it yet (only so much time can be wasted at work, after all). Is it reasonable to ask for a short summary to provide some context here?
Thanks Tim. Great analysis, though I’m also personally a little uncomfortable with the idea of considering servers to be vested with a right to editorial discretion. Doesn’t that undercut the DMCA Safe Harbor by a lot?
I think there has to be a good middle ground to establish when servers are really publishing something of their own, and when they serve as conduits or fora for the publications of others. Certainly, we should value and protect the editorial discretion of a newspaper over its web site and the articles it publishes. But we should not award “editorial discretion” to Craigslist over the ads that people post. An argument could be made that Google uses “editorial discretion” in the construction of its search results, and particularly in selling promotional space at the top of the results page; I’m not sure how that would come down.
In case law, thanks to Roommates.com in the 9th Circuit, even a minimal transformation of user input constitutes publication by the web site and defeats the statutory safe harbor (a decision with which I disagree). Google would certainly meet that standard, as would most servers… but not a router.
I went to Chris Yoo’s talk, and I asked him afterwards whether awarding “editorial discretion” to ISPs meant that he thought ISPs were “publishing” the entire Internet. He didn’t really respond.