Immigration and the “Rule of Law”

Was my last post, despite its claims to the contrary, a brief for open-borders zealotry? That seems to have been the reaction of a number of commenters and folks on Twitter. Josh Barro, for example, tweeted “I’m not sure there’s a right to live in America.”

A lot of people seem to believe that raising moral objections to an immigration enforcement program like E-Verify is tantamount to advocating the repeal of all immigration restrictions. The more I think about this proposition, the less sense it makes. To return to one of my favorite examples: speeding is illegal, but laws against speeding are routinely ignored. The government enforces those laws haphazardly; perhaps one in a thousand speeders on any given freeway is caught.

Now, if we really wanted to, we could get people to stop speeding. For example, we could install license-plate-reading cameras along the freeway at regular intervals, and automatically send tickets to anyone who moves from one camera to the next too quickly to have been following the speed limit.

Personally, I think this is a horrible idea. One reason is that this kind of massive surveillance infrastructure could be misused for other, more sinister purposes. Objecting to this particular enforcement mechanism on civil libertarian grounds isn’t the same as saying people have a “right to speed,” or that we should repeal all speeding laws. We have any number of laws (against jaywalking, against peer-to-peer file sharing, requiring us to pay taxes on goods we buy online, and so forth) that for a variety of practical reasons are hard to enforce, and we just live with the fact that they’re routinely broken.

The same point applies to immigration. Entering the country without government permission is illegal, and probably should be. The federal government has any number of powers to enforce the law, including refusing to let you cross the border (or leave the airport), investigating overstayed visas, limiting access to driver’s licenses, auditing employers, deporting people, and so forth. Objecting to any particular immigration enforcement mechanism isn’t the same thing as objecting to immigration regulations altogether. It’s perfectly coherent to say that the government should make a reasonable effort to prevent people from moving here illegally, but that certain particularly invasive enforcement methods (like employer verification) should be off the table. This is just how our legal system works.

But I also think speeding cameras are a bad idea because I sometimes think the posted speed limit is too low and I like the fact that I can ignore it and (mostly) not get caught. Similarly, our copyright laws are too strict; it’s a good thing that people can sometimes share content in circumstances that a strict reading of the law wouldn’t allow. In other words, the fact that people can mostly get away with breaking certain laws is a feature, not a bug, of our legal system. It provides a “safety valve” that ensures that stupid legislation doesn’t do too much damage.

The same point applies to immigration law. Obviously, we ought to enact sane immigration laws that make it easy for people like Jose Vargas to get a green card. But given that we haven’t done that, it’s a good thing—both for him and for the rest of us—that our enforcement system wasn’t effective enough to prevent him from taking a job here.

Again, there’s a huge double standard here. We American citizens take a strictly moralistic tone toward laws that we don’t personally have to follow. But “the rule of law” goes out the window when it comes to that pot you smoked in college, or the use taxes you haven’t paid on your Amazon purchases, or those pirated MP3s on your hard drive. When we’re talking about laws that actually affect us, we’re glad there’s some breathing room between the law on the books and what people actually get punished for.

We should display the same kind of magnanimity toward people who have to deal with our immigration system, which is much, much more screwed up than our copyright and traffic laws. Jose Vargas didn’t hurt anyone when he illegally entered the country as a teenager, just as Barack Obama didn’t hurt anyone when he illegally smoked pot in college. Law enforcement has, correctly, turned a blind eye to Obama’s youthful lawbreaking. It should do the same for Vargas and thousands of others like him.


Jose Antonio Vargas and the Politics of Compassion

Jose Antonio Vargas’s riveting story about life as an undocumented immigrant has been taking the Internet by storm. It powerfully illustrates the contrast between our nation’s professed ideals of equality and opportunity and the actual, shameful results of the laws we have allowed our government to enact.

As I’ve written before, I think the fundamental problem is that most American voters don’t understand our own immigration system. Though few undocumented immigrants succeed as spectacularly as Vargas has, there are millions of undocumented Americans who, like him, have been hampered in their pursuit of freedom and opportunity by our immigration laws. Many American voters angrily demand that immigrants “get in line” for their green cards, ignoring the fact that for many undocumented immigrants, there is no line that would get them a green card in the foreseeable future.

It’s interesting that Vargas mentions coming out of the closet, because I think many immigration advocates could learn from the success of the gay rights movement. Ignorant anti-immigrant beliefs are driven by the same kind of intellectual laziness people always display when thinking about people different from themselves. Go back to the 1970s and you’ll find millions of people who didn’t consider themselves bigots but harbored fundamentally bigoted beliefs about gay people. Go a little further back and you’ll find millions of whites who didn’t consider themselves racists but who would readily repeat crude stereotypes about blacks and tacitly supported America’s system of racial apartheid.

The same basic dynamic is at work in the modern immigration debate. Hardly anyone considers himself an anti-immigrant bigot, but a large majority of Americans tacitly endorse ridiculous, discriminatory immigration laws that make it virtually impossible for people like Jose Antonio Vargas to become full-fledged members of our society. They demand a level of law-abidingness from undocumented immigrants (even those who have been here since childhood) that they would never tolerate if applied to themselves.

Eradicating racism from polite society wasn’t simply a matter of evidence and argument. Rather, it was accomplished through a consciously ideological project to stigmatize bigotry. Making prejudicial comments about black people doesn’t just get you a strong counter-argument; it can lose you friends and even your job. A similar ideological project, typified by Seinfeld’s “not that there’s anything wrong with that,” is making rapid progress on the gay rights front.

People don’t really think about immigration debates in these terms. Even most liberals talk about immigration in terms of economic efficiency and citizen self-interest. During the 2007 immigration debate, my friend Ezra Klein actually complained that business interests were trying to weaken “employer verification” laws that would have made it even harder for people like Jose Antonio Vargas to find a job. Whatever else you might say about this position, it’s not one that treats undocumented immigrants as human beings deserving compassion and fair treatment.

More to the point, this kind of transactional politics—give us a guest worker program and we’ll support beefing up the surveillance state—isn’t going to work. Once established, an “employer verification” system will be with us forever, whereas the next Congress can easily scale back or cancel the guest worker program. At the same time, advocacy for such a bargain reinforces the basic restrictionist worldview that the interests of Americans and immigrants are fundamentally opposed.

What’s needed, instead, is a serious effort to get people to think of immigrants as human beings who deserve to be treated fairly. You don’t have to be an open-borders zealot to think that we’ve been terribly unfair to Vargas and Eric Balderas. We should change the law to allow people like them to earn a living, not because doing so would be good for the American economy (though it would) but because we’re a country founded on the proposition that all men are created equal.

Congress is poised to pass “E-Verify” legislation that will make life much worse for people like Vargas, as well as seriously inconveniencing plenty of citizens. Libertarians like my friend Jim Harper have been beating the drum about this issue for years, and the ACLU has also been active in opposing it. But it hasn’t gotten much attention on the left more generally. And the few critiques I’ve seen have focused either on the system’s poor accuracy or on the losses it would inflict on the agriculture sector.

These are both valid arguments. But I’d like to see more people—and especially more liberals—questioning the whole concept of constructing a massive surveillance system so the government can more effectively prevent people like Jose Antonio Vargas from earning a living.


Competition in the Banking Industry

As Erik Kain notes, the point I made yesterday isn’t limited to the telecommunications industry. It applies with equal force in banking.

A good example of this principle at work is Cato scholar Lawrence White’s 2004 call for greater regulation of Fannie Mae and Freddie Mac. White argued that full repeal of these companies’ various state-granted privileges would be the best way to deal with them. But given that this was unlikely to happen, he advocated more aggressive regulation as a second-best alternative. In retrospect, it’s obvious that this was the right position to take.

The same point applies to the “too big to fail” banks. In an ideal world, the bailouts wouldn’t have happened and most of these firms would be emerging from bankruptcy around now. But we don’t live in that world, and we’re not likely to get there before the next financial crisis.

Probably the most important moment in last year’s fight over banking regulation was the failure of the Kaufman-Brown amendment, which would have established a maximum size for banks in an effort to forestall future “too big to fail” problems. It was supported by a handful of savvy conservatives like Tim Carney, but most free-market conservatives and libertarians ignored it. I’m not going to point any fingers, since I was barely paying attention to the financial reform debate myself. But I do wish more supporters of the free market had followed Carney’s lead.


A Lost Consensus on Deregulation and Competition

Everyone knows that the contemporary telecom debate pits free-market opponents of regulation against progressives who want a more activist government. But if that’s what you’re expecting, then the 1970s and early 1980s look very puzzling. You had the Democratic Carter administration and left-wingers like Ted Kennedy pushing to deregulate major industries. And you had a government crusade to break up Ma Bell that was launched by the Republican Ford administration and completed by the conservative Reagan administration.

To understand what was going on, we have to look back even further in history. The transpartisan enthusiasm for these policies emerged as a reaction to an ideology that will seem alien to modern observers. For the majority of the 20th century, the reigning orthodoxy held that central planning was efficient and too much competition was destructive. Tim Wu called it Vailism, after the AT&T president who convinced the federal government to make AT&T a regulated monopoly. And it’s closely connected to James Scott’s concept of high modernism.

This ideology shaped much of Franklin D. Roosevelt’s New Deal. His National Recovery Administration had as its explicit goal to help industries form cartels so they could raise prices. That sounds insane to modern ears, but it wasn’t an aberration; the same attitude underpinned much mid-century policymaking. The Roosevelt administration created or expanded a number of government agencies, including the Federal Communications Commission, the Interstate Commerce Commission, and the Civil Aeronautics Board, which openly discouraged new entrants into the industries they regulated in order to prop up the incumbents’ profits.

The high modernist consensus against competition only started to unravel in the 1960s when economists like George Stigler documented how economically damaging these anticompetitive policies were. In the 1970s, their arguments started to make an impression inside the beltway. When Stephen Breyer took a break from teaching law at Harvard to work on the Hill for Ted Kennedy, he brought the emerging academic consensus with him.

Libertarians (including this one) like to point this out to liberals as a kind of “gotcha” story: even Ted Kennedy supported deregulation. But it’s important to remember that this coin has two sides. It’s equally true that even the Reagan administration supported the breakup of AT&T. And that’s not all.

Consider the Computer Inquiries, a series of regulations designed to prevent AT&T from dominating the nascent market for online services. It will surprise no one that this activist, big-government regulatory project was inaugurated by the Johnson administration. But it didn’t end with Johnson. The process produced three major orders over almost two decades, and there was remarkable continuity among the Johnson, Nixon, Ford, Carter, and Reagan administrations.

This consensus—repeal anticompetitive laws while actively protecting new entrants from the incumbents—survived the AT&T breakup. Indeed, the 1996 Telecommunications Act, which was passed by the conservative Gingrich Congress, is based on the same basic intellectual framework. It relaxed various restrictions on telephone and cable companies entering new markets, while simultaneously instituting an “unbundling” regime that forced incumbent telephone carriers to lease parts of their networks to competitors at regulated rates.

This might look like a philosophically confused mixture of deregulation and re-regulation, but I don’t think that’s how the legislation’s authors saw it. Rather, the unifying theme of the act was competition. Both the regulatory and deregulatory provisions of the bill were designed to increase the number of firms in various telecommunications markets.

That consensus has evaporated over the last 15 years, replaced by the pro- and anti-regulatory camps that are so familiar today. My sympathies are generally with the anti-regulatory camp, but I’m starting to think we’ve lost some important insights from that earlier consensus.

Once a “private” company becomes deeply intertwined with the state, it can be difficult to ever fully separate them. Formally repealing state privileges may not fully undo the damage if the incumbent continues to enjoy the fruits of past favoritism. And incumbents can leverage their intimate knowledge of the regulatory process—and decades of political capital accumulated from past interaction with regulators—to twist facially neutral regulations into weapons against their competitors.

This means that deregulated incumbents like AT&T and Verizon may never become fully private entities. And so a truly free-market agenda requires more than just reflexively opposing all government interventions in the telecommunications market. The government is not monolithic. Sometimes (as with the AT&T breakup and the Computer Inquiries) one part of the government works to check the harmful policies of another.

This principle is complicated, and reasonable people are going to disagree about how best to apply it. But one of the most obvious ways to check the power of incumbents is by making sure they have plenty of competitors. Competitive markets make regulators’ jobs easier because they force companies to serve consumers well even when regulators aren’t watching. So if regulators see a nice, clean opportunity to preserve or expand competition, they should probably take advantage of it.

The market and the political system are not separate, hermetically sealed spheres. It’s obvious that regulatory decisions shape the evolution of the market, but the evolution of the market also shapes the options available to regulators. Promoting competition today will strengthen the case for deregulation tomorrow. Policies that undermine competition today will strengthen political pressures for regulation tomorrow.

An earlier generation of free-market economists understood this. And one way or the other, it’s a lesson we’re going to learn again. I just hope we don’t have to learn it the hard way.


The HuffPo Sweatshop and the Decline of Labor

There’s been an interesting back-and-forth in the left-of-center blogosphere over efforts to organize a boycott of the Huffington Post for its practice of allowing volunteer bloggers to contribute to the site. The case for the boycott seems so obviously wrong that it’s hard to muster the energy to write a rebuttal, so if you’re interested you can read Matt and Julian’s responses.

But I think the dispute is an interesting window into the state of the contemporary labor movement. Private sector unionization has been dwindling for decades, and of course the labor movement isn’t happy about it. The HuffPo boycott gives us an interesting way of thinking about that decline and why it’s not likely to be reversed any time soon.

The high point of unionization occurred among factory workers in the early-to-mid 20th century. Unions thrived in highly concentrated industries like cars and steel, where the lack of competition produced generous profit margins. In this top-down environment, ordinary workers had very little leverage because they had few alternative places of employment. Unions offered workers collective leverage over issues like safety and work hours. And they also helped workers seize a share of the monopoly profits their employers enjoyed.

In recent decades, many of these oligopolies have been disrupted by a combination of technological progress and world trade. The steel mills and car plants that were generating obscene profits a half-century ago are now struggling to stay in business. Workers have increasingly shifted to more competitive industries where profit margins are smaller and a real exit option gives employees more bargaining power.

The publishing industry is an extreme example of the trend. The classic daily newspaper was a large, hierarchical company that often enjoyed a monopoly (or at least an oligopoly) in its local market. It used to provide hundreds of blue-collar jobs for typesetters, electricians, truck drivers, and so forth. In many cities, newspaper typesetters wouldn’t have had a lot of alternative places to work, and so the protections of a union contract were extremely valuable.

The publishing industry is changing in two ways. First, most of those blue-collar jobs are disappearing. A modern news organization is a team of reporters supported by editors, graphic designers, IT workers, ad salesmen, and other white-collar professionals. Second, very few news organizations are insulated from competition the way most newspapers were in 1980. This means that most news organizations couldn’t raise their workers’ wages very much even if they wanted to.

The Huffington Post is an extreme example of both trends. It’s an online-only publication consisting almost entirely of white-collar workers. And although AOL paid a lot of money for the company, this appears to have been more a reflection of its expected growth potential than actual profits. Given the intense competition in the online news business, those profits are far from guaranteed and may or may not last for very long.

The Newspaper Guild and the National Writers Union yearn for the return of the good old days, when their members’ employers enjoyed monopoly profits that they could be induced to share with employees. Apparently these organizations have convinced themselves that the AOL buyout of HuffPo is a sign that the glory days are coming back. But they’re just seeing what they want to see. The extremely low barriers to entry in Internet news mean that the industry is unlikely to resemble the 20th century newspaper business any time soon.

And this means that unions don’t have much to offer 21st century writers. The problem we writers face isn’t that our employers are raking in obscene monopoly profits and not sharing them with us. The problem is that there are far more people who want to write than there are publications able to pay writers. If publishers do start to rake in obscene profits, it’s likely that they’ll plow some of those profits back into their businesses by hiring more writers. But forcing one particular publication to stop running volunteer content does nothing to change the dynamics of the writing market. The Newspaper Guild and the National Writers Union are basing their actions on an economic model that’s decades out of date.


Bessen on Measuring Software-driven Growth

In the conclusion of my interview with James Bessen we talk about the difficulty of measuring software-driven economic growth, a topic I’ve written about before.

Timothy B. Lee: How should we think about the value that consumers get from the rapid technological changes you’ve described?

James Bessen: Cowen argues that innovation today isn’t the same quality as it was in his grandmother’s day. I think you have to be very careful about that because while it’s true that innovation today tends to be qualitatively different in a couple of ways, that doesn’t necessarily mean it’s any less significant.

The innovations of a hundred years ago were a lot about the mass production of standardized goods. Think about the automobile or electrical appliances: these were things that affected most people. The automobile was something that most people eventually used. Electrification was also something most people used. It affected most lives. That was because these were standardized products produced for a mass market.

In contrast, information technology is about meeting custom needs. It allows things to be tailored.

What’s an example of that?

One of the things the desktop publishing revolution did was allow tailored advertising. Software allowed A&P to target advertisements very finely. They could work from a database, track the items, and automatically modify the flyers that were going out to each neighborhood—geared to the demographics of those neighborhoods and the particular things they were selling in those stores. It all worked in a very efficient way. It would’ve been possible to do that before, but it would’ve been way too costly.

This is an example of flexible manufacturing, which is also useful for producing goods that are tailored to people’s needs.

Today’s supermarkets carry 50 times as many items as the grocery store of 80 years ago. That’s made possible by flexible manufacturing, computerized logistics, and inventory control. Supermarkets have computerized systems for keeping track of what’s being sold at the register, what’s being shipped from the warehouse, and so forth.

Last time I was in the supermarket, I counted and I think there were 12 kinds of apples, 10 different tomatoes, etc. Which I’m guessing would not have been true in the 1970s.

I remember when I was in college, I went to dinner at a friend’s house. They were Italian, and we had to go down to the North End in Boston to get Italian sausage with fennel seeds. Now you can find that all over the country in all sorts of neighborhoods.

Obviously somebody is buying these things. None of them affects most people the way the automobile did. But most people are benefiting from some of them. So it’s a qualitatively different technology, and its impact is much harder to measure across all of these products. It’s very hard to judge the relative quality of those 12 types of apples, or 20 types of olive oil, or whatever it is.

This must make it hard for the statisticians at the Bureau of Labor Statistics to compute inflation rates.

It’s an impossible problem to measure those things. Quality change is difficult enough to measure for something like an automobile or a computer. Here you’re talking about two orders of magnitude more products. So in a sense, innovation and technological change have a very different feel today than they did in Grandma’s day, but that doesn’t mean that they’re anything less.


James Bessen on the Great Stagnation

One of my favorite scholars is James Bessen, a lecturer at Boston University and a fellow at Harvard’s Berkman Center. A Harvard graduate, he founded a company that created one of the first desktop publishing systems and helped revolutionize the publishing industry. He sold that company in 1993 and has since become a self-trained academic economist.

Some of his most important work has been on patents. He wrote an excellent paper on software patents with future Nobel laureate Eric Maskin. And with Michael Meurer, he wrote Patent Failure, a fantastic book I have promoted at every opportunity.

Patents are one aspect of Bessen’s larger research agenda, which is focused on understanding the process of innovation and the policies that encourage it. To that end, he has been doing some in-depth research into the history of innovation. In this interview, he talks about his findings on the history of weaving technology, his own experiences in the desktop publishing industry, and what those experiences tell us about the alleged “great stagnation” of our own era. My questions are in bold, and his responses are in ordinary type. I’ll post a bit more of the interview tomorrow.

Timothy B. Lee: I think a lot of people have a sense that the rise of the Internet and the software industry are pretty exceptional. On the other hand, Tyler Cowen has argued that the changes of the last 40 years are actually less dramatic than those of his grandmother’s lifetime. Where do you come down on this question?

James Bessen: Cowen argues that we’ve picked all the low-hanging ideas and that we’re running out of good ideas. Other people have a sense that we’re sort of in the midst of a technological revolution. The central paradox is that for the past 3 decades, during the rise of the personal computer, wages, at least, have stagnated. There’s this sense that we’re doing all this innovation, we’re coming up with new technology, but we’re not seeing the economic fruits of it like we did in the past.

So maybe this is just frivolous technology, not “real” innovation. Grandma got indoor plumbing and we’re getting social networking.

Cowen trots out things like patent statistics, but he unfortunately gets it wrong. For starters, patents are not measuring innovation, they’re measuring industrial strategies. In terms of the number of patents granted, even just to domestic innovators, it’s at an all-time high. If you weight it per capita, it’s a little bit less than it was in the late 19th century, but not very much. So patents are not a clear indication of a great stagnation.

But isn’t the relatively slow growth of wages and GDP evidence that we’re not producing as many good ideas as we used to?

People think these great inventors have these great ideas which then just go out and immediately revolutionize society and produce all of these benefits. So the fact that we’re seeing lots of technology, lots of innovation, and yet not seeing the economic benefit seems to say, “well, something’s wrong with those ideas.”

But if you look at the past, technology has never been about simple inventions revolutionizing society directly. It’s always been about them providing an opportunity, but that opportunity requires the development of all sorts of new knowledge by large numbers of people: people who are going to use it, people who are going to work with it, people who are going to build it. And that’s very often a process that takes decades.

You’ve studied 19th century weaving technology as an example of this process, right?


Wilkinson on Spending and Limited Government

Will Wilkinson couldn’t be more right about this:

I would argue that at least half of America’s military spending provides no benefit whatsoever to Americans outside the military-industrial welfare racket. But the other half may be doing some pretty important work. Rather than arguing dogmatically for a higher or lower level of total spending, it would be nice if we could focus a little and argue for and against the value of different kinds of spending, and then to focus a little more on the value of different ways of spending within budget categories. Some government spending gives folks stuff they want. Some government spending is worse than stealing money, throwing it in a hole and burning it. This is obvious when you think about it for a second, but it sometimes seems that partisan political discourse is based on the refusal to think about it at all. Conservatives with a libertarian edge often proceed as if government spending as such is an evil to resist, except when they’re defending a free-lunch tax cut (we’ll have more money to wrongly spend!) or the ongoing development of experimental underwater battle helicopters. And liberals with a social-democratic streak often operate within a framework of crypto-Keynesian mysticism according to which handing a dollar to government is like handing a fish to Jesus Christ, the ultimate multiplier of free lunches. When debate takes place on these silly terms, it seems almost impossible to articulate a vision of lean and limited government with principled, rock-solid support for spending on social insurance, education, basic research, essential infrastructure, and necessary defence, despite the likelihood that something along these lines is what most Americans want.

I’ve made a similar point in the past. And here’s more from Bruce Bartlett.


Coping with IP Address Scarcity

On Wednesday, I argued that collective action problems will delay the transition to IPv6 for many years, and possibly forever.

The obvious response is that the world doesn’t have a choice. The majority of the world’s population isn’t yet on the Internet, and in rich countries the number of devices per person continues to rise. So IPv4’s limit of 4 billion IP addresses has to give at some point, right?

Maybe, but it’s important to remember two points: first, IP addresses can be shared. Indeed, many of them already are. My household has two people and around a dozen devices that share our single IP address. If a decade from now we have 20 or 50 Internet-connected devices, there’s every reason to think that those devices, too, will be able to comfortably share a single IP address.

There is a theoretical limit. Network Address Translation, the technology I described in my last post, uses something called a “port number” to disambiguate among hosts on the private network, and port numbers are 16 bits long, which allows around 65,000 ports per address. Very roughly speaking, this means that a single IP address has a theoretical limit of around 65,000 simultaneous connections, though a variety of issues make the practical limit lower. In any event, while the number of machines that can share an IP address is not infinite, there’s still a lot of headroom.
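
To make that arithmetic concrete, here’s a back-of-the-envelope sketch in Python. The household and per-device connection counts are hypothetical numbers invented for illustration:

```python
# Port numbers are 16 bits, so one shared IPv4 address can distinguish
# at most 2**16 concurrent NAT mappings (fewer in practice, since
# low-numbered and reserved ports are typically off limits).
total_ports = 2 ** 16            # 65,536 possible port numbers
usable = total_ports - 1024      # skip the well-known ports

devices = 50                     # hypothetical future household
conns_per_device = 100           # a busy browser alone can open dozens
needed = devices * conns_per_device

print(f"usable ports: {usable}")            # 64512
print(f"connections needed: {needed}")      # 5000
print(f"headroom: {usable / needed:.1f}x")  # ~12.9x
```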

The second point is that currently-allocated IP addresses are not being used efficiently. This is an artifact of the era when addresses were plentiful. ISPs could request them essentially for free, and so they didn’t have much incentive to economize. As the official exhaustion point has drawn closer, ISPs have gradually begun to use them more efficiently, but there’s still a lot of room for improvement.

For example, in the Internet’s early days, a number of large organizations, including Apple, MIT, and Ford, were allocated “Class A” blocks of 16 million IP addresses apiece. Apple is a big company that’s probably using hundreds of thousands of IP addresses, but that still leaves plenty of spare capacity it could transfer to someone who needed it more. The reason it hasn’t done so, presumably, is that there has been no particular incentive. Renumbering a large company’s network is a pain, and as long as ISPs could get IP addresses for free, they had no reason to pay Apple for its trouble.

But this is where supply and demand come in. Now that IP addresses are no longer available for the asking, growing ISPs will be increasingly desperate to get their hands on more. Sooner or later, we should expect a market to develop. Apple’s not going to give someone its IP addresses just because they ask nicely, but if someone were willing to pay $10 or $100 per address, then it might be worth Apple’s trouble to go through the hassle of re-numbering its network and relinquishing its addresses.
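
For a sense of the sums involved, here’s the same back-of-the-envelope math in Python, using the hypothetical $10 and $100 figures above and a made-up estimate of Apple’s own usage:

```python
# A legacy "Class A" (/8) allocation contains 2**24 addresses.
block_size = 2 ** 24     # 16,777,216 addresses
in_use = 500_000         # hypothetical: addresses Apple keeps for itself
spare = block_size - in_use

for price in (10, 100):  # hypothetical dollars per address
    print(f"at ${price}/address, the spare capacity is worth ${spare * price:,}")
# at $10/address, the spare capacity is worth $162,772,160
# at $100/address, the spare capacity is worth $1,627,721,600
```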

Network administrators hate thinking of addresses as a scarce resource to be conserved—both because it makes their job harder and because there’s no theoretical reason for addresses to be scarce. But using prices to allocate IPv4 addresses where they’re needed most can extend the useful life of IPv4 for a long time. And given the obstacles to the switch, this may be necessary.

If and when we do eventually move to IPv6, I suspect it will be because the price of IPv4 addresses has risen so high that switching to IPv6 becomes a cost-cutting move. ISPs that make the switch will still need IPv4 addresses to talk to other IPv4-connected hosts, but by routing some of their customers’ traffic over IPv6, they can reduce the number of IPv4 connections per customer and squeeze more customers onto each IPv4 address.

This is likely to happen first someplace like China or India, which have more people and less money than the US. Developing countries joined the Internet late, and as a consequence they already face a more serious shortage of IPv4 addresses.

To be clear, none of this is to say that it’s desirable to stay on IPv4. A shortage of network addresses introduces a number of performance problems and administrative headaches that would be avoided if we all moved to IPv6. But the fact that it’s desirable doesn’t mean it will happen any time soon.


Is IPv6 Doomed?

Today is World IPv6 Day. That’s the day a number of Internet heavyweights are testing out their readiness for the next version of IP, the networking protocol that serves as the foundation for the Internet.

The current version of the IP protocol, called IPv4, suffers from a serious weakness: it gives computers addresses that are only 32 bits long, which means that there are only 2^32, or around 4 billion, possible addresses. That seemed like a large amount when the Internet was just an academic research network back in the 1970s. But on a planet with 7 billion people, it’s beginning to feel a little cramped. IPv6 uses 128-bit addresses, and 2^128 is such an enormous number that the world will never again have to worry about running out of address space.
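
A quick Python calculation shows just how lopsided the two address spaces are:

```python
# The address-space arithmetic from the paragraph above.
print(f"IPv4: 2**32  = {2**32:,} addresses")   # 4,294,967,296
print(f"IPv6: 2**128 = {2**128:,} addresses")  # about 3.4 x 10**38
print(f"IPv6 addresses per person (7 billion people): {2**128 / 7e9:.1e}")
```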

The IPv6 transition has been widely portrayed as inevitable, with some outlets falsely claiming that it will soon be impossible to add a new device to the Internet without an IPv6 address. But it’s not so obvious that this is true. There’s no doubt that it would be beneficial to move the Internet to IPv6, but the transition faces a massive collective action problem. Indeed, I’m starting to suspect that the collective action problem may be so severe that the transition might not happen at all.

To understand the problem, we have to first get into the technical weeds a bit. Network administrators have long used a technology called Network Address Translation to allow multiple client computers to share a single IP address. This is the technology that allows your WiFi router to share your single cable or DSL connection among all the devices in your house. As the name suggests, NAT works by assigning a “private” IP address to each device inside your network, and then “translating” between the public and private IP address spaces.
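
Here’s a toy model of the bookkeeping a NAT does, with made-up private and public example addresses. A real NAT runs in router firmware and also rewrites packet checksums and expires idle mappings, but conceptually the translation table looks like this:

```python
# Toy NAT: many private (address, port) pairs share one public address.
PUBLIC_IP = "203.0.113.7"  # hypothetical: the household's single public address

nat_table = {}             # (private_ip, private_port) -> public port
next_port = 50000          # pool of public ports the NAT hands out

def outbound(private_ip, private_port):
    """Rewrite an outgoing packet's source to the shared public address."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

def inbound(public_port):
    """Map a reply arriving at the public address back to the right device."""
    for (priv_ip, priv_port), port in nat_table.items():
        if port == public_port:
            return priv_ip, priv_port
    return None  # unsolicited packet: no mapping, nowhere to deliver it

print(outbound("192.168.1.10", 44321))  # ('203.0.113.7', 50000)
print(outbound("192.168.1.11", 44321))  # ('203.0.113.7', 50001)
print(inbound(50001))                   # ('192.168.1.11', 44321)
```

The dead end in inbound is the crux: an unsolicited packet from the outside matches no mapping and has nowhere to go, which is exactly the loss of any-to-any connectivity described next.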

Network administrators hate NAT because it breaks one of the Internet’s most elegant features: the ability of any two hosts on the network to connect to one another. But the ability to share IP addresses is so useful that the technology has proliferated. And most applications are now designed to work gracefully behind a NAT.

Which brings us to the IPv6 transition. The plan is for hosts to gradually transition from using IPv4 addresses to IPv6 addresses. The challenge, though, is that people on the IPv4 network want to be able to talk to people on the IPv6 network, and vice versa. Getting from IPv6 to IPv4 is no problem; the IPv6 spec allocates a block of IPv6 addresses (of which there’s no shortage) to correspond to IPv4 addresses. But going the other way is hard, because the IPv4 protocol has no way of addressing more than 2^32 distinct hosts.
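
That easy direction can be seen with Python’s standard ipaddress module: the spec reserves the block ::ffff:0:0/96 for “IPv4-mapped” IPv6 addresses, so every IPv4 address has a well-defined IPv6 spelling:

```python
import ipaddress

v4 = ipaddress.IPv4Address("198.51.100.42")  # an arbitrary example address
v6 = ipaddress.IPv6Address(f"::ffff:{v4}")   # its IPv4-mapped IPv6 form

print(v6)              # ::ffff:c633:642a
print(v6.ipv4_mapped)  # 198.51.100.42 -- recovers the embedded IPv4 address
```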

There is a mind-boggling array of methods for dealing with this problem, and I couldn’t explain them all to you if I wanted to. But conceptually, there are two options. One is to use what amounts to a huge NAT to translate between IPv6 and IPv4. Every IPv6 host is given a corresponding IPv4 address (which it might share with many others). The IPv4 host communicates with this address, and there’s a gateway that automatically translates these packets between the IPv4 and IPv6 protocols. Under this approach, the IPv4 host can be blissfully unaware it’s talking to an IPv6 host, because all it knows about is the IPv4 address of the gateway.

The other approach is to have hosts be “dual stacked,” meaning that they’re simultaneously maintaining two different (possibly virtual) network connections with two different addresses. Dual-stacked hosts send IPv6 packets to other hosts on the new network, but fall back to IPv4 to communicate with hosts that are only on that network.
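
Here is a minimal sketch of that fallback logic in Python (real applications typically use a fancier concurrent version of this, standardized as “Happy Eyeballs”):

```python
import socket

def connect_dual_stack(host, port):
    """Try IPv6 first, then fall back to IPv4."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Sort IPv6 results ahead of IPv4 ones.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    last_error = OSError(f"no addresses found for {host}")
    for family, socktype, proto, _, sockaddr in infos:
        sock = socket.socket(family, socktype, proto)
        try:
            sock.connect(sockaddr)
            return sock  # connected over whichever protocol worked
        except OSError as e:
            sock.close()
            last_error = e
    raise last_error

# Usage: sock = connect_dual_stack("example.com", 80)
```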

Now, the key thing to realize about these methods is that under either approach, IPv4 hosts have zero incentive to switch to IPv6. There are enough IPv4-only hosts around that every IPv6 host will want to find a way to continue communicating on the IPv4 network. And that’s another way of saying that an IPv4 network that ignores the transition won’t face any negative consequences for doing so for a long time. Moreover, under either scheme, every IPv6 host still needs to have an IPv4 address, so switching to IPv6 doesn’t even do much to economize on scarce IPv4 addresses. True, under the NAT-based approach, multiple IPv6 hosts share a single IPv4 address. But most of those address savings can be achieved simply by adopting a regular old IPv4 NAT. If anything, adopting IPv6 just makes things unnecessarily complicated.

To put things another way, no IPv4 host will begin to experience negative consequences from dragging its feet until IPv6 hosts start dropping IPv4 support. And this will happen only after the vast majority of IPv4 hosts have migrated. Given that running two parallel networks is more expensive than running an IPv4-only network, the rational thing to do is to wait for other people to go first.

No one wants to say this because it really is in everyone’s interest for the transition to occur. But it’s not hard to read between the lines. Here, for example, a commentator says “You have to make the transition. It is better to do that sooner than later because it demonstrates that you are a modern, well organised company that is visible on the modern infrastructure of the internet.” This is complete nonsense. The overwhelming majority of users have no idea what IPv6 is and won’t even notice when a company they do business with makes the switch.

So we may be in for a decade-long period wherein everyone talks about the IPv6 transition but only a handful of large companies actually do anything about it. If I’m right, then one of two things will happen. One possibility is that networking elites will eventually realize that the gradual approach is hopeless and lobby for the stiffer medicine of a legislative mandate. The other possibility is that we’ll discover that IPv4 isn’t as bad as we thought, and learn to live with four billion addresses indefinitely. In my next post I’ll examine how we might do that.
