James Bessen on the Great Stagnation

One of my favorite scholars is James Bessen, a lecturer at Boston University and a fellow at Harvard’s Berkman Center. A Harvard graduate, he founded a company that created one of the first desktop publishing systems and helped revolutionize the publishing industry. He sold that company in 1993 and has since become a self-trained academic economist.

Some of his most important work has been on patents. He wrote an excellent paper on software patents with future Nobel laureate Eric Maskin. And with Michael Meurer, he wrote Patent Failure, a fantastic book I have promoted at every opportunity.

Patents are one aspect of Bessen’s larger research agenda, which is focused on understanding the process of innovation and the policies that encourage it. To that end, he has been doing some in-depth research into the history of innovation. In this interview, he talks about his findings on the history of weaving technology, his own experiences in the desktop publishing industry, and what those experiences tell us about the alleged “great stagnation” of our own era. My questions are in bold, and his responses are in ordinary type. I’ll post a bit more of the interview tomorrow.

Timothy B. Lee: I think a lot of people have a sense that the rise of the Internet and the software industry are pretty exceptional. On the other hand, Tyler Cowen has argued that the changes of the last 40 years are actually less dramatic than those of his grandmother’s lifetime. Where do you come down on this question?

James Bessen: Cowen argues that we’ve picked all the low-hanging ideas and that we’re running out of good ideas. Other people have a sense that we’re sort of in the midst of a technological revolution. The central paradox is that for the past 3 decades, during the rise of the personal computer, wages, at least, have stagnated. There’s this sense that we’re doing all this innovation, we’re coming up with new technology, but we’re not seeing the economic fruits of it like we did in the past.

So maybe this is just frivolous technology, not “real” innovation. Grandma got indoor plumbing and we’re getting social networking.

Cowen trots out things like patent statistics, but he unfortunately gets it wrong. For starters, patents don’t measure innovation; they measure industrial strategies. In terms of the number of patents granted, even just to domestic innovators, it’s at an all-time high. If you weight it per capita, it’s a little bit less than it was in the late 19th century, but not by much. So patents are not a clear indication of a great stagnation.

But isn’t the relatively slow growth of wages and GDP evidence that we’re not producing as many good ideas as we used to?

People think these great inventors have these great ideas which then just go out and immediately revolutionize society and produce all of these benefits. So the fact that we’re seeing lots of technology, lots of innovation, and yet not seeing the economic benefit seems to say, “well, something’s wrong with those ideas.”

But if you look in the past, technology has never been about simple inventions revolutionizing society directly. It’s always been about them providing an opportunity, but that opportunity requires the development of all sorts of new knowledge by large numbers of people–people who are going to use it, people who are going to work with it, people who are going to build it, and that’s very often a process that takes decades.

You’ve studied 19th century weaving technology as an example of this process, right?


Wilkinson on Spending and Limited Government

Will Wilkinson couldn’t be more right about this:

I would argue that at least half of America’s military spending provides no benefit whatsoever to Americans outside the military-industrial welfare racket. But the other half may be doing some pretty important work. Rather than arguing dogmatically for a higher or lower level of total spending, it would be nice if we could focus a little and argue for and against the value of different kinds of spending, and then to focus a little more on the value of different ways of spending within budget categories. Some government spending gives folks stuff they want. Some government spending is worse than stealing money, throwing it in a hole and burning it. This is obvious when you think about it for a second, but it sometimes seems that partisan political discourse is based on the refusal to think about it at all. Conservatives with a libertarian edge often proceed as if government spending as such is an evil to resist, except when they’re defending a free-lunch tax cut (we’ll have more money to wrongly spend!) or the ongoing development of experimental underwater battle helicopters. And liberals with a social-democratic streak often operate within a framework of crypto-Keynesian mysticism according to which handing a dollar to government is like handing a fish to Jesus Christ, the ultimate multiplier of free lunches. When debate takes place on these silly terms, it seems almost impossible to articulate a vision of lean and limited government with principled, rock-solid support for spending on social insurance, education, basic research, essential infrastructure, and necessary defence, despite the likelihood that something along these lines is what most Americans want.

I’ve made a similar point in the past. And here’s more from Bruce Bartlett.


Coping with IP Address Scarcity

On Wednesday, I argued that collective action problems will delay the transition to IPv6 for many years, and possibly forever.

The obvious response is that the world doesn’t have a choice. The majority of the world’s population isn’t yet on the Internet, and in rich countries the number of devices per person continues to rise. So IPv4’s limit of 4 billion IP addresses has to give at some point, right?

Maybe, but it’s important to remember two points: first, IP addresses can be shared. Indeed, many of them already are. My household has two people and around a dozen devices that share our single IP address. If a decade from now we have 20 or 50 Internet-connected devices, there’s every reason to think that those devices, too, will be able to comfortably share a single IP address.

There is a theoretical limit. Network Address Translation, the technology I described in my last post, uses something called a “port number” to disambiguate among hosts on the private network, and the TCP and UDP protocols allow around 64,000 port numbers per address. Very roughly speaking, this means that a single IP address has a theoretical limit of 64,000 simultaneous connections, though a variety of issues make the practical limit lower. In any event, while the number of machines that can share an IP address is not infinite, there’s still a lot of headroom.
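To get a feel for how much headroom, here’s a back-of-envelope sketch. The port count comes from the protocol; the per-household connection count is an assumed figure for illustration, not a measurement:

```python
# Rough arithmetic on how far one shared IPv4 address stretches. The
# port count is the 16-bit port field in TCP and UDP (2**16 = 65,536,
# the "around 64,000" figure above); the per-household connection
# count is an assumption for illustration only.

PORTS_PER_ADDRESS = 2 ** 16            # usable port numbers, roughly
CONNECTIONS_PER_HOUSEHOLD = 200        # assumed simultaneous connections
                                       # for a household full of devices

households = PORTS_PER_ADDRESS // CONNECTIONS_PER_HOUSEHOLD
print(f"One public IPv4 address could, in principle, serve about "
      f"{households} households at once.")
```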

The second point is that currently-allocated IP addresses are not being used efficiently. This is an artifact of the era when addresses were plentiful. ISPs could request them essentially for free, and so they didn’t have much incentive to economize. As the official exhaustion point has drawn closer, ISPs have gradually begun to use them more efficiently, but there’s still a lot of room for improvement.

For example, in the Internet’s early days, a number of large organizations, including Apple, MIT, and Ford, were allocated “Class A” blocks of 16 million IP addresses apiece. Apple is a big company that’s probably using hundreds of thousands of IP addresses, but that still leaves plenty of spare capacity it could transfer to someone who needed it more. The reason it hasn’t done so, presumably, is that there has been no particular incentive. Renumbering a large company’s network is a pain, and as long as ISPs could get IP addresses for free, they had no reason to pay Apple for its trouble.

But this is where supply and demand come in. Now that IP addresses are no longer available for the asking, growing ISPs will be increasingly desperate to get their hands on more. Sooner or later, we should expect a market to develop. Apple’s not going to give someone its IP addresses just because they ask nicely, but if someone were willing to pay $10 or $100 per address, then it might be worth Apple’s trouble to go through the hassle of re-numbering its network and relinquishing its addresses.
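For a sense of the money at stake, here’s a rough calculation using the hypothetical price points above and the size of a legacy Class A block; both prices are illustrative assumptions, not market data:

```python
# Back-of-envelope value of a legacy "Class A" allocation at the
# hypothetical prices mentioned above. Both price points are
# illustrative, not market data.

CLASS_A_ADDRESSES = 2 ** 24            # a /8 block: about 16.7 million addresses

for price_per_address in (10, 100):    # hypothetical dollars per address
    total = CLASS_A_ADDRESSES * price_per_address
    print(f"At ${price_per_address} per address, a Class A block is "
          f"worth roughly ${total / 1e6:,.0f} million.")
```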

Network administrators hate thinking of addresses as a scarce resource to be conserved—both because it makes their job harder and because there’s no theoretical reason for addresses to be scarce. But using prices to allocate IPv4 addresses where they’re needed most can extend the useful life of IPv4 for a long time. And given the obstacles to the switch, this may be necessary.

If and when we do eventually move to IPv6, I suspect it will be because the price of IPv4 addresses has risen so high that switching to IPv6 becomes a cost-cutting move. ISPs that make the switch will still need IPv4 addresses to talk to other IPv4-connected hosts, but by routing some of their customers’ traffic over IPv6, they can reduce the number of IPv4 connections per customer and squeeze more customers onto each IPv4 address.

This is likely to happen first in places like China and India, which have more people and less money than the United States. Developing countries joined the Internet late, and as a consequence they already face a more serious shortage of IPv4 addresses.

To be clear, none of this is to say that it’s desirable to stay on IPv4. A shortage of network addresses introduces a number of performance problems and administrative headaches that would be avoided if we all moved to IPv6. But the fact that it’s desirable doesn’t mean it will happen any time soon.


Is IPv6 Doomed?

Today is World IPv6 Day. That’s the day a number of Internet heavyweights are testing out their readiness for the next version of IP, the networking protocol that serves as the foundation for the Internet.

The current version of the IP protocol, called IPv4, suffers from a serious weakness: it gives computers addresses that are only 32 bits long, which means that there are only 2^32, or around 4 billion, possible addresses. That seemed like a large amount when the Internet was just an academic research network back in the 1970s. But on a planet with 7 billion people, it’s beginning to feel a little cramped. IPv6 uses 128-bit addresses, and 2^128 is such an enormous number that the world will never again have to worry about running out of address space.
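The arithmetic is easy to check:

```python
# The arithmetic behind the address crunch: IPv4's 32-bit address space
# versus IPv6's 128-bit one, measured against the rough 2011 world
# population of 7 billion used above.

ipv4_space = 2 ** 32       # about 4.3 billion addresses
ipv6_space = 2 ** 128      # about 3.4e38 addresses
world_population = 7_000_000_000

print(f"IPv4 addresses per person: {ipv4_space / world_population:.2f}")
print(f"IPv6 addresses per person: {ipv6_space / world_population:.2e}")
```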

The IPv6 transition has been widely portrayed as inevitable, with some outlets falsely claiming that it will soon be impossible to add a new device to the Internet without an IPv6 address. But it’s not so obvious that this is true. There’s no doubt that it would be beneficial to move the Internet to IPv6, but the transition faces a massive collective action problem. Indeed, I’m starting to suspect that the collective action problem may be so severe that the transition might not happen at all.

To understand the problem, we have to first get into the technical weeds a bit. Network administrators have long used a technology called Network Address Translation to allow multiple client computers to share a single IP address. This is the technology that allows your WiFi router to share your single cable or DSL connection among all the devices in your house. As the name suggests, NAT works by assigning a “private” IP address to each device inside your network, and then “translating” between the public and private IP address spaces.
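Here’s a minimal sketch of the bookkeeping a NAT router performs; the addresses and port numbers are made up for illustration:

```python
# A minimal sketch of the bookkeeping a NAT router performs: each
# outbound connection from a private (address, port) pair gets a port
# on the single public address, and replies are mapped back. All the
# addresses and port numbers here are made up for illustration.

PUBLIC_IP = "203.0.113.7"              # the one address the whole network shares

nat_table = {}                         # (private_ip, private_port) -> public_port
next_port = 49152                      # start of the dynamic port range

def translate_outbound(private_ip, private_port):
    """Pick (or reuse) a public port for a connection from inside the network."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port):
    """Map a reply arriving on a public port back to the internal host."""
    for (priv_ip, priv_port), pub_port in nat_table.items():
        if pub_port == public_port:
            return priv_ip, priv_port
    return None                        # no mapping: unsolicited traffic is dropped

print(translate_outbound("192.168.1.10", 50000))   # ('203.0.113.7', 49152)
print(translate_outbound("192.168.1.11", 50000))   # ('203.0.113.7', 49153)
print(translate_inbound(49153))                    # ('192.168.1.11', 50000)
```

Note that the table only ever gets entries for connections that start on the inside, which is why a host behind a NAT can’t simply be reached by anyone who wants to contact it.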

Network administrators hate NAT because it breaks one of the Internet’s most elegant features: the ability of any two hosts on the network to connect to one another. But the ability to share IP addresses is so useful that the technology has proliferated. And most applications are now designed to gracefully handle working behind a NAT.

Which brings us to the IPv6 transition. The plan is for hosts to gradually transition from using IPv4 addresses to IPv6 addresses. The challenge, though, is that people on the IPv4 network want to be able to talk to people on the IPv6 network, and vice-versa. Getting from IPv6 to IPv4 is no problem; the IPv6 spec allocates a block of IPv6 addresses (of which there’s no shortage) to correspond to IPv4 addresses. But going the other way is hard, because the IPv4 protocol has no way of representing more than 2^32 distinct addresses.
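You can see the easy direction with Python’s standard ipaddress module, which understands the reserved IPv4-mapped block; the address below is just a documentation example, and this illustrates only the embedding, not any particular transition mechanism:

```python
# Every IPv4 address has a counterpart inside the reserved IPv4-mapped
# IPv6 block (::ffff:0:0/96). The address itself is just a documentation
# example.

import ipaddress

v4 = ipaddress.IPv4Address("192.0.2.1")

# Embed the 32-bit IPv4 address in the low bits of ::ffff:0:0/96.
mapped = ipaddress.IPv6Address("::ffff:" + str(v4))

print(mapped)                 # the same address, written in IPv6 notation
print(mapped.ipv4_mapped)     # 192.0.2.1 -- recovers the embedded IPv4 address
```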

There is a mind-boggling array of methods for dealing with this problem, and I couldn’t explain them all to you if I wanted to. But conceptually, there are two options. One is to use what amounts to a huge NAT to translate between IPv6 and IPv4. Every IPv6 host is given a corresponding IPv4 address (which it might share with many others). The IPv4 host communicates with this address, and there’s a gateway that automatically translates these packets between the IPv4 and IPv6 protocols. Under this approach, the IPv4 host can be blissfully unaware it’s talking to an IPv6 host, because all it knows about is the IPv4 address of the gateway.

The other approach is to have hosts be “dual stacked,” meaning that they’re simultaneously maintaining two different (possibly virtual) network connections with two different addresses. Dual-stacked hosts send IPv6 packets to other hosts on the new network, but fall back to IPv4 to communicate with hosts that are only on that network.
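In code, a dual-stacked client can lean on the resolver to hide most of the details. Here’s a minimal sketch of the general pattern, with a placeholder hostname, rather than any particular product’s implementation:

```python
# A minimal sketch of a dual-stacked client: ask the resolver for
# addresses in every family and try them in order. On a dual-stack
# host, IPv6 results are normally sorted first, with IPv4 as the
# fallback. "example.com" is a placeholder hostname.

import socket

def connect_dual_stack(host, port):
    last_error = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        try:
            sock.connect(sockaddr)         # first address that works wins
            return sock
        except OSError as err:
            sock.close()
            last_error = err               # fall through to the next address
    raise last_error or OSError("no usable addresses")

if __name__ == "__main__":
    conn = connect_dual_stack("example.com", 80)   # placeholder hostname
    print("connected over", "IPv6" if conn.family == socket.AF_INET6 else "IPv4")
    conn.close()
```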

Now, the key thing to realize about these methods is that under either approach, IPv4 hosts have zero incentive to switch to IPv6. There are enough IPv4-only hosts around that every IPv6 host will want to find a way to continue communicating on the IPv4 network. And that’s another way of saying that an IPv4 network that ignores the transition won’t face any negative consequences for doing so for a long time. Moreover, under either scheme, every IPv6 host still needs to have an IPv4 address, so switching to IPv6 doesn’t even do much to economize on scarce IPv4 addresses. True, under the NAT-based approach, multiple IPv6 hosts share a single IPv4 address. But most of those address savings can be achieved simply by adopting a regular old IPv4 NAT. If anything, adopting IPv6 just makes things unnecessarily complicated.

To put things another way, no IPv4 host will begin to experience negative consequences from dragging its feet until IPv6 hosts start dropping IPv4 support. And this will happen only after the vast majority of IPv4 hosts have migrated. Given that running two parallel networks is more expensive than running an IPv4 network only, the rational thing to do is to wait for other people to go first.

No one wants to say this because it really is in everyone’s interest for the transition to occur. But it’s not hard to read between the lines. Here, for example, a commentator says “You have to make the transition. It is better to do that sooner than later because it demonstrates that you are a modern, well organised company that is visible on the modern infrastructure of the internet.” This is complete nonsense. The overwhelming majority of users have no idea what IPv6 is and won’t even notice when a company they do business with makes the switch.

So we may be in for a decade-long period wherein everyone talks about the IPv6 transition but only a handful of large companies actually do anything about it. If I’m right, then one of two things will happen. One possibility is that networking elites will eventually realize that the gradual approach is hopeless and lobby for the stiffer medicine of a legislative mandate. The other possibility is that we’ll discover that IPv4 isn’t as bad as we thought, and learn to live with four billion addresses indefinitely. In my next post I’ll examine how we might do that.


Can’t Get Enough

A few people asked if there’s an RSS feed available for my Ars Technica articles. The answer from Ars seems to be no, but Dara Lind has kindly created one using Yahoo! Pipes. She’s also created an all-Tim feed that combines my Bottom-Up and Ars Technica writing. Thanks, Dara!


Google’s Scalable Culture

Way back in November I wrote about the connection between Apple’s beautiful user interfaces and its top-down corporate culture. At the end of that post, I promised to do a follow-up post focusing on Google’s corporate culture. That post has now been written, but because getting paid is better than not getting paid, I’ve done it as an Ars Technica story:

On Monday, Apple unveiled iCloud, a new service for remote storage of user data. Some people, including our own Jon Stokes, are skeptical of Apple’s chances of getting iCloud to work at scale. And history seems to be on their side. iCloud is at least Apple’s fourth attempt to create a viable cloud computing service. The previous incarnations included iTools in 2000, .Mac in 2002, and MobileMe in 2008. As Fortune wrote about MobileMe a few weeks ago, “MobileMe was a dud. Users complained about lost e-mails, and syncing was spotty at best.” iTools and .Mac were not exactly resounding successes either.

Apple’s perennial difficulty with creating scalable online services is not a coincidence. Apple has a corporate culture that emphasizes centralized, developer-led product development. This process has produced user-friendly devices that are the envy of the tech world. But developing fast, reliable online services requires a more decentralized, engineering-driven corporate culture like that found at Google.

Read the rest here. I plan to write about this more but I won’t make any spurious promises about exactly when the follow-up post will be written.


What’s the Right Way to do Email?

For the last 13 years, I’ve been using .edu email addresses. I like running a desktop email client, and although GMail now offers IMAP service, I’ve been trying to minimize my Google exposure. Universities seemed like an innocuous party with whom to trust my private communications.

Now that I’m likely done being a student, I’m thinking more seriously about what my grown-up email setup should look like. In particular, as I’ve written more about privacy law, I’ve become more acutely aware of the poor privacy protections American law affords to email. This is a particular concern given that I’m now working as a reporter. I try to avoid doing stories where revealing my sources could cause serious harms, but I think I still have an obligation to take reasonable precautions to secure my email.

So I have a question for readers: who do you rely on for email service? I know enough about mail server administration to know I shouldn’t try to run a mail server myself. And I’d rather not entrust my email to Google, Yahoo, or Microsoft. I’d be willing to pay $10-20/month for an email service that credibly promises high levels of reliability and confidentiality (though I’m not sure how I’d verify that the confidentiality promises were credible).

I’m also open to arguments that I’m being silly and should just join the GMail parade with the rest of the tech-savvy world.


Monopolies and the Free Market

I was fortunate to have three of today’s smartest libertarian tech policy scholars respond to Thursday’s post about spectrum policy. I was particularly interested in Adam Thierer’s thoughtful response:

In this case, the net result of your advocacy for a Lockean Proviso for spectrum would be a newly empowered bureaucratic regulatory regime imposing a top-down, command-and-control vision on wireless markets. Somehow I don’t think that is consistent with the traditional “bottom-up” thinking at work on this blog!

Preemptive, “Mother-May-I?” regulation isn’t the way to go. For better or worse, antitrust law will probably be with us forever, and if things go disastrously wrong in this market, presumably antitrust officials will intervene. But isn’t it better to let the experiments continue and see what the natural evolution of the marketplace brings us? The burden of proof is on you to show why 5 unelected bureaucrats should micro-manage markets and resources.

What I find interesting about this passage is the tension between the first and second paragraphs. Adam says that “if things go disastrously wrong in this market, presumably antitrust officials will intervene.” This appears to be a grudging admission that if the wireless market gets too concentrated, then the government ought to use its powers under antitrust law to prevent or reverse consolidation.

But why should the government wait until we get all the way to “disastrously wrong” before doing anything? Once you’ve conceded the point that excessive concentration is bad for consumers, and that antitrust law is an appropriate remedy for this harm, it’s not clear what the rationale is for only acting after disaster has struck. Breaking up a merged company or preventing harms via conduct remedies are much more laborious, top-down processes than blocking a merger before it happens.

This has long been a tension in the libertarian approach to antitrust law. Some libertarians, such as Ayn Rand here (around 9:00), argue that monopolies never arise in a free market. Adam himself contributed to this body of thought with this paper arguing that the Bell monopoly was the product of government regulations rather than free markets. I think this argument appeals to many libertarians because if the claim is true, then we don’t need to wrestle with the hard question of what to do when monopolies arise.

But the more I think about this line of reasoning, the more it seems like a non-sequitur. We don’t live in an ideal free market, and monopolies clearly do happen in the actual economy we’ve got. Maybe libertarians are right and they’re the product of government interference in the free market. Maybe we’re wrong and some monopolies would occur even in a perfect free market. But I don’t think this matters if the question is what to do when a market becomes highly concentrated.

Like most libertarians, I suspect that a more liberal spectrum regime would produce more competition in the wireless industry, rendering spectrum caps irrelevant. But if anything, this seems to me like an argument for, not against, blocking mergers that would take us farther from the outcome a true free market would produce. The federal government has a responsibility to clean up its own messes, as it did with the Ma Bell breakup in 1984 and as it hopefully will by blocking the AT&T/T-Mobile merger.


The Lockean Proviso in Spectrum Policy

I’ve been following the ongoing debate over the AT&T/T-Mobile merger with interest. As regular readers have probably guessed, I have a lot of sympathy for the arguments of merger opponents. Going from four national wireless carriers to three would represent a significant loss of consumer choice, and I think it would also make the wireless industry less hospitable to innovation. T-Mobile’s relatively open policies serve as an escape hatch for both consumers and handset vendors whose needs are not being met by the larger carriers.

But against these concerns, some of my fellow libertarians offer a compelling counterargument: let the free market work. It’s hard to predict how the mobile market will evolve, but there are strong economic and moral reasons to think that the unfettered free market will produce better outcomes than government meddling in the private sector.

As a libertarian, this is an argument I take very seriously, but I think it’s misguided here. To understand what’s wrong with it, we need to go all the way back to one of the founders of classical liberal thought, John Locke. In Chapter 5 of his Second Treatise of Government, Locke articulated a moral theory of property rights that continues to be influential to this day:

Though the earth, and all inferior creatures, be common to all men, yet every man has a property in his own person: this no body has any right to but himself. The labour of his body, and the work of his hands, we may say, are properly his. Whatsoever then he removes out of the state that nature hath provided, and left it in, he hath mixed his labour with, and joined to it something that is his own, and thereby makes it his property. It being by him removed from the common state nature hath placed it in, it hath by this labour something annexed to it, that excludes the common right of other men: for this labour being the unquestionable property of the labourer, no man but he can have a right to what that is once joined to, at least where there is enough, and as good, left in common for others.

In Anarchy, State, and Utopia, Robert Nozick dubbed this last caveat the Lockean Proviso. Taken literally, the proviso doesn’t make much sense. Surely, American property claims didn’t all become illegitimate the day the frontier closed and there was no longer land “left in common for others” to homestead.

Still, I think the proviso captures an important moral intuition. The legitimacy of a property rights system depends on it being open to everyone. True, we’ll never have a society in which everyone is a landholder. But our system of land ownership gives everyone the opportunity to purchase land at market rates. And the diversity of land titles means that those who don’t own land themselves have many landlords from which to choose.

And this is an important safeguard for liberty. In a society with highly concentrated land ownership, people with unusual or unpopular housing needs—interracial couples in the 1950s, gay couples in the 1980s, people who want to throw loud parties or own unusual pets—might have trouble finding housing that allowed them to live in the way they chose. But in a world with thousands of landlords, almost everyone can find somebody willing to rent to them, no matter how unusual their demands might be. The diversity of landholdings doesn’t just hold rents down, it has direct implications for individual liberty.

Land ownership has been so decentralized for so long that we aren’t in the habit of thinking of it as an issue of liberty. But it is. A real estate market in which three landlords owned all the land would be less free than a real estate market with 3,000 landlords. And the same is true of most other natural resources. The liquidity of commodity markets protects our right to do as we please with oil, gold, copper, and other natural resources we purchase. My gas station doesn’t try to dictate what brand of car I drive because it knows there are lots of other places I can buy gasoline.

Spectrum is different. If I want to use the electromagnetic spectrum in a novel way, at power levels above those allowed by the unlicensed bands, I need to buy (or more likely rent) spectrum from someone. And in the contemporary American market, there are only a handful of firms to choose from. These firms are vertically integrated and place tight restrictions on what kinds of signals can be transmitted.

In other words, there is not “enough, and as good” spectrum “left in common for others” to use for their own purposes. A handful of parties have claimed for themselves all the available spectrum and tightly constrain how it’s used.

It’s not obvious what should be done about this. Maybe the distinctive characteristics of spectrum make it impossible to allocate in a way that’s consistent with the Lockean Proviso. There are economies of scale in mobile service, and so I don’t expect we’ll ever have dozens—to say nothing of thousands—of wireless carriers.

But one thing the government can do is make sure the problem doesn’t get worse. As I mentioned in my last post, the Clinton FCC used to prohibit any single company from holding too large a share of the spectrum available for use by mobile phone companies. The Bush administration dropped this rule, and the Obama FCC has not resurrected it. I think they should.

The debate over the merger has largely focused on whether prices in the post-merger world will be higher than they are now. That’s a relevant question, but I don’t think it’s the most important one. The real danger of the merger is to the liberty that the Lockean Proviso is designed to protect. The merger will harm consumers who are no longer free to use wireless spectrum in ways allowed by T-Mobile but not AT&T. And it will harm future innovators who are unable to find a spectrum owner willing to allow their innovations on its network.


New Directions

For the last three years I’ve been a computer science grad student at Princeton’s Center for Information Technology Policy. I received a master’s degree late last year, and was working toward my PhD. CITP is full of talented people doing important work. I’ve learned a lot and become close friends with many of my colleagues. It’s been a particular privilege to study under my brilliant advisor, Ed Felten.

The function of a computer science PhD program is to prepare you for a career doing academic computer science research. I went to grad school expecting to excel at this, but in recent months it has become clear to me that my talents are better suited to writing about public policy. So I’ve asked Princeton for a one-year leave of absence. I’ll have an option to return to school in 2012, but I don’t expect to exercise it.

For you, my readers, this means you’ll be hearing from me more often. I’ve committed to spend the majority of my time writing for Ars Technica, the best technology news site on the web. I’ll link to some of those articles here, and I’m also hoping to do more original writing here at Bottom-up.

For the editors in the audience: my relationship with Ars leaves me some time to work on other projects. So if you’d like to pay me to write for you, please get in touch.
