Michael Heller vs. Richard Epstein on the Gridlock Economy

On Friday Jerry Brito’s new tech policy program at Mercatus sponsored a debate between Michael Heller, author of The Gridlock Economy, and Richard Epstein. The argument of Heller’s book, which he presented in his talk, is that poorly-designed property regimes can lead to situations where a single resource is subject to numerous overlapping claims, making it extremely costly to get permission to do anything useful with it. For example, Heller talks about Eyes on the Prize, a famous documentary on the civil rights movement. Eyes on the Prize, like most documentaries, includes excerpts of hundreds of copyrighted works—music, speeches, video snippets, graphics, etc. The creators of the documentary secured the rights to those works for the initial release, but the license agreements they negotiated didn’t apply to new media. As a consequence, it would be virtually impossible to release Eyes on the Prize on DVD or put it on the Internet, because tracking down hundreds of now-fragmented rights would be prohibitively expensive.

Heller argues that this pattern is repeated elsewhere in the economy. I’ve written before about the way the patent system impedes progress in software by subjecting potential innovators to a large number of overlapping patent claims. And a similar problem has been created by the growth of CDOs in the mortgage market. Traditionally, a borrower who got behind on his payments could call up his bank and re-negotiate the terms of his mortgage. But if your mortgage has been bundled into a CDO, the rights to your mortgage are going to be fragmented among dozens or even hundreds of parties. There’s no one with the authority to change the mortgage terms. As a result, homeowners wind up in foreclosure even if it would have been better for both borrower and lender to re-negotiate the terms of the mortgage.
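
A back-of-the-envelope calculation (my own illustration, not a figure from Heller’s book) shows why fragmentation is so corrosive: even when each individual rightsholder is very likely to say yes, the odds of clearing every claim collapse as the number of claims grows.

```python
# Toy model of Heller's anticommons: clearing a project requires the
# consent of every one of n independent rightsholders, each of whom
# agrees with probability p.
def clearance_probability(p: float, n: int) -> float:
    """Probability that all n rights clear when each clears with probability p."""
    return p ** n

# A documentary with 100 licensed excerpts, each 95% likely to clear:
print(f"{clearance_probability(0.95, 100):.4f}")  # roughly 0.0059
```

A 95% success rate per negotiation sounds comfortable, but compounded across a hundred rightsholders it leaves well under a one percent chance of clearing the whole film, before counting the time and legal fees each negotiation costs.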

Heller’s argument strikes me as clear, correct, and totally uncontroversial from a libertarian perspective. Heller’s book is in the same vein as Hernando de Soto’s work on the early history of American property rights, which also focused on the ways poorly-designed property rights impeded progress. Creating a well-functioning property rights system is difficult, and a property regime can err in the direction of creating too many property-like claims just as it can err in recognizing too few.

Libertarian legal theorist Richard Epstein had a puzzling response: he spent the bulk of his talk arguing that big government creates more problems than gridlock. In a wide-ranging discussion, he talked about collective bargaining, minimum wage laws, the Food and Drug Administration, and land use regulations, showing in each case how excessive regulation is responsible for a variety of ills. I’m sympathetic to Epstein’s analysis in all of these cases, but I was confused about why he was making these points in a debate with Heller, who hadn’t said anything in defense of big government in his talk.

I think Heller responded in exactly the right way: he endorsed most of what Epstein said and then pointed out that problems can (and usually do) have more than one cause. Gridlock and big government are not mutually exclusive explanations for the world’s problems. And he emphasized that his focus was on analysis rather than advocacy: that his goal was less to push specific solutions than to convince people that there was a problem.

Epstein seems to have fallen victim to “if you’ve got a hammer, everything looks like a nail” syndrome. His life’s work has been focused on the debate between free markets and big government. In those debates, he has consistently (and brilliantly) defended the pro-property side. And so when he’s approaching a new problem, his instinct is to side with whichever perspective seems to be advocating more property rights, and to assume that the other side must be defenders of big government. This framework doesn’t leave much room for an argument like Heller’s, which tries to draw nuanced distinctions among possible property-like regimes. Heller isn’t “anti-property” in general, and he doesn’t necessarily advocate new state interventions as the solution to the problems he identifies. But criticizing big government is Epstein’s strong suit, so he plays it even though it doesn’t seem terribly relevant.

As we saw last week, the danger of depending too much on a high-level intellectual framework is that you won’t notice when the facts on the ground have changed in ways that don’t fit with your theory. Building an operating system or a documentary film isn’t like building a skyscraper, and this fact means that the analytical tools you honed on problems in real property law may not work as well for patent or copyright issues. But you’ll only notice if you’ve invested the time to understand how the affected industries work in some detail.

Heller was the guest on a recent episode of David Levine’s excellent “Hearsay Culture” radio show/podcast.

Posted in Uncategorized | Leave a comment

Privacy Norms Should Come Before Privacy Laws

My colleagues Jim Harper and Julian Sanchez have been having a friendly debate over privacy regulations, which Julian summarizes over at TLF. Jim eschews his customary blogging parsimony in favor of a lengthy treatise on online privacy. The heart of their disagreement is captured in this paragraph from Julian’s post:

Evolutionary mechanisms are great, but they’re also slow, incremental, and in the case of the common law typically parasitic on the parallel evolution of broader social norms and expectations. That makes it an uneasy fit with novel and rapidly changing technological platforms for interaction. The tradeoff is that, while it’s slow, the discovery process tends to settle on efficient rules. But sometimes having a clear rule is actually more important—maybe significantly more important—than getting the rule just right. These features seem to me to weigh in favor of allowing Congress, not to say what standards of privacy must look like, but to step in and lay down public default rules that provide a stable basis for informed consumers and sellers to reach their own mutually beneficial agreements.

I think Jim’s response gets this exactly right:

As so many have before him, Julian asks for an “ordinary-language explanation” of what is going on. But we don’t yet have a reliable and well-understood language for describing all the things that happen with data. Much less do we know what features of data use are salient to consumers. Many blame corporate obfuscation for long, confusing privacy policies, but just try describing what happens to information about you when you walk down the street and the difficulty of writing privacy policies becomes clear.

To put this slightly differently, I think Julian is wrong to think that common law is somehow special in its dependence on “social norms and expectations.” In reality, all laws are dependent on underlying social norms. A law that’s out of touch with them will be ignored and evaded regardless of how it emerged.

Julian suggests that the common law process is a poor fit for “novel and rapidly changing technological platforms,” but I think just the opposite is true. He wants “ordinary-language explanations” of what websites do with personal data, but I think he’s underestimating how difficult that is. Users’ expectations—and their “ordinary-language” vocabulary for talking about those expectations—evolve in parallel with the technology itself. Predicting the evolution of language or culture relating to a technology is no easier than predicting the evolution of the technology itself. The typical policymaker in 1994 could no more have predicted the privacy vocabulary we use today than he could have predicted the emergence of Facebook or Gmail themselves.

If you doubt that developing a vocabulary for privacy is difficult, I encourage you to read about the history of the P3P project. I’m old enough to remember when P3P was an up-and-coming standard with broad industry and academic support. The idea was to agree on a standard, machine-readable format for describing websites’ privacy policies. Once this was accomplished, browser manufacturers would be able to add mechanisms to automatically notify users when they visited a site whose privacy policy didn’t live up to the user’s standards.

P3P is now effectively dead. Its failure is a complex, multi-faceted story, but one of the most important factors was that encoding meaningful privacy disclosures in precise, machine-readable formats turned out to be a lot harder than people expected. There is an almost unlimited number of possible permutations for the ways a website might use a customer’s data, and describing them in a finite number of standard categories necessarily meant lumping together a lot of different behaviors under the same category. In essence, the P3P team was trying to drain all the nuance out of what was still a complex and rapidly-changing set of social norms. It didn’t end well.
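
The mechanism P3P aimed at is easy to sketch (this is a simplified toy model with invented category names, not the actual P3P vocabulary): both the site’s policy and the user’s preferences are expressed in a fixed set of categories, and the browser flags any practice the user hasn’t accepted.

```python
# Toy model of machine-readable privacy-policy matching in the spirit of P3P.
# Category names here are invented for illustration.
SITE_POLICY = {
    "purpose": {"site-operation", "ad-targeting"},
    "recipients": {"ourselves", "ad-networks"},
    "retention": "indefinite",
}

USER_PREFS = {
    "purpose": {"site-operation", "personalization"},  # purposes the user accepts
    "recipients": {"ourselves"},                       # recipients the user accepts
    "retention": {"session", "one-year"},              # retention periods the user accepts
}

def policy_violations(policy, prefs):
    """Return the aspects of a site's policy the user has not accepted."""
    violations = []
    for key in ("purpose", "recipients"):
        disallowed = policy[key] - prefs[key]
        if disallowed:
            violations.append((key, disallowed))
    if policy["retention"] not in prefs["retention"]:
        violations.append(("retention", {policy["retention"]}))
    return violations

print(policy_violations(SITE_POLICY, USER_PREFS))
```

The comparison itself is trivial; everything hinges on the category vocabulary being rich enough. When a single label like “ad-targeting” has to cover dozens of distinct real-world behaviors, the match result stops meaning much, which is roughly the problem described above.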

A federal disclosure mandate isn’t as bad an idea as comprehensive privacy regulations. But it would still run afoul of the same basic problem: legislators will need to guess what future technologies will look like, and they’re likely to guess wrong. Once the federal government has declared which facts must be disclosed and what the acceptable categories are, website operators will be required to disclose those facts whether or not consumers find them useful. And even if they also disclose other aspects of their privacy policies that users do find useful, those facts will be buried in boilerplate. Worst of all, statutory codification of privacy rules will warp the evolution of common-law rules, so that if public norms do finally gel, it will take longer for the law to catch up with them.


The Class Action Loophole

Tim Wu is one of my favorite technology thinkers, and he’s a talented writer, so his take on the Google deal is a good read. But I also think it’s a good example of what’s wrong with a lot of analysis of the GBS deal.

The phrase “fair use” doesn’t appear once in the article. In case we’ve forgotten, this is a copyright infringement case. The dispute between Google and the plaintiffs is not about orphan works, online book sales, or the structure of the publishing industry. It’s about whether copyright’s fair use doctrine allows the creation of a book search engine that displays “snippets” of in-copyright books in search results. Google says yes. Some publishers and authors said no. Absent a settlement, a judge would have been asked to rule on that question.

In a rational world, the settlement of the case would focus on that same question. Instead, we got a settlement that treats the underlying infringement claims as an afterthought and focuses on the creation of an elaborate new structure for selling books online. It’s as if Sony Pictures sued NBC for copyright infringement and then wound up with a “settlement” that focused mostly on Sony becoming a partner in GE’s light bulb business.

This would, of course, be completely crazy in an ordinary lawsuit between two companies. If Sony and GE want to go into the light bulb business together, they don’t need a judge’s help to do so. The only reason to bundle a business deal like that into a judicial settlement is if it gives you the power to do stuff you wouldn’t be able to do otherwise. Such as using the class action mechanism to bind thousands of copyright holders who wouldn’t consent otherwise.

I can understand why folks like Prof. Wu are excited about the opportunity this case creates. The orphan works problem is real, and the legislative process is long, tedious, and messy. The class action mechanism gives advocates of orphan works reform a kind of deus ex machina: dramatic reform without the kind of cynical horse-trading that normally comes with legislation. Moreover, because of the way the settlement process works, legal academics like Profs. Wu, Grimmelmann, and Picker probably have significantly more influence over the outcome of the process than they do over orphan works legislation in Congress. And I don’t necessarily regard that as a bad thing: if I had to pick three people to re-shape the publishing industry, they’d all be on my short list.

But we’re a democracy, not a nation ruled by enlightened philosopher kings. Wu warns the judge to “be careful not to open it up to so many parties that the whole thing explodes.” But if the whole thing will “explode” if it’s opened up “to so many parties,” that seems like a sign that some of those parties are getting short-changed. Which is precisely why we normally require that proposals affecting the rights of millions of people be drafted in public by Congress, not by private parties in a smoke-filled room. The class action mechanism offers legal academia a tantalizing loophole: a chance to achieve legislative ends through a comparatively clean and simple judicial process. But the fact that Congress is “open to so many parties” and responsive to their concerns is a feature, not a bug.


Free the DC Metro Data


I started this blog because I like writing about how bottom-up thinking can make the world a better place. But one thing I like better than writing about it is when someone else does it for me. I hope David Alpert doesn’t mind if I quote liberally from his post at the excellent Greater Greater Washington blog:

Bottom-up thinking upset the established order when it hit the software industry in the form of open source software, and it’s even more revolutionary in an agency like Metro, which tends to approach issues from a top-down point of view. Need some new railcars? Bid out a contract. Want to create an online system to track bus locations? Bid out a contract. For railcar procurement, there’s nothing wrong with this strategy. But for consumer information technology, where you don’t need only one type of railcar, this approach fails to stimulate innovation.

Opening up data allows both large companies and small “garage” developers to build applications. The policies of an organization affect both, but the economic forces affecting these are very different. If a larger company is going to work with Metro, they’ll probably only do it if there’s some money in it, which means they’re willing to spend some lawyer time upfront to negotiate a good contract. Transaction costs aren’t good, but they won’t necessarily derail the project entirely.

A garage developer, on the other hand, is probably doing the project in his spare time, for fun. Even if there’s the possibility of making some money, such as selling the app for $5 a pop in the iPhone app store, it’s not going to be a major source of profit. Most likely, those fees won’t even come close to compensating the author for his or her time. If he’d put the same amount of time into working for a tech company, he’d make way more. He might even have made more working at McDonald’s than spending the equivalent amount of time on the application.

This is one fallacy in Gordon Linton’s admonishment that someone out there might be “lining their pockets.” Perhaps sometimes that’s the case, but most of the time they’re lining those pockets with enough to buy a nice lunch.

Because the money is a secondary consideration at best, the transaction cost is a huge deterrent. If the developer has to even spend one afternoon negotiating with Metro, it’s a big burden. To Metro, it’s no big deal to put weeks into carefully assembling a deal. To the developer, the thing could have been done already. Therefore, most people won’t even bother. There are plenty of neat ideas out there that could make a good app. Why build the one that forces you to waste a lot of time not programming when you can just start coding on something else? Programmers want to be programming, not negotiating with bureaucracy.

I couldn’t have put it better myself. Don’t miss part 2 and part 3 of David’s series on the battle to open up data about the DC metro system.

The bottom-up argument David is describing is the basis for RECAP, which I described here. The Los Angeles Times recently ran an article that looks at three key figures in the movement for web-enabled government transparency and includes a nice RECAP mention. And here is a paper by several of my colleagues making the theoretical argument for a bottom-up regime for the presentation of public data.


GBS and the Trouble with Class Actions

For the last few weeks, James Grimmelmann has been the go-to source for news and analysis of the Google Book Search case. In a recent post, he takes an in-depth look at the various parties who have sought to intervene in the case. Two organizations, the American Society of Media Photographers and the National Writers Union, have objected to the way the plaintiff class—that is, the set of all book copyright holders who are registered in the US—is being represented. They want “new negotiations with the many voices that have up to now been excluded.”

Grimmelmann argues that as a matter of class action theory (governed by “Rule 23”) these parties are not entitled to a seat at the table. Either they’re in the class, in which case they’re officially represented by the pair of law firms that represent the class as a whole, or they’re outside the class, in which case their rights are not affected by the settlement. If they’re in the class, they’ll have a right to object to the new deal once it’s unveiled. But Grimmelmann explains why, theory aside, parties who don’t get a seat at the table are unlikely to be satisfied by a right to object after the deal is announced:

Game theory favors the agenda-setters. Those who draft the settlement can pick exactly what it says; that enables them to select the terms that most favor themselves. Objectors, at best, have a shot at convincing Judge Chin to reject the settlement. Even Judge Chin’s veto power is a crude tool; he can’t sculpt the details of the settlement with it. If you get a seat around the negotiating table, you have a lot more ability to pull the settlement in the direction that you want. This, by the way, is one of the reasons I believe that the settlement requires searching review. Its proponents are the ones who picked each and every detail; the approval process should be designed to give them an incentive to get as many details as possible right, rather than the bare minimum.

I think this analysis is spot on. And I’m inclined to reach a somewhat stronger conclusion than he does: the danger that class counsel will negotiate the rights of some class members away for the benefit of other class members is so large that courts just shouldn’t approve classes as large and heterogeneous as this one.

The class action mechanism is supposedly justified as a matter of administrative efficiency: if you’ve got a bunch of plaintiffs who’ve all been injured in similar ways, it’s inefficient to have each of them go through largely the same procedure to reach largely the same outcome. But this rationale only makes sense if we have some confidence that the class action mechanism will produce an outcome roughly comparable to what individual lawsuits would have produced. As the plaintiff class gets larger and more diverse, it becomes more and more likely that the outcome will be driven by the details of the class action mechanism rather than the underlying legal issues in the cases.

In this case, there’s no doubt that the class action mechanism is affecting the substantive outcome because one of the factors commonly cited in favor of the settlement is that it ropes thousands of orphan work authors into the settlement. This strikes me as inherently abusive of the class action mechanism. By definition, holders of orphan works can’t sue on their own behalf, and it’s not clear how you’d go about appointing a lawyer to represent their interests. Almost by definition, the settlement expropriates orphan work holders for the benefit of the other plaintiffs.

This might seem like procedural nitpicking, but I think it points to a deeper problem with using the class action mechanism in this fashion. There probably isn’t any one agreement that will satisfy most members of the proposed plaintiff class. Regardless of how you structure the negotiations, some class members will get a raw deal. And I don’t think it’s ever appropriate to short-change the rights of some plaintiffs in order to reach an outcome we might regard as desirable on policy grounds.


The Semi-Vibrant Internet

The Washington Post has an editorial opposing network neutrality. Berin Szoka likes it. Tom Lee doesn’t. Tom says it’s misleading to talk about a “vibrant and well-functioning marketplace” for connectivity:

In truth, it’s stagnated: in North America, prices remain steady — at a rate above what much of the developed world pays — while ongoing improvements in speed show little hope of catching up with the networks of countries significantly poorer than the US. If the Post wants to argue that net neutrality will make the situation even worse, fine. But they don’t even seem to realize that by global standards our domestic broadband marketplace is an underperformer.

I haven’t analyzed the cross-country data closely, but I do think that network neutrality opponents sometimes exaggerate the level of competition in the broadband marketplace (conversely, network neutrality supporters sometimes understate it). Two competitors is much better than one, but it’s not as good as four or six or twenty.

But I think it’s a mistake to focus too much on residential broadband access. Let’s say Tom’s right that the residential broadband market is a stagnant duopoly in which American firms lag far behind their peers in other developed countries. This might be an argument for mandating network neutrality for residential broadband. But FCC Chairman Julius Genachowski is proposing something much more ambitious. He appears to want to regulate, among other things, the mobile market and the Internet backbone. The wireless market is significantly more competitive than the wired market (4-6 national carriers depending on how you count), and the backbone and commercial connectivity markets are even more competitive than that, with dozens of firms in direct competition with one another.

The fact that Comcast has only one serious competitor in Philadelphia is not an argument for regulating Level 3 or T-Mobile. If the argument for network neutrality has to do with a lack of competition, then any regulations that get passed should be focused on the areas where competition is lacking and can’t easily be increased. The backbone market is extremely competitive already, so it’s hard to see an argument for expanded regulation there. And if the wireless market is uncompetitive, the right policy response is opening up more spectrum, not burdening the spectrum holders we already have with red tape.


A Surprisingly Free Conversation

Jerry Brito is not only a longtime friend, he’s also one of the sharpest people working in tech policy today. So I was flattered when he asked me to be the first guest on Surprisingly Free Conversations, his new weekly podcast. You can check out the result here. We cover a lot of ground that will be familiar territory to Bottom-Up readers: the evolutionary psychology of bottom-up processes, disruptive innovation, the decline of newspapers, and Paul Graham’s observations about the economics of content.

I’m also excited that this podcast represents the low-key launch of the Mercatus Center’s new tech policy program, which Jerry will be leading. Their blog is here, and looks like it’ll be a great addition to my RSS reader. I’m looking forward to seeing what other great stuff comes out of the new center.


Thanks to IHS and the Searle Foundation

Among the income sources I mention on my disclosure page is the Institute for Humane Studies, a libertarian-leaning organization that provides fellowships to grad students doing public policy work. I recently received my award for 2009, and learned that it was made possible by the Searle Freedom Trust. The trust was created by Dan Searle, a successful businessman and philanthropist who passed away in 2007. You can read about Mr. Searle here.

This weekend, I attended a research colloquium for about 40 Humane Studies Fellows, at which each participant got a few minutes to tell other grad students about their research. It was an interesting experience because the vast majority of the participants were economists, political scientists, or philosophers. My talk, on RECAP, was the only one about a software project rather than more traditional social science research.

IHS is a great organization, and not just because they support the work of grad students like me. If you’re an undergrad, I can’t recommend their summer seminars highly enough. I attended one as an undergrad and it was one of the most intellectually stimulating weeks of my undergraduate career.


Radia on Network Neutrality

My former co-blogger Ryan Radia has an excellent op-ed on the network neutrality debate. I particularly liked his discussion of the relative merits of open platforms:

In the battle between open and closed devices, wireless subscribers have voted with their wallets. So far, they have preferred the iPhone over open source devices like the “Google phone.” In the intensely competitive wireless market, the iPhone’s success shows that innovation can occur, and even thrive, within the confines of proprietary ecosystems like the iPhone.

But under the FCC’s proposed neutrality rules, the iPhone and similar devices that place limits on the content and applications that users can access would likely be against the law.

To be sure, the virtues that neutrality proponents espouse – open access, transparency, democracy, and the like – are all legitimate, even important values. Arguably, the open nature of the Internet has been instrumental in fostering many of the innovations that consumers enjoy today. But it is wrong to assume, as neutrality proponents do, that today’s “capital-I” Internet is the end all, be all network, and that the future of global communications ought not include some proprietary elements.

Technological innovation is an unpredictable beast. Networks for transmitting data that have yet to emerge – so-called “splinternets” – may well reshape the nature of global communications in years ahead. One need only look to the FCC’s widely criticized telephone and cable regulations to witness how rigid federal mandates can thwart high-tech evolution and steer the market in unnatural directions.

I think my sympathies are a little more on the “open” side of the debate than Ryan’s are. But I think this strikes the right tone from a policy perspective. If I’m right that open networks are superior to closed ones, then they’re likely to succeed in the marketplace with or without government help. On the other hand, if I’m wrong, there’s a real danger that regulating now will lock in an architecture that we might want to see change in the future. Either way, the right thing to do is to wait and see how the market evolves, not regulate based on highly speculative harms. Ryan’s piece is worth reading in its entirety.


Immigration Is a Civil Rights Issue

Cato’s Center for Trade Policy Studies released a study last month on the economics of expanded immigration to the United States. Economists Peter B. Dixon and Maureen T. Rimmer use a general equilibrium model to predict how various policy changes would affect the US economy. In two of the seven scenarios examined, the federal government further cracks down on illegal immigration, reducing the number of illegal immigrants living and working in the United States. In the other five scenarios, the federal government liberalizes immigration by creating a variety of “guest worker” visas that would give more immigrants the potential to be legally-sanctioned guest workers rather than illegal immigrants.

They conclude that the two restrictive scenarios would reduce the average income of US households, while liberalization would increase the incomes of US households. And the effects are significant. Cracking down on illegal immigrants is projected to cost American households about $80 billion, while liberalization could generate as much as $180 billion in higher incomes for US households. I’m not an econometrician, so I can’t evaluate their economic model in any detail, but the basic logic here seems sound. One key point is that bringing new, mostly unskilled workers into the economy will create new managerial and professional jobs that will generally require the kind of experience and English fluency that most new immigrants lack. So even if immigrants exert downward pressure on the very low end of the wage scale, many native-born workers would benefit from the new opportunities created by increased immigration. Dixon and Rimmer find the net effect on American households (excluding the immigrants themselves, who obviously benefit tremendously) is positive.

So I liked the study. But there were a few parts that made me wince, especially given that it was published by the Cato Institute. In particular, the study talks about the need to “facilitate the transfer to U.S. households of part of the guest-worker surplus,” and later about the increased wealth created by migration being “extracted for the benefit of U.S. households.” To put this in somewhat blunter terms: Dixon and Rimmer are proposing to take money from the Hispanic woman scrubbing toilets for $10 an hour and transfer it to Joe the Plumber. This proposal doesn’t exactly warm my heart.

It’s not hard to understand their motivation. Increased immigration is a generally win-win proposition that’s being blocked by a nativist backlash. I assume the idea is to buy off some of those angry American voters by promising them a big chunk of the “surplus” generated by immigration reform. And as a matter of blackboard economics, this makes sense. The woman scrubbing toilets is still better off than she would have been if she’d been forced to stay in Guatemala (or even perhaps as an undocumented worker). And the American worker not only gets an opportunity to take one of those new managerial jobs that got created, but he also gets some nice government benefits to boot.

I think the problem with proposals of this sort (beyond the basic offensiveness of Robin-Hood-in-reverse schemes) is that they misread the politics of the issue. The anti-immigration movement is driven by deep and often irrational fears. The kinds of people who make hatred of “illegals” a key part of their political identity are not looking for a slightly larger slice of the economic pie, and they’re not going to be persuaded by earnest economists with complicated models.

At the same time, this kind of narrow, technocratic approach is likely to turn off some of the natural allies of broad-based immigration reform: liberals whose primary concern is for the immigrants themselves. The political left is split between liberal elites who see immigration as fundamentally a human rights issue, and a populist faction, led by the labor movement, that sees immigration as a threat to American jobs. The populists have gotten tremendous traction by arguing that immigration reform is really a conspiracy by big business to undercut the wages of American workers by importing and exploiting low-skilled immigrants. I don’t think this is true, but publishing studies focusing on the need to “extract” the fruit of foreign workers’ labor for the benefit of Americans doesn’t do much to dispel this impression.

I think a far more effective approach is to use what is probably the most powerful weapon in American politics: our now deeply-rooted and emotional commitment to the principle of equality before the law. Over the last 50 years, American society has undergone wrenching transformations that moved us toward equality for Catholics, blacks, Jews, women, gays and lesbians, and other traditionally disfavored groups. We achieved these reforms not by emphasizing how reform would benefit straight white men, or by building complex models of how oppression depressed GDP, but by focusing on the cruelty of the status quo and appealing to America’s founding ideals. We’ve now reached the point where opponents of equality for blacks or Jews are not only in the minority, but are among the most despised people in society.

I think the same strategy needs to be employed on behalf of immigration reform. The problem with our immigration laws is not primarily that they are economically inefficient (Jim Crow wasn’t efficient either). The problem is that they deny civil rights to millions of hard-working individuals based on a factor over which they have no control: their place of birth. I’m sure Dixon and Rimmer mean well, but their narrow focus on the costs and benefits of immigration to American households not only ignores powerful arguments about justice, it actually undercuts them by accepting the premise that we’re justified in ignoring the welfare of the millions of people who are in such deep poverty that they’re willing to risk their lives for the privilege of picking our strawberries and scrubbing our toilets.
