Entrepreneurship and Hayekian Discovery

Brink Lindsey points me to this great article on how entrepreneurs think. It turns out that many entrepreneurs hate the concept of market research. Rather than trying to predict the overall size of the market in advance, their approach is to build a product that’s useful to a few customers, and then rapidly improve the product based on feedback from the initial customers. Brink’s take on this is spot-on:

Entrepreneurs grasp intuitively the central insight of the great economist F. A. Hayek: that capitalism is a process of discovery. Hayek saw that socialist central planning, then at the height of intellectual fashion, was doomed to founder on the unpredictability of the future. Capitalism, at the time derided for its chaotic duplicativeness, worked precisely because of its messiness: its decentralized process of trial-and-error experimentation is the only viable response to the ineradicable uncertainties of economic life.

Entrepreneurs are Hayekians at the micro level. They don’t want to sit back and plan, they want to dive in and discover and learn. They want to experiment: to see what works and what doesn’t, to build on the successes and leave the failures behind. Which is exactly how the larger market order works at the macro level.

None of this is to say that planning is unnecessary. On the contrary, it’s vital — after you’ve discovered a good idea. To take that idea to scale and execute it efficiently — in other words, to pump out those Swedish meatballs — you need planning and lots of it. Which is why successful start-ups turn into big corporations run by professional managers.

Quite so. The distinction between entrepreneurs and managers is crucial, and it’s often overlooked. The skills needed to create a profitable 10-person company from scratch are very different from the skills needed to keep an existing 10,000-person company running smoothly. There are some people, like Bill Gates, who are good at both. But once the founder leaves, the people who take over for him tend to be cut from different cloth. They tend to be managers who earned MBAs and worked their way to the top of existing corporate hierarchies. They tend to be more process-oriented and risk-averse. I bet Brink’s description fits 22-year-old Bill Gates, but it probably doesn’t describe Tim Cook or Carly Fiorina.


The Innovator’s Dilemma in Higher Education

Matt Yglesias points me to a new report on the future of higher education from the Center for American Progress. The report has Clay Christensen, the author of The Innovator’s Dilemma, as its lead author. Not surprisingly, he leans heavily on his own concept of disruptive innovation, which I’ve blogged about in the past. He argues that online learning threatens to undermine the business model of traditional universities as disruptive innovations have done in many other industries:

For decades now [universities] have offered multiple value propositions around knowledge creation (research), knowledge proliferation and learning (teaching), and preparation for life and careers. They have as a result become conflations of the three generic types of business models—solution shops, value-adding process businesses, and facilitated user networks. This has resulted in extraordinarily complex—some might say confused—institutions where there are significant coordinative overhead costs that take resources away from research and teaching.

I’m probably biased since I’m currently employed in the industry under discussion, but this seems wrong-headed. In particular, dividing universities into separate “business models” that are then analyzed for economic efficiency is reductionist and myopic. Obviously, one of the benefits of a college education is that you learn skills and knowledge that raise your subsequent earning potential. But to evaluate colleges based on the efficiency with which they convey particular bits of information to students is rather missing the point.

Many college students study subjects that bear little if any relation to the work they subsequently perform in the labor force. Even students who study a practical subject like computer science or chemical engineering wind up learning a lot of material not directly relevant to their subsequent careers, and they require a lot of on-the-job training when they graduate. Indeed, the software industry is full of people who studied something other than computer science in college, or don’t have college degrees at all.

So a traditional four-year college is a pretty inefficient way to learn career-specific information and skills. Yet a college degree does seem to raise a student’s wages, even if he studies a subject unrelated to his subsequent career. And counterintuitively, academically-oriented universities and liberal arts colleges seem to improve their students’ career prospects more than vocationally-focused community colleges do. I don’t think anyone clearly understands why this happens, but this seems like a problem you need to wrestle with if you want to suggest ways to reform higher education. And Christensen doesn’t really do this.

My own tentative theory is that the primary function of an undergraduate education is to allow the student to join a scholarly community, and in the process to soak up the values and attitudes of that community. There are a variety of character traits—intellectual curiosity, critical thinking, self-direction, creativity—that are best learned by being immersed in a community where those traits are cultivated and rewarded. They’re not on the formal curriculum, but they’re implicit in much of what happens on a college campus.

Spending four years at a good college makes you a certain kind of person. A college graduate is more likely to read books in his free time, pay attention to spelling and grammar, know how to recognize and fact-check dubious statements by authority figures, juggle multiple deadlines, and so forth. And for a variety of reasons, people with these character traits tend to be good choices for white-collar jobs.

This kind of cultural transmission is really hard to accomplish via the Internet. An online course can probably teach you facts about history as well as a flesh-and-blood professor could do. But a website won’t exhibit the kind of infectious enthusiasm that turns students into lifelong history buffs. You can certainly learn computer programming from an online university, but it can’t seat you next to a guy who regales you with tales of his internship at Facebook. An online instructor can critique the half-baked paper you wrote at the last minute, but his critical comments won’t carry the same sting as they would if you had to meet him face-to-face.

Of course, there’s a lot of diversity in higher education. This kind of cultural argument may not apply as much to vocational schools that are more focused on teaching specific job skills. Also, older students probably wouldn’t benefit as much from—and probably wouldn’t have time for—four-year immersion in a college environment. Schools that cater to these types of students may face a more direct threat from Internet-based instruction models.

But I see no reason to think that the new Internet-based business models Christensen talks about will move very far “up market.” Above the basic vocational level, at least, an education is not a discrete product like a disk drive or a ton of steel. Much of the value of going to college flows from subtle positive externalities that emerge when you spend four years in close proximity to other people with similar interests, abilities, and values. It won’t be possible to replicate that experience via the Internet any time soon, and so I doubt most traditional four-year colleges need to worry about the innovator’s dilemma.


City Planning and the Rule of Law

The excellent Greater Greater Washington blog endorses this video from the City of Beverly Hills, an impressive bit of filmmaking that devotes a tremendous amount of effort to knocking over a rather silly straw man:

Riffing on It’s a Wonderful Life, the film tells the story of George Buildly, a businessman who’s upset that the expansion of his store is being delayed by the need to have his plans reviewed by Beverly Hills’s planning and architectural commissions. George’s guardian angel appears and gives him a chance to see an alternate universe in which there is no city planning, no zoning, and anyone is free to do whatever they want with their property. In this alternate universe, George’s store sits adjacent to a pawn shop and a strip club, a neighboring business is telling its customers and employees to park in George’s parking lot, his home is next door to a shooting range, the town is full of tall buildings, and there are billowing black clouds in the background emitted by a nearby factory.

The video seems to be a refutation of an imagined libertarian critic of urban planning who believes we’d be better off with a government that played no role in resolving disputes over urban land use. You might be able to find an anarchist libertarian somewhere who subscribes to the view the video is lampooning, but this certainly isn’t the position that smart critics of excessive urban planning, libertarian or otherwise, take.

There are a number of different types of regulations (in a broad sense) local governments can enforce. The mainstream debate isn’t over whether regulations should exist, but over which types of regulation are most effective. For example, the case of a neighboring business parking in George’s parking lot is easily solved by property rights: people who park on his lot are trespassing, and he should be free to have their cars towed at the drivers’ expense. You could describe this as a kind of regulation in the sense that it requires the government to resolve disputes over property boundaries and regulate the towing business, but it’s different from the government (for example) telling every business how many parking spaces it must provide.

Next you have cases like the shooting range and the polluting factory. The “hard-core” libertarian position in cases like this is that these disputes should be settled through the tort system: if your neighbor opens a shooting range next door, you can sue for an injunction and/or damages based on the nuisance this creates, and the dispute is settled based on long-established principles of property law. Again, this is “regulation” in some sense, but not the kind libertarians object to.

Now, squishy libertarians like me are perfectly ready to concede that this kind of case-by-case adjudication isn’t always efficient—you don’t want every new factory owner to face lawsuits from hundreds of nearby property owners the day he opens his factory, for example. So in many cases it makes sense for the government to preempt this kind of lawsuit and instead establish general rules (maximum noise and emissions limits, for example) designed to prevent neighbors from harming one another.

There’s yet another category of regulations that proscribes things like tall buildings and pawn shops. These regulations generally seem counterproductive to me. A tall building or a pawn shop doesn’t produce a particularly large amount of noise, pollution, or other externalities. Rather, incumbent landowners—especially wealthy and well-connected ones—use these kinds of regulations to keep the riffraff out of their neighborhoods. These regulations might be good for current residents, but they’re not good for the city as a whole since they just crowd unpopular populations into other parts of the city.

Still, even this class of regulation doesn’t give rise to a complicated approval process of the type George Buildly complains about at the start of the video. A businessman wanting to open a new store or restaurant might need to hire a lawyer, but once he does he should be able to figure out relatively quickly what kind of business he can have in any given location. The rules may or may not be good public policy, but at least the business owner doesn’t wind up mired in red tape for months.

Beverly Hills is implicitly defending regulations of a different character. This type of regulation typically requires a permit for any significant change to the way a property is used, and the criteria for approval are vague enough to give city officials essentially arbitrary power to decide what gets built where. Often, this kind of regulation seems designed simply to flatter the egos of city officials. I suppose Georgetown’s Wisconsin Ave wouldn’t look quite the same with a glass and steel Apple store, but this hardly seems like the kind of aesthetic judgment that government officials should be making.

The basic issue here is about the rule of law: is property use governed by predictable rules that are applied consistently for all property owners? Or are property owners subject to the whim of city officials? Almost everyone agrees that regulations are needed to deal with genuine nuisances. And every one of the genuine nuisances mentioned in the video (noisy shooting ranges, polluting factories, trespassing) can be dealt with in a manner that’s consistent with the rule of law. But none of the examples in the video explain why it’s necessary for George Buildly to delay the opening of his store while city officials ponder the merits of his application.


F. A. Hayek, Liberal

A couple of months ago I wrote a post for the Technology Liberation Front offering a qualified defense of Tim Wu’s book, The Master Switch. My erstwhile colleagues at TLF had taken turns lambasting the book for what they regarded as its retrograde big-government liberalism. I suggested that they were focusing too much on the rather tentative policy recommendations at the end of the book, and ignoring the excellent history and economic analysis that accounted for the first 200 pages or so. And I thought that my libertarian friends were too dismissive of Wu’s central thesis: that excessive concentrations of corporate power, often with the active assistance of government, posed a real danger to individual liberty.

This conversation came to mind yesterday as I was reading F. A. Hayek’s classic essay “‘Free’ Enterprise and Competitive Order.” The essay, written for the 1947 Mont Pèlerin meeting, was Hayek’s attempt to sketch out a postwar intellectual program for a liberal movement that was at that time tiny and deeply unpopular among the intellectual elite. (I’ll follow Hayek in using the term “liberal” throughout this post, but he was of course addressing classical liberals.)

Hayek’s argument was that by framing their political program primarily in negative terms—as a list of things the state ought not to do—the liberals of his time had ceded major swathes of intellectual territory to their ideological opponents. He writes:

Where the traditional discussion becomes so unsatisfactory is where it is suggested that with recognition of the principles of private property and freedom of contract, which indeed every liberal must recognize, all the issues were settled, as if the law of property and contract were given once and for all in its final and most appropriate form.

He then offers the following examples, among others, of issues that liberals ought to care about:

  • Urban planning: Hayek writes that “there can be no doubt that a good many, at least, of the problems with which the modern town planner is concerned are genuine problems with which governments or local authorities are bound to concern themselves. Unless we can provide some guidance in fields like this about what are legitimate or necessary government activities and what are its limits, we must not complain if our views are not taken seriously when we oppose other kinds of less justified ‘planning.'”
  • Patents: Hayek argues that “a slavish application of the concept of property as it has been developed for material things has done a great deal to foster the growth of monopoly and that here drastic reforms may be required if competition is to be made to work.”
  • Corporate law: Hayek doesn’t think there’s much doubt that “the particular form legislation has taken in [the field of limited liability for corporations] has greatly assisted the growth of monopoly.” He goes on to argue that “the freedom of the individual by no means need be extended to give all these freedoms to organized groups of individuals, and even that it may on occasion be the duty of government to protect the individual against organized groups.”
  • Taxation: Hayek decries the confiscatory tax rates that were in effect at the time. But he also writes that “inheritance taxes could, of course, be made an instrument toward greater social mobility and greater dispersion of property and, consequently, may have to be regarded as important tools of a truly liberal policy which ought not to stand condemned by the abuse which has been made of it.”

I don’t think it’s much of a stretch to say that “‘Free’ Enterprise and Competitive Order” was a liberaltarian manifesto written almost 60 years before Brink Lindsey coined the term. Of course, back in 1947 there was no need to coin a term, because people understood what Hayek meant when he used the word “liberal.”

One of the more pernicious influences of Rand and Rothbard on the libertarian movement was their tendency to treat every policy problem as if it were reducible to a logical syllogism. Too many libertarians act as though they don’t need to know very much about the details of any given policy issue because they can deduce the right answer directly from libertarian principles. The practical result is often to shut down internal debate and discourage libertarians from thinking carefully about cases where libertarian principles may have more than one plausible application. Hayek seems to have written “‘Free’ Enterprise and Competitive Order” with the explicit purpose of combating that kind of dogmatism. He thought it “highly desirable that liberals shall strongly disagree on these topics, the more the better.”

And one way to do this is to be more ready to treat modern liberals with bottom-up instincts as potential allies rather than ideological opponents. Regular readers of the blog may notice that all four of the issues listed above are topics I’ve focused on here on the blog. And there’s a substantial overlap between these issues and the program Matt Yglesias articulated a couple of weeks ago. And of course the second and third items on this list—the use of patents and the corporate form to entrench private monopolies—were at the heart of The Master Switch. Wu and Yglesias, in short, are engaged in precisely the kind of liberal intellectual project Hayek is calling for.


Bottom-Up Chat: Dara Lind and Immigration Reform

Regular readers know that we periodically do text-based chats using Envolve, a Facebook-style chat startup co-founded by my brother. Our next chat will be tomorrow (Wednesday) evening, and will feature special guest Dara Lind. By day she works for an immigration advocacy organization, but the views she expresses will be strictly her own. By night, she tweets, blogs, and guest-blogs in a variety of prominent places, most recently for the American Prospect.

The discussion will be driven by you, the readers. Besides immigration reform, other topics you might want to ask Dara about include gender, James Scott, and the prospects for liberal-libertarian cooperation inside the beltway.

Please join us tomorrow (Wednesday) night at 8 PM Eastern. To participate, just visit the home page and click on the “general chat” tab in the lower-right hand corner of your browser.


The Return of Bottom-up Liberalism

This week left-of-center bloggers have been abuzz over this lengthy treatise about the supposed absence of genuinely left-wing voices in the online conversation. Freddie DeBoer complains that the lefty blogosphere is dominated by “neoliberals” like Matt Yglesias, Jonathan Chait and Kevin Drum who show inadequate fealty to labor unions, big government, and the dictatorship of the proletariat.

What I find most interesting about DeBoer’s post is what it says about the successes of libertarian ideas over the last half-century. It has become a cliché in libertarian circles that we’re constantly playing defense against the ever-expanding welfare state. Yet if that were true, welfare state advocates like DeBoer wouldn’t be so gloomy.

I think DeBoer is basically right. We obviously don’t live in a perfectly libertarian world, but libertarians have had a pretty impressive winning streak in recent decades, especially on economic policy. Income tax rates are way down. Numerous industries have been deregulated. Most price controls have been abandoned. Competitive labor markets have steadily displaced top-down collective bargaining. Trade has been steadily liberalized.

Simultaneously, the intellectual climate has shifted to be dramatically more favorable to libertarian insights. Wage and price controls were a standard tool of economic policymaking in the 1970s. No one seriously advocates bringing them back today. The top income tax bracket in the 1950s was north of 90 percent. Today, the debate is whether the top rate will be 35 percent or 39 percent. There’s plenty to criticize about proposals for government-mandated network neutrality, but no one is seriously proposing that we return to the monopoly model of telecommunications that existed for most of the 20th century. At mid-century, intellectuals idealized large, bureaucratic firms like General Motors and AT&T. Today, intellectuals across the political spectrum argue that their preferred proposals will promote competition and foster the creation of small businesses.

This isn’t to say there are no longer disagreements about economic policy; clearly there are. But what’s striking is that the left’s smartest intellectuals and policy advocates now largely make their arguments from libertarians’ intellectual turf. Tech policy scholar Tim Wu explicitly casts himself as an heir to Friedrich Hayek, defending bottom-up competition against the monopolistic tendencies of large corporations. The urbanist left has increasingly focused on the (largely correct!) argument that we’d have a lot more walkable neighborhoods if not for government regulations that tilt the playing field toward suburban living patterns. Environmentalists have begun championing relatively free-market mechanisms like cap and trade as more efficient ways to achieve their goals. The policies being advocated aren’t always libertarian, but many non-libertarians sell their non-libertarian policy proposals using libertarian arguments.

Probably the best illustration of this is Matt’s response to DeBoer’s post. Matt lists 10 economic policy goals that he favors. What’s striking about the list is that about half of them are straight-up libertarianism (less occupational licensure, fewer subsidies for suburbanism) and there’s only one item on the list (“more redistribution of money from the top to the bottom”) that Milton Friedman would have strongly opposed. One way to interpret this is to say that Matt is a moderate libertarian with a redistributionist streak, but I don’t think that’s the right way to look at it. Rather, what’s happened is that liberalism in general has internalized key libertarian critiques of earlier iterations of liberal thought, with the result that a guy with a largely Friedmanite policy agenda can plausibly call himself a liberal. And actually, this shouldn’t surprise us at all, because Friedman called himself a liberal too.

Liberalism in the 19th century focused on opposing concentrated power and entrenched privilege, whether it was monarchy, slaveholding, or protectionism. In the 20th century, the American left became infatuated with concentrating power in the hands of democratically-elected governments. The libertarian movement arose to counter this trend and defend the original, bottom-up conception of liberalism. Since the fall of communism, the left has largely (though not entirely) backed away from its 20th century infatuation with central planning. And the result is what critics call “neoliberalism”: a left-of-center ideology whose egalitarianism is balanced by a healthy skepticism of concentrated power.


Reply to Hanson on Brain Emulation

Robin Hanson responds to my last post about simulating/emulating the brain:

To emulate a biological signal processor, one need only identify its key internal signal dimensions and their internal mappings – how input signals are mapped to output signals for each part of the system. These key dimensions are typically a tiny fraction of its physical degrees of freedom. Reproducing such dimensions and mappings with sufficient accuracy will reproduce the function of the system.

This is proven daily by the 200,000 people with artificial ears, and will be proven soon when artificial eyes are fielded. Artificial ears and eyes do not require a detailed weather-forecasting-like simulation of the vast complex physical systems that are our ears and eyes. Yes, such artificial organs do not exactly reproduce the input-output relations of their biological counterparts. I expect someone with one artificial ear and one real ear could tell the difference. But the reproduction is close enough to allow the artificial versions to perform most of the same practical functions.

This response confuses me because Hanson seems to be making a different claim here than he made in his EconTalk interview. There his claim seemed to be that we didn’t need to understand how the brain works in any detail because we could simply scan a brain’s neurons and “port” them to a silicon substrate. Here, in contrast, he’s suggesting that we determine the brain’s “key internal signal dimensions and their internal mappings” and then build a digital system that replicates these higher-level functions. Which is to say we do need to understand how the brain works in some detail before we can duplicate it computationally.

The example of artificial ears seems particularly inapt here because the ears perform a function—converting sound waves to electrical signals—that we’ve known how to do with electronics for more than a century. The way we make an artificial ear is not to “scan” a natural ear and “port” it to digital hardware; rather, it’s to understand its high-level functional requirements and build a new device that achieves the same goal in a very different way from a natural ear. We had to know how to build an artificial “ear” (e.g. a microphone) from scratch before we could build a replacement for a natural ear.

This obviously gets us nowhere when we try to apply it to the brain. We don’t understand the brain’s functional specifications in any detail, nor do we know how to build an artificial system to perform any of these functions.

With that said, the fundamental conceptual mistake he makes here is the same one he made in his EconTalk interview: relying too heavily on an analogy between the brain and manmade electronic devices. A number of commenters on my last post thought I was defending the view that there was something special, maybe even supernatural, about the brain. Actually, I was making the opposite claim: that there’s something special about computers. Because computers are designed by human beings, they’re inherently amenable to simulation by other manmade computing devices. My claim isn’t that brains are uniquely hard to simulate; it’s that the brain is like lots of other natural systems that computers aren’t good at simulating.

People didn’t seem to like my weather example (largely for good reasons) so let’s talk about proteins. Biologists know a ton about proteins. They know how DNA works and have successfully sequenced the genomes of a number of organisms. They have a detailed understanding of how our cells build proteins from the sequences in our DNA. And they know a great deal about the physics and chemistry of how proteins “fold” into a variety of three-dimensional shapes that make them useful for a vast array of functions inside cells.

Yet despite all our knowledge, simulating the behavior of proteins continues to be really difficult. General protein folding is believed to be computationally intractable (NP-hard, in computer science jargon), which means that if I give you an arbitrary sequence of amino acids, even the most powerful computers are unlikely to be able to predict the shape of the folded protein within our lifetimes. And simulating how various proteins will interact with one another inside a cell is even more difficult—so difficult that biologists generally don’t even try. Instead, they rely on real-world observations of how proteins behave inside actual cells and then use a variety of statistical techniques to figure out which proteins are affecting one another.
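
To get an intuition for why brute force fails here, consider a deliberately simplified toy: counting the possible shapes of a short chain on a two-dimensional grid. (This is my own illustration of combinatorial explosion, not how real structure-prediction software works, and it isn’t the formal basis for the NP-hardness result.)

```python
# Toy illustration: count the self-avoiding conformations of an n-segment
# chain on a 2-D lattice. The count grows exponentially in n, which is why
# "just try every possible fold" is hopeless for real proteins.

def count_conformations(n, pos=(0, 0), visited=None):
    """Recursively count self-avoiding walks of length n starting at pos."""
    if visited is None:
        visited = {pos}
    if n == 0:
        return 1
    x, y = pos
    total = 0
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:
            total += count_conformations(n - 1, nxt, visited | {nxt})
    return total

for n in range(2, 13, 2):
    print(n, count_conformations(n))
# Each extra segment multiplies the count by roughly 2.6; a real protein has
# hundreds of residues, three dimensions, and complicated energetics on top.
```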

My point here isn’t that we’d necessarily have to solve the protein-interaction problem before we can solve the brain-simulation problem—though that’s possible. Rather, my point is that even detailed micro-level knowledge of a system doesn’t necessarily give us the capacity to efficiently predict its macro-level behavior. Even in cases where we know all of the components of a system (amino acid sequences in this case) and all the rules for how they interact (the chemistry of amino acids is fairly well understood), that doesn’t mean a computer can tell us what the system will do next. This is because, among other things, nature often does things in a “massively parallel” way that we simply don’t know how to simulate efficiently.

By the same token, even if we had a pristine brain scan and a detailed understanding of the micro-level properties of neurons, there’s no good reason to think that simulating the behavior of 100 billion neurons will ever be computationally tractable. And by that I don’t just mean “on today’s hardware.” The problems computer scientists call “NP-hard” are often so complex that even many more decades of Moore’s Law wouldn’t allow us to solve them efficiently.
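
As a rough back-of-the-envelope sketch (my own numbers, purely illustrative): if a simulation’s cost grows like 2^n in some measure of problem size, then even decades of hardware doublings only buy you a slightly larger n.

```python
# Illustrative arithmetic: hardware that doubles in speed every two years
# versus a problem whose cost grows like 2**n. The speedup translates into
# only a modest increase in the problem size we can afford to tackle.

import math

def extra_problem_size(years, doubling_period=2.0):
    """How much larger an exponentially scaling problem becomes affordable
    after `years` of Moore's-Law-style hardware doublings."""
    speedup = 2 ** (years / doubling_period)  # total hardware speedup
    return math.log2(speedup)                 # extra n such that 2**n == speedup

for years in (10, 20, 40):
    print(f"{years} years of doublings -> problem size grows by {extra_problem_size(years):.0f}")
# Forty years of relentless doubling lets us handle a problem only 20 "units"
# bigger -- exponential costs swallow hardware gains almost immediately.
```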

Emulating a computer doesn’t involve any of these problems because computers were designed by and for human engineers, and human engineers want systems that are easy to reason about. But evolution faces no such constraint. Natural selection is indifferent between a system that’s mathematically tractable and one that isn’t, and so it’s probable that evolution has produced human brains with at least some features that are not amenable to efficient simulation in silicon.


Emulation, Simulation, and the Human Brain

On this week’s episode of the EconTalk podcast, Russ Roberts had Robin Hanson on the show to discuss his theory of the technological singularity. In a nutshell, Hanson believes that in the next few decades, humans will develop the technologies necessary to scan and “port” the human brain to computer hardware, creating a world in which you can create a new simulated copy of yourself for the cost of a new computer. He argues, plausibly, that if this were to occur it would have massive effects on the world economy, dramatically increasing economic growth rates.

But the prediction isn’t remotely plausible. There’s no reason to think it will ever be possible to scan the human brain and create a functionally equivalent copy in software. Hanson is confused by the ease with which this sort of thing can be done with digital computers. He fails to grasp that the emulation of one computer by another is only possible because digital computers are the products of human designs, and are therefore inherently easier to emulate than natural systems.

First a quick note on terminology. Hanson talks about “porting” the human brain, but he’s not using the term correctly. Porting is the process of taking software designed for one platform (say Windows) and modifying it to work with another (say Mac OS X). You can only port software you understand in some detail. The word Hanson is looking for is emulation. That’s the process of creating a “virtual machine” running inside another (usually physical) machine. There are, for example, popular video game emulators that allow you to play old console games on your new computer. The word “port” doesn’t make any sense in this context because the human brain isn’t software and he’s not proposing to modify it. What he means is that we’d emulate the human brain on a digital computer.

But that doesn’t really work either. Emulation works because of a peculiar characteristic of digital computers: they were built by a human being based on a top-down specification that explicitly defines which details of their operation are important. The spec says exactly which aspects of the machine must be emulated and which aspects may be safely ignored. This matters because we don’t have anywhere close to enough hardware to model the physical characteristics of digital machines in detail. Rather, emulation involves re-implementing the mathematical model on which the original hardware was based. Because this model is mathematically precise, the original device can be perfectly replicated.
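
To make the distinction concrete, here’s a minimal sketch of what an emulator is: a toy virtual machine for an imaginary three-instruction processor (my own example, not modeled on any real hardware). Because the instruction set is an exact mathematical specification, a faithful re-implementation reproduces the original machine’s behavior perfectly without modeling any of its physics.

```python
# Toy emulator for an imaginary three-instruction machine. The instruction
# set itself is the spec; any program that implements it faithfully is a
# perfect emulator, no matter what the "original hardware" was made of.

def run(program, x=0):
    """Execute a list of (opcode, operand) pairs against a single register x."""
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":      # x <- x + arg
            x += arg
        elif op == "MUL":    # x <- x * arg
            x *= arg
        elif op == "JNZ":    # jump to instruction `arg` if x is nonzero
            if x != 0:
                pc = arg
                continue
        pc += 1
    return x

# (3 + 4) * 5 = 35, computed exactly as the original machine would compute it.
print(run([("ADD", 3), ("ADD", 4), ("MUL", 5)]))
```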

You can’t emulate a natural system, because natural systems don’t have designers and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation,” to describe the process. Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall; they only predict general large-scale trends, and only for a limited period of time. This is different from an emulator, which (if implemented well) can be expected to behave exactly like the system it is emulating, for as long as you care to run it.

Hanson’s fundamental mistake is to treat the brain like a human-designed system we could conceivably reverse-engineer rather than a natural system we can only simulate. We may have relatively good models for the operation of nerves, but these models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate and render long-range forecasts inaccurate.
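
Here’s a tiny numerical illustration of that snowballing, using the logistic map—a standard chaos-theory toy, not a model of neurons or weather. Two runs of the exact same rule that start a mere 10^-15 apart agree for a while and then diverge completely:

```python
# The logistic map x -> r*x*(1-x) with r=4 is a textbook example of sensitive
# dependence on initial conditions: a difference in the fifteenth decimal
# place compounds until the two trajectories have nothing in common.

def trajectory(x0, r=4.0, steps=60):
    xs, x = [], x0
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

a = trajectory(0.123456789)
b = trajectory(0.123456789 + 1e-15)

for step in (10, 30, 50):
    print(step, abs(a[step] - b[step]))
# The gap grows from roughly 1e-12 to order 1 by around step 50, even though
# the underlying rule was known exactly -- and a brain model's rules won't be.
```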

Scientists have been trying to simulate the weather for decades, but the vast improvements in computing power in recent decades have produced only modest improvements in our ability to predict the weather. This is because the natural world is much, much more complex than even our most powerful computers. The same is true of our brains. The brain has approximately 100 billion neurons. If each neuron were some kind of simple mathematical construct (in the sense that transistors can be modeled as logic gates) we could imagine computers powerful enough to simulate the brain within a decade or two. But each neuron is itself a complex biological system. I see no reason to think we’ll ever be able to reduce it to a mathematically tractable model. I have no doubt we’ll learn a lot from running computer simulations of neurons in the coming decades. But I see no reason to think these simulations will ever be accurate (or computationally efficient) enough to serve as the building blocks for full-brain emulation.


The Cycle that Wasn’t

Over at Ars Technica, I’ve got a review of Tim Wu’s The Master Switch. An excerpt:

By mid-century, each of these communications technologies [telephone, movies, radio, television] was in the grip of one or a few large companies. Yet everything began to change in the 1960s. Hollywood abandoned the Hays code in 1968, ushering in a golden age of cinema in the 1970s. The FCC allowed a startup called MCI to begin offering long-distance service using microwave radio technology, the first step in a process of deregulation that culminated with the 1984 breakup of AT&T. And the Nixon administration repealed regulations that had limited the growth of cable television, creating a platform that would eventually provide robust competition for broadcast television networks.

If Wu’s theory of “the cycle” is correct, these trends toward openness in the 1960s and 1970s should have been followed by contrary trends in recent decades. But Wu struggles to come up with examples of industries that have become more closed since 1980. The two examples he does mention aren’t very convincing.

Wu argues that the consolidation of the Baby Bells marked a turn toward a closed telephone market. Yet this reading ignores the broader trends in that industry. The average American household in the late 1980s—after the breakup—still had only one choice for local telephone service. In the 1990s, cable companies entered the telephone market and cell phones became affordable and ubiquitous enough to offer a serious alternative to a land line. And since the turn of the century, a variety of VoIP providers, including Skype and Vonage, have given consumers still more choices. The telephone industry is clearly more competitive today than at any time in the 20th century.


When is a Tax Not a Tax?

Megan McArdle and I have been having an interesting discussion in the comments to my last ObamaCare post. She’s convinced me that the ObamaCare individual mandate is structured in a way that would be difficult to actually duplicate within the structure of the existing tax code. Many taxpayers pay no income tax, so if Congress simply created a health insurance tax credit (and raised rates or reduced the standard deduction to make it revenue-neutral) it wouldn’t be creating any incentive for the lowest-income (non-)taxpayers to get health insurance. The ObamaCare mandate deals with this problem by creating a brand new quasi-income tax with a weird structure. As Ezra Klein describes it: “In 2016, the first year the fine is fully in place, it will be $695 a year or 2.5 percent of income, whichever is higher.”

I’m not a constitutional lawyer, but it doesn’t seem like it would be crazy for the court to hold that the minimum liability provision makes this a sham income tax that exceeds Congress’s taxing powers. I wouldn’t be upset to see the court say that. But it would be a very narrow holding. Congress could easily respond by creating a new tax that’s 50 percent of income below $1,390, 0 percent on income between $1,390 and $27,800, and 2.5 percent of income above $27,800. This is indisputably a tax on income, and it’s mathematically identical for everyone who makes more than $1,390.
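
A quick sanity check with the numbers from the post (the bracket structure is just my hypothetical restatement, not anything in the actual law) confirms that the two formulas match for anyone earning more than $1,390:

```python
# Compare the mandate penalty as described -- the greater of $695 or 2.5% of
# income -- with the hypothetical bracket structure above: 50% on income below
# $1,390, 0% between $1,390 and $27,800, and 2.5% on income above $27,800.

def mandate_penalty(income):
    return max(695.0, 0.025 * income)

def bracket_tax(income):
    return 0.50 * min(income, 1390) + 0.025 * max(0, income - 27800)

for income in (1000, 1390, 5000, 27800, 50000, 250000):
    print(income, mandate_penalty(income), bracket_tax(income))
# The two columns agree at every income of $1,390 or more; only people earning
# less than $1,390 would owe a different (smaller) amount under the brackets.
```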

But I don’t think this is what people are talking about when they say that the mandate is unconstitutional. I think they have something much broader in mind: that Congress shouldn’t be using the tax code to force people to do stuff they wouldn’t otherwise do and buy products they wouldn’t otherwise buy. But if so, then the courts have two options. One is to bite the bullet and invalidate the child tax credit, energy efficiency tax credits, college tuition tax credits, and so forth. The other is to come up with a story about why coercing people to buy health insurance is more objectionable than coercing them to have children, pay tuition, take out a mortgage, or install solar panels on their house. Personally, I’d be happy to see the US tax code ruled unconstitutional. But I think it’s safe to say that the courts aren’t going to do that. And I have trouble imagining a principled argument for invalidating tax incentives to buy health insurance without invalidating a bunch of other tax credits that have long been regarded as constitutionally sound.
