On this week’s episode of the EconTalk podcast, Russ Roberts invited Robin Hanson on the show to discuss his theory of the technological singularity. In a nutshell, Hanson believes that in the next few decades, humans will develop the technologies necessary to scan and “port” the human brain to computer hardware, creating a world in which you can create a new simulated copy of yourself for the cost of a new computer. He argues, plausibly, that if this were to occur it would have massive effects on the world economy, dramatically increasing economic growth rates.
But the prediction isn’t remotely plausible. There’s no reason to think it will ever be possible to scan the human brain and create a functionally equivalent copy in software. Hanson is confused by the ease with which this sort of thing can be done with digital computers. He fails to grasp that the emulation of one computer by another is only possible because digital computers are the products of human designs, and are therefore inherently easier to emulate than natural systems.
First a quick note on terminology. Hanson talks about “porting” the human brain, but he’s not using the term correctly. Porting is the process of taking software designed for one platform (say Windows) and modifying it to work with another (say Mac OS X). You can only port software you understand in some detail. The word Hanson is looking for is emulation. That’s the process of creating a “virtual machine” running inside another (usually physical) machine. There are, for example, popular video game emulators that allow you to play old console games on your new computer. The word “port” doesn’t make any sense in this context because the human brain isn’t software and he’s not proposing to modify it. What he means is that we’d emulate the human brain on a digital computer.
But that doesn’t really work either. Emulation works because of a peculiar characteristic of digital computers: they were built by a human being based on a top-down specification that explicitly defines which details of their operation are important. The spec says exactly which aspects of the machine must be emulated and which aspects may be safely ignored. This matters because we don’t have anywhere close to enough hardware to model the physical characteristics of digital machines in detail. Rather, emulation involves re-implementing the mathematical model on which the original hardware was based. Because this model is mathematically precise, the original device can be perfectly replicated.
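To make the contrast concrete, here is a minimal sketch of what emulation amounts to: re-implementing a machine’s spec, nothing more. The three-opcode stack machine below is made up for illustration; the point is that the spec alone tells you everything that matters.

```python
# Sketch: emulating a toy machine whose spec IS its instruction set.
# The spec tells us exactly what must be reproduced (opcodes, a stack);
# physical details like voltage levels can be ignored entirely.

def emulate(program):
    """Run a tiny stack-machine program: ("push", n), ("add",), ("mul",)."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]

# Any correct emulator of this spec gives the same answer, bit for bit.
print(emulate([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]))  # 20
```

Because the mathematical model is the machine, two independent emulators of this spec will agree forever; there is no approximation to drift.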
You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation,” to describe the process. Creating a simulation of a natural system inherently means making judgment calls about which aspects of the physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall; they only predict general large-scale trends, and only for a limited period of time. This is different from an emulator, which (if implemented well) can be expected to behave exactly like the system it is emulating, for as long as you care to run it.
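The snowballing of small errors can be illustrated with a toy chaotic system. In this sketch the textbook logistic map stands in for a weather model: two trajectories that start a millionth apart quickly stop resembling each other.

```python
# Sketch: why tiny modeling errors snowball. Two runs of the chaotic
# logistic map x -> r*x*(1-x), started one part in a million apart,
# soon bear no resemblance to each other; the same effect that limits
# long-range weather forecasts.

def max_divergence(x0, y0, r=3.9, steps=60):
    """Track the largest gap between two nearby trajectories."""
    x, y, worst = x0, y0, 0.0
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        worst = max(worst, abs(x - y))
    return worst

# A 0.000001 difference in starting conditions blows up toward order 1.
print(max_divergence(0.5, 0.500001))
```

No amount of extra computing power fixes this; as long as the model or its inputs are even slightly off, long-run predictions degrade.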
Hanson’s fundamental mistake is to treat the brain like a human-designed system we could conceivably reverse-engineer rather than a natural system we can only simulate. We may have relatively good models for the operation of nerves, but these models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate to render long-range forecasting inaccurate.
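For a sense of what “simplified model of a nerve” means in practice, here is a sketch of the leaky integrate-and-fire neuron, a standard textbook idealization. The parameters are arbitrary illustrative values, and the model deliberately omits nearly all of a real neuron’s biology: ion channels, dendritic geometry, neuromodulators, and so on.

```python
# Sketch: a leaky integrate-and-fire neuron, the classic textbook
# simplification. One equation captures "integrate input, leak charge,
# fire on threshold" while ignoring almost everything a real cell does.

def lif_spikes(current, dt=0.001, tau=0.02, v_rest=0.0, v_thresh=1.0, steps=1000):
    """Count spikes under a constant input current (all units arbitrary)."""
    v, spikes = v_rest, 0
    for _ in range(steps):
        v += dt / tau * (-(v - v_rest) + current)  # leaky integration
        if v >= v_thresh:
            spikes += 1
            v = v_rest                             # reset after a spike
    return spikes

print(lif_spikes(1.5))  # a strongly driven cell fires repeatedly
print(lif_spikes(0.5))  # a weakly driven cell never reaches threshold
```

A model this crude is useful for studying network dynamics, but nobody would mistake its output for an exact replica of a particular neuron, which is the standard whole-brain emulation would demand.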
Scientists have been trying to simulate the weather for decades, but the vast improvements in computing power over that period have produced only modest improvements in our ability to predict it. This is because the natural world is much, much more complex than even our most powerful computers. The same is true of our brains. The brain has approximately 100 billion neurons. If each neuron were some kind of simple mathematical construct (in the sense that transistors can be modeled as logic gates) we could imagine computers powerful enough to simulate the brain within a decade or two. But each neuron is itself a complex biological system. I see no reason to think we’ll ever be able to reduce it to a mathematically tractable model. I have no doubt we’ll learn a lot from running computer simulations of neurons in the coming decades. But I see no reason to think these simulations will ever be accurate (or computationally efficient) enough to serve as the building blocks for full-brain emulation.
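A rough back-of-envelope calculation gives a sense of the scale involved even under the charitable “neurons as simple constructs” assumption. Every number below beyond the neuron count is an assumption chosen for illustration, not a measurement:

```python
# Sketch: back-of-envelope compute estimate for a crude brain simulation.
# All figures besides the neuron count are illustrative assumptions.

neurons = 1e11             # ~100 billion neurons, per the text
synapses_per_neuron = 1e4  # commonly cited rough figure (assumption)
updates_per_second = 1e3   # 1 kHz update rate (assumption)
flops_per_update = 10      # arithmetic ops per synapse per update (assumption)

total_flops = neurons * synapses_per_neuron * updates_per_second * flops_per_update
print(f"{total_flops:.0e} FLOP/s")  # prints 1e+19 FLOP/s
```

And even a figure like that assumes each synapse reduces to a handful of arithmetic operations, which is exactly the premise the paragraph above questions: if each neuron needs a detailed biophysical simulation rather than ten operations, the requirement grows by many orders of magnitude.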
I have always found that people who deny the singularity turn out not to know what it is. No single event defines the singularity. It’s merely the point in history where the idea of technological progress becomes meaningless.
This article is about nothing. I read it, shook my head in disbelief and read it again. Imagine reading an article about the invention of steam engines that constantly mentions Phlogiston. It’s ridiculous.
No, the Singularity is not by default about emulating human minds. In fact, the futurists who write about these future scenarios describe a range of singularitarian variants, the majority of which emphasize machine intelligence and – shortly afterwards – the total extinction of humanity.
But even if we are to discuss human mind emulation, we could easily envision variants of mind emulation that are not verbatim copies of a brain – it may be possible to program personality simulations that emulate a specific person very precisely. I might in 2030 have a dozen copies of myself that act as disembodied butlers or agents or lawyers, and that represent me and my estate. I may then proceed to retire in an “out of the way place”, die, and for decades my death may be obscured, since my AI representations may seamlessly continue representing me, going as far as to blog, appear in holographic debates, close contracts or meet with friends or loved ones (and very plausibly have sex as well). At some point in the future a judge may decide no human “can operate without these A.I. representations”, in much the same manner as right now you aren’t allowed to drive a car without having a licence. Soon after, a judge may rule [person + AI estate] legally equivalent to [AI estate] – especially if the AI estate owns a massive corporation and a lengthy inheritance procedure would cause a sharp drop in that corporation’s shares.
A Singularity may happen where humans simply decide to let their flesh die. The interfaced hybrid entity of dozens of AI servitors and emulations may collectively decide the meat “makes no sense to keep around”, and the human might very well agree with that assessment. If all relevant memories, emotions and personality traits were retained (insofar as many of those would be relevant at any rate..) in a flock of beautiful emulations, the flesh would feel acutely sub-adequate if it thinks tens to hundreds of times slower than his or her emulations.
Would such a Singularity – where humans simply discard their outdated flesh in favor of the construct they regard as “self” – be a macabre “bait & switch”? Can’t wait to find out.
I think it is something of an exaggeration to say there is ‘no reason to think it will ever be possible to scan the human brain and create a functionally equivalent copy in software’. We can, in fact, identify several trends that would converge on whole brain emulations if progress in these areas continues. One of these trends is the development of tools that enable us to examine how brains are put together and the way information is being processed. Another trend is the increasing power of our computers. But perhaps the most significant trend results from a combination of the previous two. For instance, neurobiologist Joe Tsien is collaborating with computer engineers to reverse-engineer the brain: “we and other computer engineers are beginning to apply what we have learned about the organization of the brain’s memory system to the design of an entirely new generation of intelligent computers and network-centric systems…Someday intelligent computers and machines equipped with sophisticated sensors and with a logical architecture similar to the categorical, hierarchical organization of memory-coding units in the hippocampus might do more than imitate, and perhaps even exceed our human ability to handle complex cognitive tasks…For me, our discoveries raise many interesting–and unnerving–philosophical possibilities. If all our memories, emotions, knowledge and imagination can be translated into 1s and 0s, who knows what that would mean for who we are and how we will operate in the future. Could it be that 5,000 years from now, we will be able to download our minds into computers, travel to distant worlds and live forever in the network?”.
Another example would be the team led by Ted Berger. They succeeded in reverse-engineering a section of the hippocampus and designed a functionally equivalent microchip. They can remove the original, biological section, integrate the chip, and functionality is restored.
We may well have a long way to go before we succeed in creating an artificial brain complex enough to produce my conscious mind. But it cannot be impossible in principle because my brain produces my consciousness and it does not violate any physical laws in doing so. It is doing nothing supernatural. The objection ‘computers are not like brains’ is a red herring because the computational theory of mind never said brains and computers are the same, it merely states that information processing is the fundamental activity of the brain (and of computers). The gap between the two is wide, but it can be narrowed. As Joe Tsien and other serious scientists are showing, there is no reason why we cannot in principle design new generations of ‘computers’ that more closely resemble the organization and operations of the brain.
Can they get close enough? How close is close enough? I do not know the answer, but to call mind uploading impossible is premature. There is no natural law that says it is impossible, as there is with perpetual motion. Indeed, if it were impossible the human brain would never have evolved in the first place. It is only our own ignorance with regard to how the brain produces the conscious mind that prevents us from doing whole brain emulations that generate my ‘self’.
http://hplusmagazine.com/articles/ai/singularity-101-vernor-vinge
Indeed a simulated brain would come to diverge from the original with time, like a weather simulation. But I’m not convinced that is very significant when thinking about uploading. If you are trying to decide whether to upload or not, I think the interesting question is whether the simulation preserves enough characteristics of you that you are willing to call it “you”. I don’t really care if it decides to have cornflakes or porridge for breakfast three months hence.