On this week’s episode of the EconTalk podcast, Russ Roberts invited Robin Hanson on the show to discuss his theory of the technological singularity. In a nutshell, Hanson believes that in the next few decades, humans will develop the technologies necessary to scan and “port” the human brain to computer hardware, creating a world in which you can create a new simulated copy of yourself for the cost of a new computer. He argues, plausibly, that if this were to occur it would have massive effects on the world economy, dramatically increasing economic growth rates.
But the prediction isn’t remotely plausible. There’s no reason to think it will ever be possible to scan the human brain and create a functionally equivalent copy in software. Hanson is confused by the ease with which this sort of thing can be done with digital computers. He fails to grasp that the emulation of one computer by another is only possible because digital computers are the products of human design, and are therefore inherently easier to emulate than natural systems.
First a quick note on terminology. Hanson talks about “porting” the human brain, but he’s not using the term correctly. Porting is the process of taking software designed for one platform (say Windows) and modifying it to work with another (say Mac OS X). You can only port software you understand in some detail. The word Hanson is looking for is emulation. That’s the process of creating a “virtual machine” running inside another (usually physical) machine. There are, for example, popular video game emulators that allow you to play old console games on your new computer. The word “port” doesn’t make any sense in this context because the human brain isn’t software and he’s not proposing to modify it. What he means is that we’d emulate the human brain on a digital computer.
But that doesn’t really work either. Emulation works because of a peculiar characteristic of digital computers: they were built by a human being based on a top-down specification that explicitly defines which details of their operation are important. The spec says exactly which aspects of the machine must be emulated and which aspects may be safely ignored. This matters because we don’t have anywhere close to enough hardware to model the physical characteristics of digital machines in detail. Rather, emulation involves re-implementing the mathematical model on which the original hardware was based. Because this model is mathematically precise, the original device can be perfectly replicated.
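To make the contrast concrete, here’s a minimal sketch of my own (a toy example, nothing from the podcast): a three-instruction machine whose spec is exact integer math. Any faithful re-implementation of that spec, on any hardware, reproduces its behavior bit for bit.

```python
# A toy machine defined entirely by a discrete, mathematical spec.
# Emulating it means re-implementing this spec, which any computer
# can do exactly; no physical detail of the "original hardware" matters.
def run(program, x=0):
    for op, arg in program:
        if op == "SET":    # load a constant
            x = arg
        elif op == "INC":  # add a constant
            x += arg
        elif op == "DBL":  # double the accumulator
            x *= 2
    return x

# Exact integer arithmetic: every faithful emulator prints 7, forever.
print(run([("SET", 3), ("DBL", 0), ("INC", 1)]))
```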
You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation,” to describe the process. Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall; they only predict general large-scale trends, and only for a limited period of time. This is different from an emulator, which (if implemented well) can be expected to behave exactly like the system it is emulating, for as long as you care to run it.
Hanson’s fundamental mistake is to treat the brain like a human-designed system we could conceivably reverse-engineer rather than a natural system we can only simulate. We may have relatively good models for the operation of nerves, but these models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate to render long-range forecasts inaccurate.
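Here’s a minimal illustration of that snowballing, using the logistic map as a stand-in for any nonlinear system (a toy example, not a nerve model): two runs that start ten decimal places apart soon disagree completely.

```python
# Two runs of the chaotic logistic map, started 1e-10 apart.
r = 3.9                # a parameter value in the chaotic regime
a, b = 0.5, 0.5 + 1e-10

for _ in range(60):
    a = r * a * (1 - a)
    b = r * b * (1 - b)

print(abs(a - b))      # order 1: the tiny initial error has snowballed
```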
Scientists have been trying to simulate the weather for decades, but vast improvements in computing power have produced only modest gains in our ability to predict it. This is because the natural world is much, much more complex than even our most powerful computers. The same is true of our brains. The brain has approximately 100 billion neurons. If each neuron were some kind of simple mathematical construct (in the sense that transistors can be modeled as logic gates) we could imagine computers powerful enough to simulate the brain within a decade or two. But each neuron is itself a complex biological system. I see no reason to think we’ll ever be able to reduce it to a mathematically tractable model. I have no doubt we’ll learn a lot from running computer simulations of neurons in the coming decades. But I see no reason to think these simulations will ever be accurate (or computationally efficient) enough to serve as the building blocks for full-brain emulation.
This is excellent: something I’ve wanted to put into words for some time, but now you have, so I don’t need to.
If the “human brains simulated in silicon” project could ever be completed, I’ll speculate it would have to be based on some sort of continuously-tuned expert-system that gets progressively better at creating simulations of biological systems, and then only long after its methods have progressed beyond the ken of its human designers. (Humans would still have to be involved in curating the fitness measures that keep the system going in the right direction, but presumably that’s an easier task.) That presupposes many giant leaps in AI, of the sort we’d be silly to expect in a century, let alone mere decades. Even then, the expert system would only be capable of growing new brains in virtual vats, not of precisely copying an existing biological brain. In order to do that, I suppose we’re going to have to learn things about the universe that our current physics can’t even contemplate.
It’s my impression that many of these Singularity enthusiasts see a solution in cryogenic brain preservation, until such time as all these problems have been solved. That is folly, much more obvious folly than the porting/emulation/simulation confusion. Within two generations at most, no one is going to care when grandpa’s brain fridge gets unplugged. You cannot build a human organization which, after the death of your children, will be devoted to any end other than its own preservation, assuming it lasts even that long. (Some might object that religions are an exception to this rule, if they haven’t spent much time considering religions.) Even if we stipulate vast efficiencies in biological storage technology, if and when the “port” takes place it won’t be a cheap operation. Who in the future will care to port all these old brains? Sure, one or two might get ported for curiosity’s sake, but the preferences of the future are liable to be very different from our own, and they might be more interested in “porting” Neanderthals or dolphins than early-to-mid-21st-century-CE science fiction buffs.
Excellent post. One of the big issues I’ve had with the Singularitarians over the years is that they tend to have two assumptions:
A) It’s possible to more or less perfectly simulate the human brain, creating a digital copy of it (which more or less assumes that the human mind is some type of program running on a biological substrate, instead of being intrinsically linked to the biological processes that create our minds).
B) That this perfect brain simulation will then somehow develop the ability to improve and refine itself ad infinitum, etc.
Both seemed to be on pretty shaky ground, and your post outlined a major reason why: no simulation is perfect, and a simulation of a human mind that isn’t close to perfect is going to be something else, some other form of intelligence (assuming it forms an intelligence at all).
I have had a feeling for a long time now that there is a principle analogous to Gödel’s answer to Hilbert’s program at work here: that humans are not capable of self-description. But that is just a feeling.
Another problem is that I think those who think like Hanson fail to consider that the body parts involved in consciousness are not limited to the brain. The nervous system throughout the body, feedback systems in various glands, and seemingly even the skin are all involved. It may well be that even if you could emulate the “hardware,” without the surrounding environment you still wouldn’t have a person.
Oh, next you’ll be saying there will never be warp drives. Do you want to make them cry?
Seriously, before talking about putting a human intelligence into a machine, isn’t it first necessary to demonstrate that the machine is capable of the functions required for intelligence? It would be a pretty poor repository if it couldn’t.
Has anyone demonstrated a machine capable of facial recognition that works nearly as well as a human? Has anyone demonstrated a machine with generic problem solving abilities? With the ability to learn how to learn?
Until that’s done, one might as well put one’s brain in a bucket.
This sounds about right to me, although I’m a bit disappointed by the lack of any reference to that episode of Futurama where Fry downloads Lucy Liu into a blank robot to be his girlfriend.
You’ve made what I think is an important error in your analogy with weather forecasting; I don’t know how important it is for brain modeling. In weather modeling the issue is not so much the complexity (the number of things we need to include) as the inherently chaotic nature of the non-linear processes that govern motions in a fluid (the atmosphere, in this case). Our current understanding is that the atmosphere is inherently unpredictable: that is, that the information about what the weather will be in a few weeks does not currently exist in nature. On the other hand, our weather models are pretty good simulators of the atmosphere, and thus they have this same property. So, for example, the way the weather service arrives at a conclusion that there’s a 30% chance of snow in Peoria on Monday is that they’ve run their big model ten times with ten slightly different initial conditions (all consistent with observational uncertainty), and found that in three of them, it snows over Peoria on Monday.

There are a couple of important elements of this story that might be relevant for brain simulation. First, forecasting gets worse over time: we can predict what the system will do tomorrow very well, but have less skill for the day after and essentially no skill in detail for next month. Second, longer-term forecasting, essentially the forecasting of climate variations, depends on the existence of slower, predictable elements of the climate system. We can make seasonal forecasts with some skill, for example a forecast that the next three months are more likely than not to be warmer than normal over northeastern Canada, because ocean temperatures are more persistent than atmospheric temperatures, and certain distributions of ocean temperatures differing from the long-term mean (e.g. El Niño) have statistical associations with particular climate anomalies over other parts of the world. Another example is climate change: increasing CO2 causes overall warming in climate models in patterns that agree well with what we’ve observed over the past 30 years.
So to apply this to the brain: given that nerve-nerve interactions have some non-linearity to them (for example, any kind of switching behavior), it’s unlikely that we’d be able to predict what any given brain is likely to do in any situation where there’s any doubt (weather forecasting is intrinsically limited). But that doesn’t mean we couldn’t simulate a brain whose behavior appears brain-like (it’s possible to make a model of the atmosphere whose climate agrees well with the real atmosphere).
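To make the ensemble idea above concrete, here’s a minimal sketch (a toy: the Lorenz-63 equations stand in for a weather model, the integration is deliberately crude, and the “snow” event is an arbitrary threshold):

```python
import random

# Toy "weather model": the Lorenz-63 system, integrated with crude
# forward-Euler steps. Good enough to illustrate ensemble forecasting.
def lorenz_run(x, y, z, dt=0.01, steps=2000, s=10.0, r=28.0, b=8.0 / 3.0):
    for _ in range(steps):
        x, y, z = (x + dt * s * (y - x),
                   y + dt * (x * (r - z) - y),
                   z + dt * (x * y - b * z))
    return x

members = 10
hits = 0
for _ in range(members):
    # Perturb the initial condition within "observational uncertainty."
    x_final = lorenz_run(1.0 + random.gauss(0, 1e-3), 1.0, 1.0)
    if x_final > 0:  # an arbitrary stand-in for "it snows in Peoria"
        hits += 1

print(f"forecast probability: {hits / members:.0%}")  # e.g. "30%"
```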
Hey, really enjoyed the post. You’re a good writer.
Bethany
I want to address your arguments only, because there’s a fundamental error here.
“… simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall …”
This raises the question of how good an approximation needs to be to get an answer that is indistinguishable from the correct answer: how small an epsilon is necessary for a simulation to be good enough? True, we perhaps can never predict where each raindrop will fall, but this doesn’t preclude our ability to simulate a storm well enough that it’s indistinguishable from the storm. Could you even measure where each raindrop falls to do the check? More realistically, we’d only want to simulate a storm to get the right distribution of raindrops anyway.
You say: “There’s no reason to think it will ever be possible to scan the human brain and create a functionally equivalent copy in software.” I do not think you mean “functionally equivalent,” because your argument – even if correct – wouldn’t support that point. You mean something like “functionally isomorphic.” I would think that it’s impossible to argue that it’s impossible to implement brain function in software that is functionally indistinguishable from that of a living brain: for that, you can’t hide the brain’s specialness in the tiny epsilon of error in the simulation without also proving that every epsilon of difference is significant. Moreover, “functionally equivalent” does not require simulation. Indeed, why is equivalence even interesting? Wouldn’t we prefer “functionally superior”? Computers improve on the brain in game playing, for example: Othello, chess, Go, Jeopardy.
Even you admit that errors are often negligible: “… emulation involves re-implementing the mathematical model on which the original hardware was based. Because this model is mathematically precise, the original device can be perfectly replicated.” But this is not really true. I can emulate a 1980s arcade processor well enough to run all its software, but I still haven’t captured the analog characteristics of the machine: What happens when this diode fails or when the system gets too hot? Have I truly emulated timing so precisely that the systems behave identically? In chip design nowadays, enormous computing effort is put into simulating a designed chip before it is ever fabricated; but that simulation is never exact. It simply cannot be. The fact that we cannot precisely and completely simulate an x86 CPU before it’s fabricated doesn’t mean that we can’t adequately simulate it, for most purposes. We can indeed produce a functionally equivalent model, even if we haven’t emulated it well enough to replicate all its failure modes.
You write: “… models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate to render long-range forecasts inaccurate.”
You have no way to make this claim. Not all errors “snowball.” Small errors that snowball are bad; they indicate that the simulation is poor. But small errors are often so small that they don’t matter. We do wonderfully accurate simulations of physical devices, even though these calculations are fraught with error all the way down; the basic numerics, after all, is done in floating point, which has inherent error. Whether this error matters or not, and how much it matters, is a fundamental question asked by those who do computational science and who study numerical methods. But to claim that errors are always essential, that they always “snowball,” is just wrong.
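A sketch of the counterpoint (again a toy, not a neuron model): iterate a contraction mapping and the perturbation shrinks instead of snowballing.

```python
import math

# Iterate x -> cos(x), which contracts toward its fixed point (~0.739),
# from two starting values 1e-6 apart.
a, b = 0.9, 0.9 + 1e-6
for _ in range(50):
    a, b = math.cos(a), math.cos(b)

print(abs(a - b))  # ~1e-15 or less: the error is damped away, not amplified
```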
“The brain has approximately 100 billion neurons. If each neuron were some kind of simple mathematical construct (in the sense that transistors can be modeled as logic gates) we could imagine computers powerful enough to simulate the brain within a decade or two. But each neuron is itself a complex biological system. I see no reason to think we’ll ever be able to reduce it to a mathematically tractable model.”
The limitations of your imagination are astonishing. Intel’s upcoming Sandy Bridge chips will have 624 million gates per chip. But, more than that, a neuron’s switching speed is on the order of 1-10 ms, i.e. 100 Hz to 1 kHz. These chips operate at 3 GHz. That’s 3 to 30 million times faster! Add to that the fact that the largest computers now in existence – using millions of CPUs – operate in the petaflop range – a thousand trillion operations per second – and that exaflop scales (one quintillion operations per second) are expected before 2020…. How can you possibly look at these numbers and feel so confident in your doubts?
Moore’s law says that the number of transistors per chip doubles every two years. That’s held for 40 years. And that’s exponential growth. Draw yourself a graph of that curve.
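The back-of-the-envelope arithmetic, with the inputs stated as assumptions rather than measurements:

```python
# Moore's law: one doubling every two years, sustained for 40 years.
doublings = 40 / 2
print(2 ** doublings)           # ~1e6: a million-fold growth

# Crude speed ratio, assuming ~1 kHz neuron "switching" vs a 3 GHz clock.
neuron_rate = 1e3
chip_rate = 3e9
print(chip_rate / neuron_rate)  # ~3e6: millions of times faster per element
```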
@Dan: Just solve these bad boys and you can emulate instead of simulate:
http://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equations
The emulate/simulate distinction could actually cut in favor of Hanson. What if, after some passage of time, the simulated me is predictably more like the me that created the simulation than the me that has evolved over that time? Or less? Some of us might potentially highly value that either way.
This would be a good argument if the purpose of computer brain simulations were to predict what the simulated human would do at a given time, similar to how the purpose of weather forecasting simulations is to predict what the weather will do at a given time.
But the real benefit to simulating a human brain would be having something running on silicon that is functionally indistinguishable from human intelligence in general (e.g. that can pass the Turing test), whether or not distinguishable from the particular human brain used. If the only goal of weather simulation were to create a weather pattern that looked plausible to its watchers, well, we’re already there.
Let me add one tiny example regarding error in simulations. There’s always numerical error in doing any numerical simulation of a physical system described by a Hamiltonian. But this simulation can always be done in a way so that, provably, the simulation is in fact an exact simulation of a slightly perturbed Hamiltonian (the “shadow Hamiltonian”). I.e. we can exactly simulate a system that’s slightly different than the one specified. Now of course there’s error in that perturbation. But there’s also error in measurement of the initial Hamiltonian anyway. And there’s error in any measurement of the two systems. What becomes important is only whether the error is large enough to matter … and in simulation, that generally brings us into the realm of Statistical Mechanics. In practical terms, this often means that the quantity of data that you’d need to detect meaningful differences in behavior – which are now distributions, statistical quantities – is enormous.
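A minimal demonstration of the shadow-Hamiltonian point (my toy example: a harmonic oscillator, H = (p^2 + q^2)/2, not anything brain-related). The symplectic integrator’s error stays bounded forever because it exactly tracks a slightly perturbed Hamiltonian; naive explicit Euler, by contrast, drifts without bound:

```python
# Symplectic Euler: kick with the current position, drift with the
# updated momentum. It exactly conserves a "shadow" energy near H.
def symplectic_euler(q, p, dt=0.1, steps=10000):
    for _ in range(steps):
        p -= dt * q
        q += dt * p
    return 0.5 * (q * q + p * p)

# Explicit Euler: both updates use the old values; energy grows by
# a factor (1 + dt^2) every step, an error that truly snowballs.
def explicit_euler(q, p, dt=0.1, steps=10000):
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return 0.5 * (q * q + p * p)

print(symplectic_euler(1.0, 0.0))  # stays near 0.5, however long we run
print(explicit_euler(1.0, 0.0))    # astronomically large after 10,000 steps
```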
It seems so wrongheaded to try to find the specialness of human consciousness in the miniscule errors of biological devices.
FYI Timothy, you have a typo in this parapgraph:
You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation” to describe the process. Creating a simulation of a natural system inherently MEANS MEANS making judgment calls about which aspects of a physical system are the most important.
FYI Verity, you have a typo in this sentence: FYI Timothy, you have a typo in this paraPgraph:
Bwahahahahahaha! : ) Is that egg on my face?
Popped over from Sullivan’s blog – interesting read, but this post sounds as if it comes backed with a great deal of faith in the uniqueness of the human brain as compared to the wiring of the rest of the physical world. It also seems polluted by uncritical thinking of another sort, and words like ‘design’ set off all kinds of buzzers and alarms.
The distinction the author makes between ’emulation’ and ‘porting’, as one example, seems to hinge on the idea that there’s something in the hardware of the human brain that is not rooted in the physical realm and cannot be recreated. What might that be?
This statement, conveniently, comes early in the post: “There’s no reason to think it will ever be possible…”
“Ever” is a long time. Mightn’t you have hedged your bets a bit, Mr. Lee?
Finally, I agree with one or two of the other commenters that the metaphorical distinction of emulation is somewhat unhelpful. Does it matter that our weather models only get closer to predicting reality if they save lives and make billions of lives better? Does it matter if we can’t truly ‘upload’ a mind to a computer if we can upload some portion of that mind so that it can live on and contribute to a better world beyond the duration of its owner’s life? I’m sure these are complex issues (and lots of ethical questions are suddenly springing to mind), but….ever? Come, now.
@Troy: and words like ‘design’ set off all kinds of buzzers and alarms.
To be fair though, that’s a reference to the larger theme of Tim’s blog — “bottom-up” evolution versus “top-down” design. The distinction is one he’s spent a lot of time discussing.
Does it matter if we can’t truly ‘upload’ a mind to a computer if we can upload some portion of that mind so that it can live on and contribute to a better world beyond the duration of its owner’s life?
In the context of Tim’s response to Hanson it does matter. Hanson’s hypothesis, as summarized by Tim, was “a world in which you can create a new simulated copy of yourself for the cost of a new computer” emerging in the next few decades. Hanson posited an exact-copy scenario to begin with, so it does make sense to discuss approximations or models versus true emulations. The questions you’re raising here are certainly interesting, but you seem to imply that they point out some shortcoming in Tim’s response, which I don’t believe is fair.
More reasons that the blog post is a FAIL:
A) Even if it were a PERFECT simulation, the artificial brain would diverge from the original the instant it received different perceptual input.
B) Speaking of “snowballing error”: The brain has evolved to deal with INCREDIBLY noisy input; nobody has ever seen a dog the exact same way twice, even if you’re looking at a photograph of a dog… (The position/lighting of the photograph relative to your eyeballs will differ from moment to moment, photon to photon.) Yet the brain easily classifies dogs as dogs… The brain’s whole job is to transform noisy input into stable representations for further processing.
C) Repeat of previous point: The goal of making an artificial brain would be to approximate and improve the functionality of the “real” thing, not duplicate it so exactly that it could be trusted to make the exact same decisions as the original. But even if it were, or the artificial brain could*: What would that say about free will?
fpg
*It couldn’t anyway, for the same reason we can’t and never will be able to perfectly predict the weather…
@Rhayader: The design point still troubles me a bit, though, because Mr. Lee seems to be trading one dichotomy for another. Humans are part of the same system as the systems they design, a part of the same complexity. To think that we will never be able to achieve the same complexity as that of the natural world under our own power is to imply a break between ourselves and the natural world that doesn’t exist.
The natural world is us.
We are it.
The universe is bigger and stranger and more complicated than we think,
but we figure it out, bit by bit.
My criticism of Mr. Lee’s post isn’t that he disagrees with Hanson’s (borrowed, I’m assuming) hypothesis that we will be able to simulate the human brain within a few decades. Mr. Lee dismisses it as a possibility in the near future, the distant future, or ever. He has either made what I view as an indefensible assertion (x will never happen) because of a deep-seated conviction, or simply used careless language. I suspect the former. Perhaps it is a good conviction. I don’t know.
Hi Tim,
“The word Hanson is looking for is emulation.”
Thus the “Whole Brain Emulation Roadmap” by neuroscientist Anders Sandberg at Oxford (summarizing the results of a conference of neuroscience folk working in brain emulation):
http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf
If you want to engage with the clearest presentation and arguments on this topic I would go to that report, which offers much more detail and addresses more issues than Robin’s time-limited podcast possibly could.
> First a quick note on terminology. Hanson talks about “porting” the human brain, but he’s not using the term correctly. Porting is the process of taking software designed for one platform (say Windows) and modifying it to work with another (say Mac OS X). You can only port software you understand in some detail. The word Hanson is looking for is emulation. That’s the process of creating a “virtual machine” running inside another (usually physical) machine. There are, for example, popular video game emulators that allow you to play old console games on your new computer. The word “port” doesn’t make any sense in this context because the human brain isn’t software and he’s not proposing to modify it. What he means is that we’d emulate the human brain on a digital computer.
No, porting is a perfectly fine word. Porting is a superset of emulation (often porting is done by an emulator), and embraces other techniques to run the same algorithm on another computer. For example, one could rewrite much of the source; and this is in fact Hans Moravec’s proposal using nanotechnology (have a nanoprobe measure a neuron or glia’s activity until its thresholds and other attributes have been established, and then replace it with a little digital nanobot-neuron). This would create an upload without emulating neurons.
As for your apparent philosophical argument – I feel like I’m back in the 1980s. ‘Chaos makes everything magically delicious!’
The question rests on whether or not the detailed analog state of the brain is important, say for the stability of the process. We just don’t know. Superficially, it kind of looks like a digital process that could be subject to emulation, but there’s a lot of guesswork there. If the detailed analog state is important, we might never be able to emulate a brain in something smaller than a brain.
A stronger argument against emulation is that a disembodied brain won’t function without huge modifications or a huge environmental simulation. Those modifications will require a lot more understanding than we have now, not just a good neuron model and a way to capture the whole-brain-state.
And things like consciousness strongly suggest there’s weird physics going on.
I think Tim’s post conflates a number of different issues. First, there is non-linearity/chaotic behavior. Weather is difficult to predict for a number of reasons, but foremost of those is that very minute changes in initial conditions lead to certain kinds of large differences in the final answer.
Note however that the differences are constrained, though. A butterfly beating its wings in Montana might change the track of a storm in China, but it won’t change the climate.
So if the brain exhibits this type of behavior, it’s not necessarily a big deal. If it is that sensitive to initial conditions, that means that it’s happening all the time anyway. For example, if we talk on a cell phone, that might cause some minuscule heating that changes initial conditions ever so slightly. Or some neuron has a mutation and dies.
The effect of this on a brain simulation would be that we couldn’t upload a brain and predict exactly what thought that person will have 10 years from now. But we could simulate that person for most intents and purposes (assuming adequate computational power).
The second issue is whether or not we will be able to model a brain to sufficient detail. It seems that this will be possible some day, though this is outside of my area.
Another issue is computational tractability. It may be possible to model a brain (and neurons) in sufficient detail, but the amount of computation required might be prohibitive. Note that the amount of computation required will depend on the model and on the overall behavior of the resulting dynamic system.
We have no reason to think that we can simulate anything of a high enough degree of complexity in real time, let alone faster than it takes the actual physical event to take place.
Thus, you can predict where a cannonball will land more quickly than it takes the cannonball to land there, only because you leave out an enormous amount of data–data that, in the case of a cannonball landing, doesn’t make much of a difference. But we don’t know what to leave out of a brain simulation.
Even assuming we could get a brain simulation up and running–there’s no reason why a good-enough simulation isn’t possible in theory–it may not be possible to make it run faster than a physical brain just by running it on “faster hardware.”
It is possible that the physical universe is already running as fast as possible–without leaving out factors, we have good reason to suspect that we can’t simulate any event faster than it takes the event to happen.
I tend to find TJ’s reply persuasive. Ever is a long time. In the context of “ever” 100 billion neurons does not seem like much of an obstacle at all. Taken with TJ’s reply, I am not persuaded that the barrier to achieving brain emulation (or simulation) is insurmountable.
Surely we can agree that whole brain simulation is possible; if it is possible, then we’re just talking about what level of errors in the simulation prevent it from being “human,” or where consciousness lies.
John Searle argues that, while you could build an artificial brain, you can’t simply “simulate” consciousness, on the basis that consciousness is a physical process, not merely an information-processing activity. His analogy is that, while you can simulate fire, nothing actually burns. Building an artificial brain would be like building an artificial heart–a physical machine that does things–and not like running a simulation of a heart on a computer.
People who assume you could easily simulate a brain, therefore, are simply assuming that consciousness is an information-processing activity that is indifferent to what “hardware” it runs on.
Great post, I enjoyed reading.
I’m no expert in neuroscience but I am in computers, and the other day I was having a discussion about this topic with computer-minded friend. What we were debating was this:
There are, in a way, two parts to the brain: its physical makeup and a pattern of energy. If you were to stop time, remove the energy pattern, then start it again, would it “reboot”? Would it start back up and be pretty much what it was before? Or would consciousness or the mind be lost?
I think this is relevant to this post because even if we can model the physical brain it may be useless without the pattern of energy flowing through it at any given time.
(FWIW I think the brain will reboot and that once we can scan and emulate the brain we’ll have machine consciousness, but again I am no expert so it’s little more than a feeling. Or hope, even.)
> John Searle argues that, while you could build an artificial brain, you can’t simply
> “simulate” consciousness, on the basis that consciousness is a physical process, not
> merely an information-processing activity.
We don’t really know that. Maybe the simulation _is_ conscious.
> 100 billion neurons does not seem like much of an obstacle at all.
Yeah, but 10^27 or so atoms does seem like an obstacle. The necessary level of approximation is important. What if there’s something subatomic going on?
Although I think emulating brain activity would be difficult, you make many assertions which are just not true.
You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model.
This makes no sense – it’s not an argument.
It’s not digital vs. analog, it’s not natural vs. man-made. It’s all about complexity and measurement.
Brains are phenomenally complex. We know a fair bit about what cortical columns do, but next to nothing about things like “attention.”
I have no doubt we’ll learn a lot from running computer simulations of neurons in the coming decades. But I see no reason to think these simulations will ever be accurate (or computationally efficient) enough to serve as the building blocks for full-brain emulation.
Well, that’s an opinion. I mostly agree: I tend to doubt that emulation -> simulation will take place such that we can “scan” a person and then have something run that is (or is indistinguishable from) that person. But I bet that we will be able to simulate so that the simulation appears to be *a* person.
Let us continue to create labor-saving devices, which promise to continue contributing to our prosperity. But why try to make devices that copy the exact ensemble of (valuable) functions found in a typical human brain (not to mention the typical suite of *defects*–*bugs*)? For the most part, more specialized devices are called for, and some of the functions they perform will be done *better* than any human brain could do them. Other functions may be very difficult to simulate; they may resist our simulative efforts, and in the end those efforts, if we made them, might turn out not to have been worth their cost.
But suppose we succeeded on all fronts: what would be the point of putting these functions–all performed at the typical human level, but no better–together in a single device: a human “sim”?
Prediction: the whole emulation thing is a red herring. We’ll slide into it via gradual augmentation.
There will never be an uploading, there will just be gradual replacement of parts over the next millennium or so. If we don’t like the incremental effect, say because the artificial part loses some of the important aspect of the physical part, we’ll back off until we better understand it.
@john:
January 14, 2011 at 13:14
Building an artificial brain would be like building an artificial heart–a physical machine that does things–and not like running a simulation of a heart on a computer. People who assume you could easily simulate a brain, therefore, are simply assuming that consciousness is an information-processing activity that is indifferent to what “hardware” it runs on.
You miss the role that perspective plays here. The simulation would be evaluated from the “outside” – i.e., based on how it is perceived by external actors – which is a matter of information-processing activity. Thus your point boils down to the difference between a simulation actually having consciousness and just being indistinguishable from things that (we think?) actually have consciousness. I suggest that it’s a philosophical point in the worst sense of that word, in that it’s irrelevant to the issue at hand. In fact, it’s the equivalent of my wondering if you are actually conscious, or just appear to be.
@Finch:
January 14, 2011 at 13:52
Yeah, but 10^27 or so atoms does seem like an obstacle. The necessary level of approximation is important. What if there’s something subatomic going on?
Remember, the simulator doesn’t have to be built out of Legos – it gets to take advantage of the same physics and scale as our brain. So what matters is the “material meta-efficiency” (I just made that term up) of the simulator – how much “material” (#particles / mass / volume) is required to simulate a unit of “material” in the original? Note how Moore’s law factors in here – it is exactly this exponential growth in processing power per unit of “material” that makes the singularity predictable, and its continuation implies that eventually our processing prowess would be taking advantage of whatever “subatomic” or other mechanisms are available to keep increasing this efficiency/density.
Great post. I agree that the singularians’ case isn’t nearly as rosy as they’d like to believe. I agree with others that weather may not be an ideal analogy, though, for a couple of reasons:
– First, measurement. It’s inconceivable that a weather system’s constituent particles could be measured in enough detail to determine their initial state. It’s at least somewhat conceivable that a full characterization of a simulated brain’s initial state could be captured (though almost certainly not from a living person).
– Second, the chaos theory aspects of your argument may or may not be relevant. We just don’t know. It appears that behavior is neuronally overdetermined in some ways: you’d consider me the same person tomorrow regardless of whether I kill off some brain cells with booze or heading a soccer ball this evening. And I don’t think anyone would expect a simulation to behave *exactly* like its source. They’d have subtly different stimuli, even if just standing on different sides of the lab. There’s some room for fudging while still plausibly claiming to have copied an individual and the mental activity associated with it. Our standards are somewhat different for meteorology.
But these are quibbles.
A neuron is complicated, but perhaps the results of a neuron are deterministic. Neurons may be like computer programs, which are also complicated but whose results are predictable and repeatable. If neurons act like programs then accurate emulations might still be possible despite the fact that, as you say, they are complicated. One clue as to whether neurons are deterministic may be found by asking what the effects would be if neuron A and neuron B, which are neighbours serving the same part of the brain, were to swap positions. If the swap has no effect on the overall operation of the brain then their complexity may not be an insurmountable problem.
I agree with the overall thrust of the argument, but the weather metaphor seems to go on the wrong track. The goal of simulating the human brain isn’t to be able to predict how an actual brain would behave, it’s to substitute for an actual human brain. As such, a little bit of error around the edges is okay as long as it more or less behaves the way human brains do, exhibiting intelligence and such. Weather simulations for the most part accomplish this level of accuracy. I would suspect that weather-predicting software could pass a “Turing test” where actual historical weather data was compared to simulated weather data.
But yes, simulating the brain would be quite a bit trickier.
Here is a myth that could become a conjecture.
Imagine an augmented biological brain. This starts out by admitting that, like human minds in human brains which depend on a reptile limbic system, the augmenting machine treats biology as I/O – somatic means for interaction. This augmentarian project learns over time to upload *behavioral observations* of me. When this present brain of mine reaches 99 (or some optimum, empirical maturity) my cloned phenotype runs in learning mode. I am not a blank slate but instead comprise heterophenomenological aggregations of virtual machines (this may be the mythiest part). While I tan on the beaches of Venus, by radio link my mature brain finishes the necessary training; my very young brain is in the pipeline. Voilà! Singularity win. Feel free to snark at will.
It surprises me that a highly educated writer in the technology sphere wouldn’t grant Hanson’s thesis at least some scraps of plausibility. Outright denial based upon a rather weak weather forecasting analogy and an unclear distinction between emulation and simulation seems to be one of the weakest possible means to reject Hanson’s claims.
I think you’re definitely right that “replication of the human brain in a digital form that we would feel comfortable uploading ourselves to within a few decades” is something that is not going to happen.
And I think you are also right that true replication may never happen — or, happen so far off that we don’t really have any way to make theories about it.
But I think at some point we might get something, not necessarily entirely digital — it might be that to house a consciousness like our own, that we will need something organic, something grown in a similar fashion to our real meat brains — and of course it will be a process. The first models won’t be very good, and there will be problems that probably result in people’s “death.” I agree with Hanson that at some point something like this will be possible. But I agree with you on the extreme difficulties, that in my opinion, push the probable date of this occurring out so far that it’s hard to make predictions about.
Honestly, I think we’ll create digital artificial life (whatever that means, but it won’t be an android that talks to us in English and has “feelings”, sorry every sci-fi ever…) before we recreate the product of 4 billion years of evolution in a digital format.
Jeremy refutes a defender of Searle’s position: “Thus your point boils down to the difference between a simulation actually having consciousness and just being indistinguishable from things that (we think?) actually have consciousness. I suggest that it’s a philosophical point in the worst sense of that word, in that it’s irrelevant to the issue at hand. In fact, it’s the equivalent of my wondering if you are actually conscious, or just appear to be.”
There’s a part of this that has always nagged me. If we were able to scan my brain and upload it into some simulator – or perhaps even map it onto another brain – then that brain should have all the memories and thoughts and patterns of my mind. The simulacrum would think it was me: it merely woke up in some other place or some other body. But of course, I’m still here. So in a sense – probably a Searle-ean sense – “I” could never really be uploaded into a simulator or other brain. Whoever it is who wakes up in that simulacrum could be an excellent copy of me, but there’s still no continuity of my own consciousness, in some sense. The *copy* would believe that there’s such continuity, but I would know better …
Does this say that there’s something essentially different, something missing from the copy? I don’t know. I kind of think not. Maybe it’s the same as me going to sleep and waking up the next morning “the same person.” But I can see where there’s maybe an issue for philosophers to chew on here. (Though you’re in real trouble if you want to talk about spooky stuff like life after death. I mean, if everything about you can be read from the brain, then that’s almost absolute proof that everything about you that matters will die with your brain.)
`I think you’re definitely right that “replication of the human brain in a digital form that we would feel comfortable uploading ourselves to within a few decades” is something that is not going to happen.’
Or ever. Say I had a machine to replicate your brain and upload it into a perfect humanoid robot. Say I do this. I introduce you to your clone. You’re satisfied that it’s a perfect copy of you. May I now kill the original you? If you answer no, then you really don’t believe that “you” have been uploaded into the simulator. It’s merely a copy of you at some point in time.
@Jeremy
You’re assuming that I (arguing as Searle) agree that a simulation of consciousness that runs on a computer is in fact possible–and that Searle’s position is that such a simulation, if it truly is conscious, would be a zombie. (I agree that arguing about zombies is useless.)
But I think Searle’s point is that there cannot be a “simulation” of a brain–only an artificial brain. If it comes to pass that we invent AI, and we get there by trying to emulate a brain, it’s not the emulation of the brain that is conscious–it is necessarily the physical computer. I take Searle’s point to be hyper-materialist: that consciousness is something that happens to matter.
Where I part ways with Searle is what the implications are of this–he thinks, for example, that the Chinese room is not conscious, but I think that it must be–a real Chinese room would be every bit as structurally complex as a human brain.
But I agree with him that the Chinese room couldn’t be said to be an “emulation” of a brain, any more than an artificial heart is an emulation of a heart. Rather, it is a physical object that does some of the same things. In general, talk of “emulation” and “porting” leads to the thought that the brain must be software, which I think is wrong, and oddly dualist. (What if I wrote an AI program and compiled it, but never ran it–have I created consciousness? Obviously not: I have simply created a recipe by which an individual physical computer can become conscious.)
TJ Parker: Continuity of consciousness is probably not something we have in the first place. People sleep, people black out, people go into comas; consciousness is fraught with discontinuities of varying sizes; they’re just the sort of discontinuities that we have had plenty of time to get used to.
Which isn’t to say there isn’t a problem here. Certainly if I copied my brain into a computer and good old fashioned meat-me continued to exist, meat-me would get only fairly indirect satisfaction out of the “immortality” of computer-me. But that doesn’t mean that pre-procedure-me has any particular reason to favor post-procedure-meat-me to computer-me.
TJparker:
“Or ever. Say I had a machine to replicate your brain and upload it into a perfect humanoid robot. Say I do this. I introduce you to your clone. You’re satisfied that it’s a perfect copy of you. May I now kill the original you? If you answer no, then you really don’t believe that “you” have been uploaded into the simulator. It’s merely a copy of you at some point in time.”
That’s exactly one of the things Hanson talks about happening if this tech became real. And given we seem to recognize that “software only people” are possible within human brains (split personality disorder), we would indeed need to address how that would be treated. I’d say that both the copy and the original are distinct individuals with rights, since a perfect copy of a person with rights would necessarily include those rights. He’s got a post about it somewhere on his site.
I think that the word that is missing from this article, throughout, is “yet.” As in “you can’t emulate a natural system because natural systems do not have designers…” should read, “you can’t emulate a natural system YET….”
This much is certain — whatever humanity has dreamed of has eventually been achieved… or will be… provided it does not violate the laws of physics etc. If you argue this from that standpoint — that it cannot be achieved because it violates natural laws — then sure, I would consider it impossible. As it is, you have simply said that it is going to be really difficult and therefore unlikely.
I would also say that the rest is a philosophical argument… what is identity, what is the “I” ? Those that argue in comments that their identity is not maintained through a copy even though memory patterns etc. are perfectly maintained do not acknowledge that their physical selves are constantly being morphed and replenished with new raw materials. What remains is the ‘pattern’ of the self.
I for one would have no problem killing off my original if my copy was perfect.
In playing down the possibility of the Singularity Timothy is making a common mistake, so it is worth repeating why, in evolutionary terms, the belief that humans will be able to maintain intelligence superiority for an extended time in the face of extremely rapid growth in AI is naïve. Timothy’s core mistake is taking a narrow view of the issue that focuses on one proposed means of achieving the Singularity — he is like a horse with its blinders on, and does not see the bigger picture.
There is no doubt that self-replicating machines able to generate intelligent cognition under normal surface planetary conditions can exist; there are about 7 billion such units currently in operation (this differs from fusion power, which requires such extreme temperatures and pressures that it may never be practical to produce commercially here on earth). It was not especially hard to develop these systems because they were not purposefully designed and constructed; they arose without even trying, via mindless bioevolution. Because bioevolution is not cognitive and intentional, the process took a very long time and produced sloppy and limited results. There is no reason to believe that bioevolution came anywhere close to producing the ultimate thinking intelligence — most of the devices are hard pressed to remember telephone numbers, and hardly any actually understand relativity theory.
The one thing that would completely bar the ability to produce artificial minds at least as capable as those of humans would be if the latter are supernatural — much of the opposition to the Singularity comes from the religious community, which still believes in the ghost in the machine, which is no more plausible than the ghost in the haunted house. At this time the highest-level intelligence on the planet is being generated by meat – brains are edible – and making the reasonable assumption that meat-based information processing machines operate within the laws of physics, it is probable that non-meat-based machines can generate a similar level of conscious intelligence. Even Penrose and Hameroff acknowledge that if their radical belief that consciousness is the product of quantum effects is correct, then it should be possible to construct devices that do the same thing.
Because technoevolution is directed by intelligence with a purpose, and because scitech information is well stored and builds upon itself, it happens millions of times faster than bioevolution. So the basic ability to process information is doubling every couple of years, while understanding of how brains work is gaining rapidly as the power of the computers used to investigate brains soars. And there are no natural stops limiting the ultimate power of artificial mind machines. The notions that the sophistication and performance of artificial intelligence-generating machines will perpetually remain below the pathetic human brain that has been flat-lined since the Pleistocene, and that thinking humans cannot do what dumb-ass evolution managed to do, are, well, rather dumb. Short of a general collapse of technocivilization the real questions are when it will be done, and how.
I would not be at all surprised if Timothy is correct that it is not possible to use digital computers to produce conscious intelligence by simulating brain function. So what? Timothy’s argument parallels how someone in the mid-1800s could have downplayed the practicality of powered human flight in the next century because steam engines would never be able to generate enough power for their weight. Building flying machines in the late 1800s was very difficult because the base technologies, especially power generation and aerodynamic control systems, were not on hand. The flight project appeared so intractable that at the turn of the century only one poorly funded and ineptly directed government flight program was underway in the entire world, and the only effective project was run by a couple of small-town bicycle makers. My grandmother was born in a frontier town before the Wrights flew, and as a child would have taken wild speculations of flying across continents and oceans in comfort as silly. She lived to fly to Europe. Likewise, the structure of DNA was not known after WWII; now it is the subject of high school experiments.
Currently the base technologies for developing conscious minds are so poorly developed that the problem seems intractable to many. What is surprising, and not a little disturbing, is how folks who should know better fall for this illusion. In a few decades information processors more powerful than brains will be cheap commercial devices. Brain function will become well understood. At some point the general technological base will become so sophisticated that it should not be all that hard to produce devices capable of generating consciousness as well as, and then better than, the meat between our ears. My guess is it will be massively parallel, neural-networking, analog-digital, with self-evolving systems very different from what I am processing this text on. Or maybe not. Does not matter. Out of a global population of nearly 10 billion it is hardly likely that someone somewhere won’t do it; for the science and engineering beauty of the idea, for the commercial gain, and to save their own lives in the hope of uploading their minds into the immortal machines.
To be blunt, we don’t know enough in these still primitive times to discredit the singularity, or its occurrence in the near future. It is not an absolute certainty, but the cyberrevolution is a strong hypothesis. Those who wave off the possibility are setting themselves up to be suckers. Evolution is all about enormous revolutions that radically transform the paradigm. They happen all the time. Just a fraction of a percent of earth history ago there was no high-level intelligence on the planet. You guys are not actually so gullible as to think that humans are really going to remain top dog intelligence-wise as the capability of unnatural information processors grows by leaps and bounds, are you?
Gregory, thanks for refuting an argument I didn’t make.
Brian Moore: Split personalities are not at all a generally recognized thing. Multiple personality disorder is a fairly controversial diagnosis. And even if it is an actual thing, that doesn’t mean that split personalities are a “software only” entity. Software makes sense within the framework of von Neumann machines: you have instructions loaded into RAM and the computer follows those instructions. Brains don’t work that way, and as such don’t have a clean-cut hardware/software line.
I responded here.
The whole idea of “The Singularity” is pure nonsense and evidence against it is gathering steadily.
As I see it, for brain simulation to be impossible forever, one of two things would have to be true:
1) Brains would have to be too complicated for us to ever understand. This seems to be based on an almost-religious reverence for the brain. Yeah, brains are complex, but they are not infinitely complex. We’ve figured out a lot in the past couple thousand years that once seemed ineffable. Besides, it’s impossible to prove that we can’t understand something.
2) We would have to reach an impassable limit of computer capacity. This limit can’t be low enough to prevent brains from existing, since we already have brains. Granted, we use some interesting hardware, but our existence proves that systems at least as complicated as our brains can be created.
Now, simulating a specific brain is more complicated than simulating a brain in general. But your argument that simulating is impossible is based on the idea that we can never fully understand and model the brain, which I think is a flawed foundation.
That is how I see it, at least.
This is all true and it’s an admirable critique but the real reason Hanson’s vision won’t come to pass is much simpler: it’s incoherent. A simulation is no more the thing it simulates than a drawing is the thing it depicts. A simulation of my brain running on a computer would be no more capable of consciousness than would a photograph of my brain. Advocates of brain uploading like to argue that the simulation would be running in real-time and would be of much greater detail but it’s simple to extend the photograph analogy in the same direction: I can imagine a large box containing many videos of my neurons, or whatever.
The point is that models and simulations are representations of things and not the things themselves. This involves no appeal to Cartesian dualism. One doesn’t have to suppose there’s “something more” to the brain in order to simply point out that a simulation is more like a description of an object than like an object. It’s often argued that this sort of thing doesn’t matter, because to the simulated brain the simulated experiences would be indistinguishable from real experiences. This ignores the fact that the simulated experiences wouldn’t be the sort of thing that could be indistinguishable from anything because they wouldn’t have any reality to begin with.
There’s also a misunderstanding of what ‘accuracy’ is in a simulation or model of a natural system. There’s really no sense in which a simulation or model could be ‘complete’ or ‘accurate’ enough to ‘capture the essence’ of what’s being modelled. It’s like asking if I have a complete description of a red balloon. If I simply say “red balloon,” is that incomplete? If I give its dimensions, note any unusual marks, describe its shape, describe the particular shade of red, etc., am I giving a more accurate and complete description? Is there a sentence I can use to describe a red balloon that captures the essence of that red balloon? If I utter such a sentence will we have to say there are now two red balloons because I have completely described the red balloon? Applying concepts of accuracy and completeness here, without first giving a purpose for the description, is simply incoherent. In some contexts “red balloon” will be perfectly adequate. The same is true of scientific models and simulations; they’re accurate enough for a given purpose.