Robin Hanson responds to my last post about simulating/emulating the brain:
To emulate a biological signal processor, one need only identify its key internal signal dimensions and their internal mappings – how input signals are mapped to output signals for each part of the system. These key dimensions are typically a tiny fraction of its physical degrees of freedom. Reproducing such dimensions and mappings with sufficient accuracy will reproduce the function of the system.
This is proven daily by the 200,000 people with artificial ears, and will be proven soon when artificial eyes are fielded. Artificial ears and eyes do not require a detailed weather-forecasting-like simulation of the vast complex physical systems that are our ears and eyes. Yes, such artificial organs do not exactly reproduce the input-output relations of their biological counterparts. I expect someone with one artificial ear and one real ear could tell the difference. But the reproduction is close enough to allow the artificial versions to perform most of the same practical functions.
This response confuses me because Hanson seems to be making a different claim here than he made in his EconTalk interview. There his claim seemed to be that we didn’t need to understand how the brain works in any detail because we could simply scan a brain’s neurons and “port” them to a silicon substrate. Here, in contrast, he’s suggesting that we determine the brain’s “key internal signal dimensions and their internal mappings” and then build a digital system that replicates these higher-level functions. Which is to say we do need to understand how the brain works in some detail before we can duplicate it computationally.
The example of artificial ears seems particularly inapt here because the ears perform a function—converting sound waves to electrical signals—that we’ve known how to do with electronics for more than a century. The way we make an artificial ear is not to “scan” a natural ear and “port” it to digital hardware; rather, it’s to understand its high-level functional requirements and build a new device that achieves the same goal in a very different way from a natural ear. We had to know how to build an artificial “ear” (e.g. a microphone) from scratch before we could build a replacement for a natural ear.
This obviously gets us nowhere when we try to apply it to the brain. We don’t understand the brain’s functional specifications in any detail, nor do we know how to build an artificial system to perform any of these functions.
With that said, the fundamental conceptual mistake he makes here is the same one he made in his EconTalk interview: relying too heavily on an analogy between the brain and manmade electronic devices. A number of commenters on my last post thought I was defending the view that there was something special, maybe even supernatural, about the brain. Actually, I was making the opposite claim: that there’s something special about computers. Because computers are designed by human beings, they’re inherently amenable to simulation by other manmade computing devices. My claim isn’t that brains are uniquely hard to simulate; it’s that the brain is like lots of other natural systems that computers aren’t good at simulating.
People didn’t seem to like my weather example (largely for good reasons) so let’s talk about proteins. Biologists know a ton about proteins. They know how DNA works and have successfully sequenced the genomes of a number of organisms. They have a detailed understanding of how our cells build proteins from the sequences in our DNA. And they know a great deal about the physics and chemistry of how proteins “fold” into a variety of three-dimensional shapes that make them useful for a vast array of functions inside cells.
Yet despite all our knowledge, simulating the behavior of proteins continues to be really difficult. General protein folding is believed to be computationally intractable (NP-hard in computer science jargon), which means that if I give you an arbitrary sequence of amino acids even the most powerful computers are unlikely to be able to predict the shape of the folded protein within our lifetimes. And simulating how various proteins will interact with one another inside a cell is even more difficult—so difficult that biologists generally don’t even try. Instead, they rely on real-world observations of how proteins behave inside of actual cells and then perform a variety of statistical techniques to figure out which proteins are affecting one another.
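To get a feel for why brute force is hopeless here, consider a toy sketch of my own (not a real folding model): even on a drastically simplified 2D lattice, the number of candidate chain conformations explodes with length.

```python
# Toy illustration: count the self-avoiding lattice paths available to a short
# chain. The exponential blow-up is why exhaustive search over conformations
# is hopeless for real proteins with hundreds of amino acids.

def count_conformations(n, path=((0, 0),)):
    """Count self-avoiding walks with n vertices on a square lattice."""
    if len(path) == n:
        return 1
    x, y = path[-1]
    total = 0
    for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if step not in path:           # the chain can't pass through itself
            total += count_conformations(n, path + (step,))
    return total

for length in (4, 8, 12):
    print(length, count_conformations(length))
# 4 -> 36, 8 -> 2172, 12 -> 120292: each added unit multiplies the search space.
```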
My point here isn’t that we’d necessarily have to solve the protein-interaction problem before we can solve the brain-simulation problem—though that’s possible. Rather my point is that even detailed micro-level knowledge of a system doesn’t necessarily give us the capacity to efficiently predict its macro-level behavior. Even in cases where we know all of the components of a system (amino acid sequences in this case) and all the rules for how they interact (the chemistry of amino acids is fairly well understood), that doesn’t mean a computer can tell us what the system will do next. This is because, among other things, nature often does things in a “massively parallel” way that we simply don’t know how to simulate efficiently.
By the same token, even if we had a pristine brain scan and a detailed understanding of the micro-level properties of neurons, there’s no good reason to think that simulating the behavior of 100 billion neurons will ever be computationally tractable. And by that I don’t just mean “on today’s hardware.” The problems computer scientists call “NP-hard” are often so complex that even many more decades of Moore’s Law wouldn’t allow us to solve them efficiently.
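For a rough sense of scale, here is a back-of-envelope calculation; the neuron and synapse counts are standard ballpark figures, and the per-event costs are assumptions I’m making purely for illustration.

```python
# Back-of-envelope estimate: even the optimistic case where every synapse is a
# cheap arithmetic update implies enormous sustained compute for a whole brain.
neurons = 1e11                 # ~100 billion neurons
synapses_per_neuron = 1e4      # ballpark figure
mean_firing_rate_hz = 10       # assumed average spike rate
ops_per_synaptic_event = 10    # assumed cost of one synaptic update

ops_per_second = neurons * synapses_per_neuron * mean_firing_rate_hz * ops_per_synaptic_event
print(f"{ops_per_second:.0e} operations per second")   # ~1e17, petascale already

# And this treats a neuron as a simple weighted adder; if its internal
# chemistry has to be simulated too, multiply by several orders of magnitude.
```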
Emulating a computer doesn’t involve any of these problems because computers were designed by and for human engineers, and human engineers want systems that are easy to reason about. But evolution faces no such constraint. Natural selection is indifferent between a system that’s mathematically tractable and one that isn’t, and so it’s probable that evolution has produced human brains with at least some features that are not amenable to efficient simulation in silicon.
“…which means that if I give you an arbitrary sequence of amino acids even the most powerful computers are unlikely to be able to predict the shape of the folded protein within our lifetimes.”
This is not exactly right. NP-Hard means that if you give me the hardest sequence to solve, it is intractable. If you give me an arbitrary sequence, we need to talk about average-case complexity, not worst-case complexity. Many problems that are NP-Hard have been proven to be efficient in the average case, while for others there is strong empirical evidence. For example, computer scientists have had lots of luck building SAT-solvers.
The other footnote that should be made is that NP-Hard problems are intractable in the size of the problem. Are the problem instances large enough?
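For readers who haven’t seen the worst-case/average-case distinction in action, here is a tiny sketch of my own (a bare-bones DPLL search, nothing like a production SAT solver): random instances well away from the hard regime tend to be dispatched quickly even though SAT is intractable in the worst case.

```python
import random

def dpll(clauses, assignment=()):
    """Bare-bones DPLL: clauses are tuples of nonzero ints (DIMACS-style literals)."""
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                                   # clause already satisfied
        reduced = tuple(lit for lit in clause if -lit not in assignment)
        if not reduced:
            return None                                # clause falsified: backtrack
        simplified.append(reduced)
    if not simplified:
        return assignment                              # everything satisfied
    var = abs(simplified[0][0])                        # branch on an unassigned variable
    for lit in (var, -var):
        result = dpll(simplified, assignment + (lit,))
        if result is not None:
            return result
    return None

# A random 3-SAT instance well below the "hard" clause/variable ratio (~4.27)
# is almost always satisfiable and is solved almost instantly.
random.seed(0)
n_vars, n_clauses = 40, 100
clauses = [tuple(random.choice((-1, 1)) * v
                 for v in random.sample(range(1, n_vars + 1), 3))
           for _ in range(n_clauses)]
print("satisfiable" if dpll(clauses) is not None else "unsatisfiable")
```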
Both important caveats. Writing a general-interest blog requires me to gloss over some technical details. The existence of Folding@home suggests that for some real-world proteins, the relevant calculations are extremely expensive but may not be impossible. I vaguely recall someone telling me that F@H uses a probabilistic method that finds possible (low-energy) configurations but isn’t guaranteed to find the “correct” (lowest-energy) answer within any specific time limit. Either way, I think it’s a reasonable illustration of my point: simulating a comparatively simple protein involving a few hundred amino acids requires a staggering amount of computing power.
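To illustrate the kind of probabilistic search I vaguely have in mind, here is a generic simulated-annealing sketch on a toy energy function; I’m not claiming this is Folding@home’s actual algorithm, just that stochastic searches like this tend to find low-energy states without guaranteeing the global minimum in any fixed amount of time.

```python
import math
import random

def energy(x):
    """Toy one-dimensional 'energy landscape' with several local minima."""
    return x * x + 3 * math.sin(5 * x)

def anneal(steps=20000, temp=2.0, cooling=0.9995):
    random.seed(1)
    x = random.uniform(-5, 5)
    best_x, best_e = x, energy(x)
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.2)
        delta = energy(candidate) - energy(x)
        # Always accept downhill moves; accept uphill moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
            if energy(x) < best_e:
                best_x, best_e = x, energy(x)
        temp *= cooling
    return best_x, best_e

print(anneal())   # a low-energy configuration, not necessarily the lowest one
```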
I respond here.
Let’s stipulate (even though it’s not the case) that we can abstract away the function of neurons, as if their instantaneous chemical and electrical output was a straightforward function (rather than a laborious simulation; I know, d.o.f., about which more below) of instantaneous chemical and electrical input (though the levels of various chemicals in a neuron’s vicinity encode an implicit “memory” of sorts), and simply investigate the structures into which neurons are organized, as if such structures were static (which they’re not). Hanson’s project still seems to require detailed knowledge of brain structure down to structures of very elementary function. Yet, of all the medical specialties, neurology is the least able to identify structure with function. Every neurological surgery is essentially an experiment, undertaken only in the direst of circumstances. The drugs we have at our disposal act in gross fashion on the entire organ, with little understanding of local effects. What reason do we have to believe that understanding of brain function will accelerate so much in the coming years, when historically this field has progressed more slowly than any other in medicine, while medicine itself is still more art than engineering? This project expects a drastic change in how brains are studied, and has nothing to say about how that will occur.
I can’t tell whether the “degrees of freedom” mentioned correspond to the physical concept, the statistical concept, some confusion of the two, or something else. I suppose this is meant to indicate that much brain function could be contingent and inessential to the project under discussion (effectively, that a mind could be “compressed”), but it is unsupported conjecture to say that it is possible to determine which behavior is desirable and then split that function out from everything else that brain structures do. One could simply state that a neuron is either on or off, or one might represent it as a real quantity in a range that may be profitably considered to have a 1,000-member partition, or that it may be described by a set of such quantities, or whatever. The former would be much easier to emulate than the latter ones, but how to determine which model is useful, since one may always make more precise measurements? (We’re able to make definite statements about the significance of measurements of well-understood physical systems with simple conceptual analogues since we already have those models, but we’re totally in the dark about complex systems. It was perhaps unfortunate that Timothy mentioned weather prediction, since that’s actually an endeavor for which precision isn’t really that important; I can guarantee that tomorrow’s high temp will be between -50 and +50 C, which isn’t too big a range. The mechanisms that maintain homeostasis in the brain will be hairier than those that keep the earth habitable.)
So yes, the principle of decoupling may be disputed, but as Timothy indicated the work may be intractable even in the presence of decoupling.
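To put rough numbers on the on/off-versus-graded question (the unit count here is a placeholder, not a measurement), the size of the state space one is obliged to capture depends enormously on the modeling choice:

```python
import math

n_units = 1_000_000                        # hypothetical patch of neurons

log10_binary = n_units * math.log10(2)     # each unit modeled as on/off
log10_graded = n_units * math.log10(1000)  # each unit modeled with 1,000 levels

print(f"binary model: ~10^{log10_binary:.0f} possible states")
print(f"graded model: ~10^{log10_graded:.0f} possible states")
# Neither is enumerable, but the modeling choice changes the answer by orders
# of magnitude, and nothing in the measurements alone says which precision is
# the functionally relevant one.
```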
I’m sorry I haven’t had the time to figure out what an “economically-sufficient substitute” for a human worker might be, but on its face it appears to be a solution in search of a problem. How does this concept fit in with the activity we see that includes the Mechanical Turk and outsourcing of radiology diagnoses? If the work you need done can be digitized (it seems that digital emulations of brains would require this), that work is already on a steeply-decreasing cost trajectory. The history of AI research suggests that we’re better off searching for novel applications of existing tools than setting out to “solve” some predetermined problem, in some particular way.
While the protein-folding analogy is apt (protein folding is complex and ruled by many external factors like cellular salinity, location in the cell or cell membrane, etc.), I do not think we can just call it NP-hard and hit “send,” because the methods of solving a problem do not always stay the same. I hate to sound nostalgic, but there was a time when we didn’t have graphing calculators, and finding sines, cosines, and tangents meant paging through books of tables. There was a time when engineers like me used slide rules, quite efficiently. But new methods, be they software-based or new hardware…they do arise, and they do exponentially increase the efficiency by which a problem can be solved.
While protein folding and brain simulation and weather simulation and hive behavior are all incredibly complex things that quickly reach (exceed?) the limits of polynomial time, insisting that they will never be solved faster unless Moore’s Law breaks down assumes that faster processing…not smarter processing methods…is the only way these problems can be solved.
Have you seen how humans using Folding@Home have actually been faster than their computer cohorts? If anything, this suggests that (like the bees who solved the Traveling Salesman problem) smarter strategies for reducing complexity will be the way we reach the Singularity…not just raw computational power.
You’re confusing a lot of terms. What is your central claim?
– A human brain can never be simulated?
– A human brain is very hard to simulate?
NP-hard simply describes how much computational power it takes to solve a problem, NOT whether it is impossible or not.
Also, you’re confusing predicting a brain and simulating a brain. Let me explain: Suppose I take your brain and stick it in a jar with inputs that simulate your senses. I also scan your brain and run it on my computer with the exact same inputs. After a year I scan your brain in a jar again. If they match I’ve correctly predicted the state of your brain. If they don’t match, but both still function fine (assume some kind of Turing test), then I have simulated your brain.
Are you saying that I can’t even simulate a brain? Because if so you are making some extremely controversial statements about how the universe works. If you are saying I can’t predict a brain then I’m with you 100%. There are intrinsic random elements that probably cannot be known. In programmers’ terms: same algorithm, different random number seed.
Then the question becomes: Does that matter? Am I a different person because of it? Am I a different person if I wake up 5 minutes late one morning, because the accumulated effects of that might make me a fundamentally different person a year later?
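Here’s the “same algorithm, different random number seed” point in miniature (a sketch of mine, not anyone’s brain model): two runs of the same stochastic process are equally faithful simulations, yet neither predicts the other’s state.

```python
import random

def noisy_trajectory(seed, steps=1000):
    """Run identical dynamics driven by a different stream of random noise."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        x += rng.gauss(0, 1)
    return x

original = noisy_trajectory(seed=42)
emulated = noisy_trajectory(seed=7)
print(original, emulated)   # same process, divergent final states
```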
NP-hard simply describes how much computational power it takes to solve a problem, NOT whether it is impossible or not.
But for large N, these things are equivalent, right? Moore’s law promises to give us a lot of computing power, but it will always be finite. And NP-hard problems can require a very, very large amount of computing power to solve. So “A human brain is very hard to simulate” might mean “simulating a human brain will require much more computing power than we’re likely to ever have available.”
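Quick arithmetic on why exponential growth in hardware doesn’t rescue exponential growth in problem size (the doubling period is the usual rough assumption): brute-forcing an NP-hard problem costs another full doubling period for every element you add.

```python
doubling_period_years = 2      # assumed Moore's-law doubling time

def extra_years_needed(extra_elements):
    """Years of hardware doubling needed to absorb 2**extra_elements more work."""
    return extra_elements * doubling_period_years

for extra in (10, 50, 100):
    print(f"+{extra} elements -> ~{extra_years_needed(extra)} more years of Moore's law")
# Going from a 100-element instance to a 200-element one eats two centuries of
# doubling under this assumption.
```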
It depends on whether we’re having a philosophical conversation or a practical one 🙂 Also, the growth of computational power per dollar (especially for problems that are particularly parallel in nature – which I imagine this one would be) has been and will continue to be exponential for a very long time. As you well know, your phone has more computational power than the entire planet had just 20 years ago. And that’s just from the market forces driving communication. I imagine the market forces for immortality would be substantially stronger.
Sorry, I meant 40 years ago, not 20 years ago.
Hold on, there. What makes you so confident that you can just “scan my brain”, and run an exact simulation of it on an electronic substrate? That was the point that Searle makes – that the human mind isn’t like some computer program running on a biological substrate, it’s intrinsically tied to that substrate.
“Rather my point is that even detailed micro-level knowledge of a system doesn’t necessarily give us the capacity to efficiently predict its macro-level behavior.”
What’s with the constant focus on predicting how a brain-like system behaves? Understanding the principles underlying the function of a system does not mean it can be predicted, nor does predicting how a system functions imply that its principles of operation are understood. (A theory describing a system does make testable predictions, but that’s not the same as being able to accurately predict the system.)
I also think Mr. Lee is severely underestimating the state of knowledge re: how the brain works, specifically the neocortex, where all high-level processing is performed: we have VERY detailed knowledge of the function of individual cortical neurons of all types, detailed maps of the architecture and function of neocortical columns, and we’re making great strides in mapping out the local and global interconnections between columns and regions of the brain. There are also people like Jeff Hawkins and his company Numenta who are thinking about how the neocortex works in principle, and are using the aforementioned biological research to corroborate and refine their ideas; his EARLY work yields a system that can perform visual recognition feats considered only possible with brain-like systems. On the computational front, there’s Moore’s Law, decades of research into neural networks, breakthroughs in materials science left and right (Memristors!?) and such, that promise hardware dense and capable enough to efficiently put these ideas into working systems.
All this tangible progress has happened in the last few years; the pieces are actually falling into place all around us!
It seems like the arguments against our being able to achieve “general AI” have not been keeping pace.
fpg
fastpathguru, I think you didn’t read my original post. I’m focused on prediction because Robin Hanson did so in the interview I was critiquing. He claimed that we didn’t need to understand how the brain works because we can scan and “port” a brain to a digital substrate.
I think that if we ever get strong AI it’ll be through precisely the kind of process you describe: achieving a high-level understanding of how the brain works and then re-implementing it from scratch using more conventional software techniques.
I’m listening to the original podcast now.
But I agree with you in that knowledge of the architecture and function of the brain is very likely to be a necessary prerequisite to engineering a substrate amenable to porting a wet brain onto; Thus, a substrate engineered along those principles simply to support AI will precede the engineering of a substrate capable of A) performing the “I” function plus B) being amenable to porting a wet brain onto…
fpg
“Hold on, there. What makes you so confident that you can just “scan my brain”, and run an exact simulation of it on an electronic substrate?”
You’re missing my point. I’m proposing this hypothetical experiment to make a distinction between simulating and predicting a system.
Doesn’t this discussion run together three questions? First, will it be possible soon, or fairly soon, to upload working copies of our brains? Second, whether a requirement of such an upload is that its behavior is predictable. Third, whether a good working electronic facsimile of me or you can be created soon, or fairly soon. The answer to the first question, for the reasons outlined here, is probably no. The answer to the second is “no”: an actual upload of our brains would create a system whose actions couldn’t be predicted by any feasible computer. Requiring that the electronic you do exactly what you would have done at every point in the future given the same external inputs is to impose an impossible requirement. The biological you probably wouldn’t behave in the identical way over time if we could do repeated experiments. The third–making a copy of you that would fool observers (including your wife and children)–seems doable in coming decades given the trajectory of technology. I think that’s the only actually interesting question.
IMHO it’s a mistake to assume that our brains – parts of a real universe that we don’t understand the ultimate nature of – function like a deterministic computational system. It being part of a universe that has wavefunctions we don’t “get” the essential nature or logic of is one clue. It’s not a matter of the brain being supernatural but of it being “supermathematical” (more like, trans- …..) Many have tried to explain why the two aren’t equivalent, like Roger Penrose and David Chalmers. My own contribution to that is summarized here, read more at my name link: a computational, AI system would (perhaps very ironically) not be able to realize or have the thought that it was in a “real material universe” instead of a mathematical model world. After all, all it can do is work with numbers and represent the same operations on bits that would be part of a pure mathematical representation per se. If we can represent on paper, as math, all its thinking, then it can’t be different in a materially real world and realize that it is – it would make modal realism have to be true, there would be no difference between a possible world and a ‘real one’, with indeed the latter distinction denied in MR and in the MUH. (Look up in Wikipedia etc.) But don’t we feel like we “really exist” in a way transcending just being a Platonic representation? Only a real brain can do that, and it can’t appreciate it using bits.
Should we be equating AI with human intelligence?
I think I should amend/expand my previous question.
I’ve long strongly believed that it’s far more likely that any AI we ever achieve will be closely related to human intelligence, for what I think are the obvious reasons.
Nevertheless, I do think that it is very highly unlikely that we will ever achieve a truly reductionist and comprehensive description of human intelligence, nor will merely simulating higher-level interactive modular processes be sufficient, either. I just don’t think we’re likely to be “smart” enough to accomplish either of those things for a very long time, if ever.
But it seems to me that we very well may be able to “grow” in some pseudo-organic fashion an intelligence that is somewhat (relative to our standard) human-like without ever understanding it any better than we do our own.
I agree with the criticisms of prediction above. I’m agnostic on the possibility of very stupidly assembling an enormous amount of data on how an actual human brain functions and then replicating that computationally and achieving some sort of simulation of the brain. I imagine that, eventually, something like that might be possible. I’m not sure how much closer that would get us to understanding intelligence (okay, it would allow a much greater degree of experimentation, assuming the ethical problems that would surely arise were, I hope, handled correctly). I don’t believe that prediction could be possible because I’m certain that the physics of the brain are not deterministic. But a simulation (without comprehension, which I doubt is fully possible) of a particular brain wouldn’t, in my opinion, be that terribly interesting, either, in terms of the larger issues here.
The NP-hardness concern shouldn’t be overstated. When nature folds proteins, it’s not actually using a procedure that will solve the NP-hard optimization problem in the worst case. It’s almost certainly using some kind of annealing process similar to algorithms used on computers, but one that’s hard to simulate (as you point out) because of its size, parallelism and multiscale dynamics. In particular, our models (whose minimum energies are NP-complete to find) may not reflect reality, or more likely, nature might not always find the lowest-energy configuration when it folds proteins.
More generally, biological processes can pretty much always be simulated in time roughly linear in the number of atoms involved, so there’s no question of the exponential overhead involved in solving NP-hard problems. It’s just that the constant might make the simulation impossible. (The only way I’m likely to be wrong is if quantum effects are important, in which case we’ll need quantum computers to do efficient simulations. But I suspect that the quantum effects in biological systems will have the kind of many-body entanglement that incurs exponential overhead to simulate.)
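A back-of-envelope version of the “linear in atoms, but the constant kills you” point, with all inputs being rough assumptions rather than measured values:

```python
atoms_in_brain = 1e26          # ~1.4 kg of mostly water
timestep_seconds = 1e-15       # typical molecular-dynamics step (~1 femtosecond)
flops_per_atom_step = 100      # assumed per-atom cost with neighbor lists
simulated_seconds = 1.0        # simulate one second of brain time

total_flops = atoms_in_brain * (simulated_seconds / timestep_seconds) * flops_per_atom_step
exaflop_machine = 1e18         # flops per second of an exascale computer

print(f"total work: ~{total_flops:.0e} flops")
print(f"wall-clock on an exaflop machine: ~{total_flops / exaflop_machine:.0e} seconds")
# Linear scaling in atom count, but the constants put atom-level emulation
# hopelessly out of reach; any practical approach has to abstract.
```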
Good point.
Related to that, I think it’s entirely possible that we might figure out AI without successfully simulating human consciousness. I’ve heard of a number of proposals involving symbolic AI that might work out, although the mind created would probably be very alien compared to human thought processes.
Mr. Lee:
This post is much clearer than your last and your analogy of DNA seems much more apt. Although I can understand your impulse to debate Hanson on the immediate prospects of emulating or simulating or imitating or replicating the human brain’s activity in light of the proselytizing of the so-called Singularitarians, I would still maintain a healthy dose of optimism that humankind will, given enough time, overcome apparent barriers to simulating complex natural phenomena with computers.
As I alluded to in my earlier post, it is my belief that, as a subset, binary and analog represent a clear dichotomy; the natural world and computers don’t. The ‘natural world’ is the Big Set. I would point you, for more on this, to the notion of the Technium by Kevin Kelly.
I would also like to include a personal note of thanks. I came to your blog via Andrew Sullivan’s and thereby started listening to EconTalk. You have an interesting body of work here and EconTalk is a great, great podcast.
Finishing up the podcast, I don’t really see you representing his claims accurately.
You said: “He claimed that we didn’t need to understand how the brain works because we can scan and “port” a brain to a digital substrate.”
A) Porting is not the only option he mentioned; he alluded to discovering the “Grand Theory of Intelligence” as being a path to AI… He simply didn’t dwell on it, given the fact that there’s nothing concrete to report as of right now. It’s something we know we don’t know right now…
B) I think he’s directing his comments to the most skeptical member of the audience. The porting technique represents the worst case scenario for producing a brainlike system, not that it’s an optimal or preferred scenario. It’s essentially an argument that there’s nothing more to mind & consciousness than chemistry, and if we can duplicate the wiring with sufficient fidelity, we’d be able to produce an artificial brain that behaves like the real thing. He couches every other word with qualifiers and continuously admits that it would require a LOT of detailed understanding to get it right… He’s essentially “bounding” the problem.
…
Ahh. I think I just got to what you refer to. I think you keyed on the “porting” comment about not needing to know what the bits do, this and that… I think that particular comment explicitly excluded the other side of the coin, which is the cost of engineering the emulator. Earlier, his analogy was: A) build an emulator, and B) port the particular software to the emulator. If that’s the particular comment you keyed off on, perhaps there was some unstated context that satisfies your complaint?
fpg
Hey fpg. Thanks for reading. I guess I don’t see how any of this contradicts what I wrote. I didn’t deny that we might someday figure out how to build devices with human-like intelligence. My claim is that it’s not likely to be accomplished by “porting” the human brain. The fact that we might achieve strong AI using some other strategy is orthogonal to this debate.
I understand that he was bounding the problem, my point is simply that it’s not a very strict bound–that there’s no reason to think we’ll ever have enough knowledge or computing power to actually do what he describes. And in particular, that the implicit analogy to digital systems is misleading because a neuron is likely to be much less susceptible to emulation than a transistor is.
I actually don’t think emulating a neuron is the hard part. See IBM’s Blue Brain project… They’re supposedly modeling collections of neocortical columns with high fidelity as compared to the biological equivalent. (I don’t know offhand if that is a static model or accounts for synaptic formation/modification.)
My personal opinions: I feel that there’s enough plasticity, redundancy, and robustness to noise/injury/environment, etc. present in the architecture of the brain that the fidelity of a copy could be significantly less than 100% perfect and we’d still get worthwhile functionality from it. There are at least two layers of pseudo-digitization going on in the brain: A) At the micro level, the fundamental function of a neuron is to fire (i.e. binary on/off state) when sufficiently stimulated, and B) at a more macro level, where extremely noisy sensory or lower-level information is transformed into higher-level invariant representations: sensing patterns in “pseudo-noise,” i.e. learning.
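As a sketch of that micro-level “fires when sufficiently stimulated” abstraction (the parameters are arbitrary illustrative values, not biological constants):

```python
def lif_neuron(inputs, threshold=1.0, leak=0.95, reset=0.0):
    """Leaky integrate-and-fire: yield 1 when the membrane potential crosses threshold."""
    v = 0.0
    for current in inputs:
        v = v * leak + current    # leak a little, then integrate the new input
        if v >= threshold:
            v = reset             # spike and reset
            yield 1
        else:
            yield 0

stimulus = [0.3, 0.3, 0.3, 0.0, 0.9, 0.9, 0.0, 0.0]
print(list(lif_neuron(stimulus)))   # [0, 0, 0, 0, 1, 0, 0, 0]
```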
Due to redundancy, robustness, etc., the loss of function Y caused by an imperfect reproduction factor X is not linearly correlated with X. I.e. a 50% reduction in connectivity fidelity may only result in a 5% reduction in function. (Google “sparse distributed representation”.)
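A toy version of that robustness claim (sizes and sparsity are arbitrary): degrade a stored sparse pattern heavily and it still matches its original far better than it matches anything else.

```python
import random

random.seed(0)
n_bits, n_active, n_patterns = 2048, 40, 50

# Store 50 random sparse patterns (40 active bits out of 2048 each).
patterns = [frozenset(random.sample(range(n_bits), n_active)) for _ in range(n_patterns)]

# Degrade the first pattern by throwing away half of its active bits.
target = patterns[0]
degraded = set(random.sample(sorted(target), n_active // 2))

overlaps = [len(degraded & p) for p in patterns]
print("overlap with original:", overlaps[0])              # all 20 surviving bits
print("best overlap with any other:", max(overlaps[1:]))  # almost certainly 0 or 1
```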
If the artificial substrate is somewhere close to as good at sensing patterns in noise as the brain is, and we can give it a head start with an even imperfect set of patterns to start out with by scanning a real brain’s neuronal connectivity and firing criteria (synapse thresholds, etc.), perhaps the copy can “recover” from a temporary 10% reduction in memory fidelity quickly enough to start doing real work, as a new individual. (I don’t think Hanson was advocating the “copy” approach as a means of immortality for the copier, even if he did talk about the possibility of the copy being engineered to itself be immortal.)
But regardless, the substrate must be capable of the fundamental operation of brain-like pattern recognition to begin with to even support this process, so the whole “porting” issue really boils down to being a training shortcut; I could alternatively just choose to raise a “blank” learning substrate from scratch!
One issue with the “porting” approach, IMHO, is that for it to work properly “out of the box”, everything has to match perfectly. I.e. the artificial eyeballs must be wired such that perceived images are relayed to the brain in the same manner as the original to elicit a similar response; the alternative is a retraining period similar to recovering from an injury, or adapting to a new prosthetic eye/ear for our flesh&blood brains. Again, the brain is incredibly plastic here!
http://www.scientificamerican.com/article.cfm?id=device-lets-blind-see-with-tongues
fpg
Does an analogy to gas molecules make sense in case of Hanson’s argument?
We know about the erratic behavior of individual gas molecules. However, on a macro scale we can still make predictions about the behavior of the whole system (say, we can calculate the change in temperature based on the change in volume by applying additional pressure).
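For concreteness, the standard ideal-gas relation makes the point (the numbers are just an example): the macro state is predictable from a handful of aggregate variables even though the individual molecules aren’t.

```python
R = 8.314                    # J / (mol K), gas constant
n = 1.0                      # moles of gas
V1, T1 = 0.0224, 273.15      # initial volume (m^3) and temperature (K)

P1 = n * R * T1 / V1         # ideal gas law: P V = n R T
V2 = V1 / 2                  # compress to half the volume at constant temperature
P2 = n * R * T1 / V2

print(f"pressure goes from ~{P1/1000:.0f} kPa to ~{P2/1000:.0f} kPa")
# ~101 kPa -> ~203 kPa: micro-level chaos, macro-level predictability.
```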
Is it not enough to just simulate enough input/output states to address most of the common scenarios? This is in no way a complete simulation of the human brain, but surely it is good enough to replace a lot of physical labor with computational decision making.
The issue may boil down to simulation of (or the NP-hard intractability of simulating) strong interaction. The indicator here is the numerical (a.k.a. fermion etc.) sign problem, a.k.a. the N-body or many-body problem. The physics of the brain, from which higher-level functions presumably emerge if you’re dealing with AI, is condensed matter physics, and that essentially says it all. That cannot be comprehensively simulated thanks almost entirely to the sign problem.