Will The Singularity Founder on Differences Between the Brain and Computers?
The Singularity Summit takes place this weekend, 8-9 September 2007, in San Francisco. It will be attended by people who are largely optimistic about the chances for a Technological Singularity within the next 20 years, triggered by the development of artificial general intelligence (AGI). But is that optimism well supported? Do researchers in machine intelligence really understand intelligence well enough to create intelligent machines?
One metaphor that has seen a great deal of use among AGI hopefuls is "the brain is a computer; a computer is a brain." Is that metaphor valid--or even useful--for creating an intelligent machine?
Chris Chatham, a grad student in cognitive science and proprietor of the Developing Intelligence blog, posted an interesting comparison between brains and computers at the end of March 2007. It may be instructive to revisit Chris' comparison:
Difference # 1: Brains are analogue; computers are digital
It's easy to think that neurons are essentially binary, given that they fire an action potential if they reach a certain threshold, and otherwise do not fire. This superficial similarity to digital "1's and 0's" belies a wide variety of continuous and non-linear processes that directly influence neuronal processing.
[snip]
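Chatham's first point can be illustrated with a toy leaky integrate-and-fire neuron (a standard textbook abstraction, not his model; the parameters below are arbitrary). The spike output is all-or-none, but the membrane potential that produces it is continuous and graded:

```python
# Toy leaky integrate-and-fire neuron. The "1s and 0s" of the spike train
# ride on top of a continuous, leaky accumulation of input.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return (potentials, spikes) for a sequence of input currents."""
    v = 0.0
    potentials, spikes = [], []
    for current in inputs:
        v = leak * v + current          # continuous, graded accumulation
        if v >= threshold:              # all-or-none spike...
            spikes.append(1)
            v = 0.0                     # ...followed by a reset
        else:
            spikes.append(0)
        potentials.append(v)
    return potentials, spikes
```

Feeding in a constant sub-threshold current shows the point: the binary spike train hides a smoothly evolving internal state.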
Difference # 2: The brain uses content-addressable memory
In computers, information in memory is accessed by polling its precise memory address. This is known as byte-addressable memory. In contrast, the brain uses content-addressable memory, such that information can be accessed in memory through "spreading activation" from closely related concepts. For example, thinking of the word "fox" may automatically spread activation to memories related to other clever animals, fox-hunting horseback riders, or attractive members of the opposite sex.
The end result is that your brain has a kind of "built-in Google," in which just a few cues (key words) are enough to cause a full memory to be retrieved. Of course, similar things can be done in computers, mostly by building massive indices of stored data, which then also need to be stored and searched through for the relevant information (incidentally, this is pretty much what Google does, with a few twists).
[snip]
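The "spreading activation" idea can be sketched as retrieval by feature overlap rather than by address. The memory items and cue features below are invented purely for illustration:

```python
# Content-addressable retrieval sketch: fetch the stored item whose
# features overlap the cue most, instead of looking up an address.

memories = {
    "fox hunt": {"fox", "horse", "rider", "hounds"},
    "fairy tale": {"fox", "grapes", "clever"},
    "grocery list": {"milk", "eggs", "bread"},
}

def recall(cues, store):
    """Return the stored item with the greatest feature overlap with the cues."""
    return max(store, key=lambda name: len(store[name] & cues))
```

A partial cue like `{"fox", "clever"}` pulls out the whole "fairy tale" memory--no index or address required.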
Difference # 3: The brain is a massively parallel machine; computers are modular and serial
An unfortunate legacy of the brain-computer metaphor is the tendency for cognitive psychologists to seek out modularity in the brain. For example, the idea that computers require memory has led some to seek the "memory area," when in fact these distinctions are far more messy. One consequence of this over-simplification is that we are only now learning that "memory" regions (such as the hippocampus) are also important for imagination, the representation of novel goals, spatial navigation, and other diverse functions.
Similarly, one could imagine there being a "language module" in the brain, as there might be in computers with natural language processing programs. Cognitive psychologists even claimed to have found this module, based on patients with damage to a region of the brain known as Broca's area. More recent evidence has shown that language too is computed by widely distributed and domain-general neural circuits, and Broca's area may also be involved in other computations (see here for more on this).
Difference # 4: Processing speed is not fixed in the brain; there is no system clock
The speed of neural information processing is subject to a variety of constraints, including the time for electrochemical signals to traverse axons and dendrites, axonal myelination, the diffusion time of neurotransmitters across the synaptic cleft, differences in synaptic efficacy, the coherence of neural firing, the current availability of neurotransmitters, and the prior history of neuronal firing. Although there are individual differences in something psychometricians call "processing speed," this does not reflect a monolithic or unitary construct, and certainly nothing as concrete as the speed of a microprocessor. Instead, psychometric "processing speed" probably indexes a heterogeneous combination of all the speed constraints mentioned above.
[snip]
Difference # 5: Short-term memory is not like RAM
Although the apparent similarities between RAM and short-term or "working" memory emboldened many early cognitive psychologists, a closer examination reveals strikingly important differences. Although RAM and short-term memory both seem to require power (sustained neuronal firing in the case of short-term memory, and electricity in the case of RAM), short-term memory seems to hold only "pointers" to long term memory whereas RAM holds data that is isomorphic to that being held on the hard disk. (See here for more about "attentional pointers" in short term memory).
[snip]
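The copy-versus-pointer distinction maps neatly onto Python's own object semantics. A minimal sketch, with invented memory contents:

```python
# RAM-style copying vs pointer-style working memory.

long_term = {"episode_42": ["beach", "sunset", "seagulls"]}

# RAM-style: an independent copy of the data, isomorphic to the original.
ram_copy = list(long_term["episode_42"])

# Working-memory-style: a reference back into long-term storage.
wm_pointer = long_term["episode_42"]

# If the underlying memory changes, the pointer sees the change;
# the copy does not.
long_term["episode_42"].append("ice cream")
```

After the update, `"ice cream" in wm_pointer` is true while `"ice cream" in ram_copy` is false--the pointer never held the content, only a way to reach it.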
Difference # 6: No hardware/software distinction can be made with respect to the brain or mind
For years it was tempting to imagine that the brain was the hardware on which a "mind program" or "mind software" is executing. This gave rise to a variety of abstract program-like models of cognition, in which the details of how the brain actually executed those programs was considered irrelevant, in the same way that a Java program can accomplish the same function as a C++ program.
Unfortunately, this appealing hardware/software distinction obscures an important fact: the mind emerges directly from the brain, and changes in the mind are always accompanied by changes in the brain. Any abstract information processing account of cognition will always need to specify how neuronal architecture can implement those processes - otherwise, cognitive modeling is grossly underconstrained. Some blame this misunderstanding for the infamous failure of "symbolic AI."
Difference # 7: Synapses are far more complex than electrical logic gates
Another pernicious feature of the brain-computer metaphor is that it seems to suggest that brains might also operate on the basis of electrical signals (action potentials) traveling along individual logic gates. Unfortunately, this is only half true. The signals which are propagated along axons are actually electrochemical in nature, meaning that they travel much more slowly than electrical signals in a computer, and that they can be modulated in myriad ways. For example, signal transmission is dependent not only on the putative "logic gates" of synaptic architecture but also on the presence of a variety of chemicals in the synaptic cleft, the relative distance between synapse and dendrites, and many other factors. This adds to the complexity of the processing taking place at each synapse - and it is therefore profoundly wrong to think that neurons function merely as transistors.
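The contrast can be sketched as a fixed truth-table gate versus a graded, history-dependent synapse. The modulation formula below is invented purely for illustration:

```python
# A logic gate: fixed, binary, and history-free.
def and_gate(a, b):
    return int(a and b)

# A toy synapse: output is graded, shaped by chemistry and recent history.
def synapse(spike, weight, transmitter_level, recent_activity):
    """Return a graded postsynaptic response, not a truth-table value."""
    if not spike:
        return 0.0
    depression = 1.0 / (1.0 + recent_activity)   # prior firing weakens response
    return weight * transmitter_level * depression
```

The same input spike can produce very different outputs depending on transmitter availability and how recently the synapse fired--something no truth table captures.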
Difference # 8: Unlike computers, processing and memory are performed by the same components in the brain
Computers process information from memory using CPUs, and then write the results of that processing back to memory. No such distinction exists in the brain. As neurons process information they are also modifying their synapses - which are themselves the substrate of memory. As a result, retrieval from memory always slightly alters those memories (usually making them stronger, but sometimes making them less accurate - see here for more on this).
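That "retrieval alters the memory" behavior can be sketched with a crude Hebbian-style update, in which reading an association also strengthens it (the weights and learning rate are arbitrary):

```python
# Processing and storage in the same substrate: retrieving an association
# also rewrites the synaptic weight that stores it.

weights = {("cue", "memory"): 0.5}

def retrieve(pre, post, w, rate=0.1):
    """Return the activation for (pre, post) and strengthen that synapse."""
    activation = w[(pre, post)]
    w[(pre, post)] += rate * activation   # retrieval modifies the substrate
    return activation
```

Unlike a CPU reading from RAM, there is no side-effect-free read here: each recall leaves the stored association slightly stronger.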
Difference # 9: The brain is a self-organizing system
This point follows naturally from the previous point - experience profoundly and directly shapes the nature of neural information processing in a way that simply does not happen in traditional microprocessors. For example, the brain is a self-repairing circuit - something known as "trauma-induced plasticity" kicks in after injury. This can lead to a variety of interesting changes, including some that seem to unlock unused potential in the brain (known as acquired savantism), and others that can result in profound cognitive dysfunction (as is unfortunately far more typical in traumatic brain injury and developmental disorders).
[snip]
Difference # 10: Brains have bodies
This is not as trivial as it might seem: it turns out that the brain takes surprising advantage of the fact that it has a body at its disposal. For example, despite your intuitive feeling that you could close your eyes and know the locations of objects around you, a series of experiments in the field of change blindness has shown that our visual memories are actually quite sparse. In this case, the brain is "offloading" its memory requirements to the environment in which it exists: why bother remembering the location of objects when a quick glance will suffice? A surprising set of experiments by Jeremy Wolfe has shown that even after being asked hundreds of times which simple geometrical shapes are displayed on a computer screen, human subjects continue to answer those questions by gaze rather than rote memory. A wide variety of evidence from other domains suggests that we are only beginning to understand the importance of embodiment in information processing.
Bonus Difference: The brain is much, much bigger than any [current] computer
Accurate biological models of the brain would have to include some 225,000,000,000,000,000 (225 million billion) interactions between cell types, neurotransmitters, neuromodulators, axonal branches and dendritic spines, and that doesn't include the influences of dendritic geometry, or the approximately 1 trillion glial cells which may or may not be important for neural information processing. Because the brain is nonlinear, and because it is so much larger than all current computers, it seems likely that it functions in a completely different fashion. (See here for more on this.) The brain-computer metaphor obscures this important, though perhaps obvious, difference in raw computational power.
I must urge anyone interested in this topic to read Chris' posting as linked above in its entirety, without the "snips." And by all means, read the fascinating comments, where you will almost inevitably find your own point of view, if you disagree with Chris' points above.
There are other differences--perhaps as important as Chris' points, or even more so--that are worth discussing at a later time. My POV is that the conceptual basis for the necessary machine substrate for intelligence has not yet been worked out. Doing that will require extensive knowledge of how the human brain--the only "intelligent" device in the known universe--actually achieves a modicum of intelligence.
Jeff Hawkins, author of On Intelligence, may have the best head start of anyone in the running. But Hawkins is not aiming for a "Singularity-spawning AGI." His immediate goals are much more modest--thus more likely to be achieved.
For anyone with an interest in the Tech Singularity, consider attending the Singularity Summit this weekend in SF, CA, if you can. Failing that, check Michael Anissimov, CRNano, or the Singularity Institute to find someone liveblogging the event.
The Singularity Institute has several publications online dealing with AGI. In my opinion, the approaches to AGI advocated in the SIAI readings are suitable mainly for "probabilistic co-processing", rather than for any "main processor" of consciousness. But read them yourself, and see how you feel.
If the people working on AGI are working with a faulty metaphor of how intelligence can be created, the road to the AGI-instigated singularity may be a long one.
More: For those who want to try to get up to speed on current attempts to approximate AGI, see here and here.
Even More: Here are online video lectures on machine learning/AI, and here is David MacKay's book on information theory, inference, and learning algorithms for download as an entire book, or as individual chapters.
Labels: artificial intelligence, Intelligence, Singularity
5 Comments:
Stipulating my complete non-involvement in AGI research, I wouldn't be surprised if AGI develops as a result of our efforts to modify our own brains. You touched on some of the basic processes in your recent cyborg post. Integrating non-biologic material with our own may result in enhanced mental capacity which, at some point, becomes effectively indistinguishable from grobiC AGI and may permit the intellectual insight necessary to actually achieve the thing itself.
Which should rather handily avoid the whole "friendly AI" question.
I suspect that artificial augmentation of human intelligence is likely to occur before a true AGI is operating. This will be done in the context of regenerative/rehabilitative medicine, for treatment of traumatic brain injury, stroke, dementia, etc.
Such aggressive brain rehab will probably involve stem cells, neural growth factors, electronic brain implants, neurochips with brain/machine interface, advanced neurofeedback, and other modalities that are nearly off-the-shelf at this time.
Taking the next step from brain rehab to the augmentation of a non-damaged brain should be fairly easy, given enough accumulated experience with modalities such as those listed above.
I've said it elsewhere so I'll say it here -- whichever comes first, the other won't be all that far behind at all.
As cognitive science develops, both AGI research and neural augmentation research are enhanced. As AGI research and neural augmentation research are developed, Cog.Sci. is enhanced.
It's circular.
Taking into account their "nearly off the shelf" status, it would make an interesting read to see a list of these efforts with a brief synopsis of how close to OTS they actually are. Maybe as a shared reference list-type structure on numerous blogs.
Until such "elective treatment" becomes accepted by professional medical licensing bodies, learning where - and at what approximate cost - such treatment is available to private individuals will be very much subject to disinformation efforts by providers of existing treatment modalities. A straightforward list of links to treatment providers would move the decision process closer to the involved individual, I think. It would also tend to highlight how many are directing their efforts in which areas, useful information for researchers and investors, I think.
How's your google-fu? :)
The massive size of the human brain shows another reason why its "slow" operation is actually a huge benefit: heat dissipation.
Were anything with a comparable number of junctions to be made in silicon, it would generate an impossible-to-manage amount of heat; furthermore, there is no anticipated way of managing this issue with silicon-based semiconductors, because there is a very real limit to how small they can be made.
IBM fabricated a transistor made from only a few hundred atoms many years ago. That's about as small as it's going to get, folks, and it still takes a hell of a lot more energy than is present in a few molecules of ATP to switch it on and then back off again... basically, silicon simply can't compete with the carbon-based organism for massively parallel tasks requiring even the brain power a dog has...
“During times of universal deceit, telling the truth becomes a revolutionary act.” - George Orwell