18 December 2009

The Brain Is Far More Complex Than Believed

We have been bombarded with predictions that human-level artificial intelligence will be developed "within ten years." These predictions inevitably come from persons with a computer science, engineering, or artificial intelligence background. Such prognosticators understand algorithms and/or electrical circuits, but do they understand how consciousness is created? Do they comprehend the basis for the only human-level intelligence that exists: the human brain? Clearly not.

This respected Harvard team of neuroscientists has its hands full studying a nematode nerve network of 4 measly neurons! They are hoping to expand their study to include more than 4 neurons soon.

Meanwhile, there is the problem of "The Other Brain", the glial network.
Glia communicate by broadcasting chemical messages. Moreover, glia can sense information flowing through neural circuits and alter the communications between neurons at synapses! Glia, we now know, have receptors for detecting the flow of ions generated by neurons firing electrical impulses and for sensing the neurotransmitters neurons release at synapses. Glia intercept these signals and act upon them to increase or decrease the transmission of information across synapses and speed or slow the transmission of electrical information through axons.

These recent discoveries open an entirely new dimension of brain function. Glia are involved in all aspects of nervous system health and disease. They can control neuronal communication, the development of the fetal brain, and the generation of new neurons in the adult brain; they participate in epilepsy, Alzheimer's disease, and mental illnesses such as depression and schizophrenia; and they provide a new mechanism of learning that operates beyond synapses. -- R. Douglas Fields
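The feedback loop Fields describes, in which glia sense synaptic traffic and then turn transmission up or down, can be caricatured in a few lines of code. This is a toy homeostasis model, not real neuroscience; the class name, constants, and update rule are all invented for illustration:

```python
# Toy model: a glial cell monitors activity at a synapse and nudges
# the synapse's effective gain up or down in response.
# All names and constants are invented for illustration.

class GlialModulatedSynapse:
    def __init__(self, weight=1.0):
        self.weight = weight        # baseline synaptic strength
        self.glial_gain = 1.0       # multiplicative factor set by the glial cell
        self.activity_trace = 0.0   # glial estimate of recent synaptic activity

    def transmit(self, presynaptic_signal):
        """Pass a signal across the synapse, scaled by the glial gain."""
        signal = presynaptic_signal * self.weight * self.glial_gain
        # The glial cell "senses" the transmission via a leaky running average...
        self.activity_trace = 0.9 * self.activity_trace + 0.1 * signal
        # ...and damps hyperactive synapses / boosts quiet ones (homeostasis).
        target_activity = 0.5
        self.glial_gain += 0.05 * (target_activity - self.activity_trace)
        self.glial_gain = max(0.1, min(2.0, self.glial_gain))  # keep gain bounded
        return signal

syn = GlialModulatedSynapse()
outputs = [syn.transmit(1.0) for _ in range(200)]
```

Driven with a constant input, the glial gain steers the synapse toward its set-point: transmission starts at full strength and is gradually damped toward equilibrium, a cartoon of the "increase or decrease the transmission of information across synapses" behavior described above.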

Henry Markram's Blue Brain project is the one supercomputer project that seems to take into account much (but not all) of the brain's complexity. But Markram's project is not trying to create human-level AI. It is trying to create a simulated brain that can be used to better understand brain function and diseases of the brain. Markram is a neuroscientist, so his goals are focused more on the realities of the brain than on the goal of "human-level AI."

The best of the "brain simulations" by AI workers is the simulation by Dharmendra Modha's IBM team. It barely simulated one of the most basic functions of the visual cortex, at average speeds less than 1/100th that of a mammalian brain. It required over $1 million a year to power its processors, and much more to cool the supercomputer. For something with the same number of neurons as a cat brain, Modha's simulation was a hugely inefficient use of energy -- compared to a cat.

There are some interesting new hardware tools coming along, such as Chua's memristors, meminductors, and memcapacitors. Alice Parker's BioRC project at USC may hold some future promise.
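The memristor's defining trait, a resistance that depends on the history of charge that has flowed through the device, is easy to sketch in simulation. The following is a rough rendering of the linear ion-drift model associated with the 2008 HP Labs memristor; the parameter values are representative round numbers, not measurements of any real device:

```python
# Linear ion-drift memristor model (after the HP Labs formulation).
# Resistance depends on the history of charge through the device --
# the "memory" property Chua predicted in 1971.
# Parameter values are illustrative, not tied to a real part.
import math

R_ON, R_OFF = 100.0, 16_000.0   # fully-doped / undoped resistances (ohms)
D = 10e-9                       # device thickness (m)
MU_V = 1e-14                    # dopant mobility (m^2 / (V*s))

def simulate(voltage, dt=1e-4, steps=20000, w0=0.5):
    """Drive the memristor with voltage(t); return (time, current) samples."""
    w = w0 * D                  # width of the doped region (state variable)
    ts, currents = [], []
    for n in range(steps):
        t = n * dt
        x = w / D
        resistance = R_ON * x + R_OFF * (1.0 - x)
        v = voltage(t)
        i = v / resistance
        # State drifts in proportion to current (linear ion drift).
        w += MU_V * (R_ON / D) * i * dt
        w = min(max(w, 0.0), D)  # the dopant front stays inside the device
        ts.append(t)
        currents.append(i)
    return ts, currents

# Drive with a 1 Hz sinusoid for two cycles.
ts, currents = simulate(lambda t: math.sin(2 * math.pi * t))
```

Plotting current against voltage for this drive traces the pinched hysteresis loop that is the memristor's signature: the same applied voltage produces different currents on the rising and falling phases, because the device remembers what has already passed through it.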

The point is not that a human-level AI would have to emulate the human brain in great detail. That would be a rather pointless extravagance, when crafty abstraction can save space, time, energy, and effort. But new hardware and software tools are needed, obviously. One cannot build a slow-functioning silicon dullard occupying a skyscraper-sized building, requiring a nuclear reactor to power and megatons of refrigeration to cool, and call it a human-level brain.

Anyone wanting to build a human-level machine intelligence will need to understand much of what consciousness entails, and how the human brain achieves it.

It is possible that an insect-level machine brain of appropriate size and energy-consumption might be developed within 5 to 10 years. A rodent-level machine brain of appropriate dimensions might be developed within 10 to 20 years, if we are lucky. A human-level brain may not take more than 5 years to achieve after the rodent-level brain, but shrinking it to the size and energy-efficiency of a human brain may take longer.

Humans are stupid. But they are also the most powerful general-purpose intelligence known to us. Machine intelligence of human-level or better would generate significant changes to human societies. Modern conventional human governments do not want to lose control of this particular revolution. If they do, it might be the last mistake they ever make.




Blogger kurt9 said...

Al fin,

I like much of what you have to say about cognition and the likelihood of sentient AI, which I think is nil. I can add some more.

Brains are dynamically reconfigurable. The synaptic connections reconfigure themselves all the time (I think this occurs during sleep and is one of the reasons why we sleep). No semiconductor technology has this dynamism. FPGAs do not count, as they are not reconfigurable in the same manner.

Memory storage is chemical in nature, not electronic. Synapses vary as to chemical type. Also, dendrites are not the only way neurons communicate with each other; they use diffusion-based chemistry as well.

There are various kinds of memory storage: short-term storage, long-term potentiation, and really long-term memory, which is still not understood. The various kinds of memory and the communication channels all interact with each other.
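A crude way to picture those interacting timescales in code: a fast trace that decays quickly feeds a slow store that decays slowly. The time constants and coupling below are arbitrary illustration values, not biological numbers:

```python
# Caricature of multi-timescale memory: a fast trace ("short-term")
# drives a slow store ("LTP-like" consolidation). Constants are
# arbitrary illustration values, not biology.

def run(stimulus, dt=1.0, tau_fast=5.0, tau_slow=500.0, couple=0.02):
    fast, slow = 0.0, 0.0
    fasts, slows = [], []
    for s in stimulus:
        # Fast trace: strongly driven by input, decays in a few steps.
        fast += dt * (-fast / tau_fast + s)
        # Slow store: fed by the fast trace, decays over hundreds of steps.
        slow += dt * (-slow / tau_slow + couple * fast)
        fasts.append(fast)
        slows.append(slow)
    return fasts, slows

# One brief stimulus, then a long quiet period.
stim = [1.0] * 5 + [0.0] * 195
fasts, slows = run(stim)
```

After the brief stimulus, the fast trace spikes and vanishes while the slow store keeps a residue long afterward: a cartoon of consolidation, with the two stores interacting rather than operating independently.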

The software to simulate all of this would be so complex that I cannot imagine it being done in the foreseeable future. Having computers that exceed the so-called computational capabilities of human brains is not difficult to imagine. By Moravec's estimates, we already have them. By Kurzweil's estimates, we will have them in 10 years. But brains work so incredibly differently from semiconductor-based computers that such comparisons are essentially meaningless.

Friday, 18 December, 2009  
Blogger al fin said...

Thanks, Kurt. Good points.

Alice Parker is doing some interesting research on nanoscale artificial brains (PDF) at USC.

Leon Chua's ideas may also lead to better thinking machines.

Like I said, I don't think a literal simulation of the brain down to the molecular level (like Markram is doing) is the way to achieve a machine intelligence. But whoever does make the breakthroughs will need to understand the brain and human consciousness much better than most AI theorists do.

Years ago, I was optimistic about AI. I have since been disillusioned for its short term prospects, but remain hopeful for the long term.

Not many human brains can comprehend the full range of relevant ideas at the necessary level of rigour to bring about a thinking machine.

Friday, 18 December, 2009  
Blogger yamahaeleven said...

Knowing hardly enough to even get into trouble, I cannot make any useful estimation on when/if a general artificial intelligence will achieve the remarkable milestone of emulating, without flaw, a human being. Looking into the past of human accomplishment, however, it seems to me that a slavish reproduction of how nature accomplishes a task is not required. For instance, a 747 does not flap its wings, yet it flies quite well.

I would hazard a guess that an early manifestation of an "intelligent" machine will be disruptive at first: seemingly crude and hardly usable (Wright Bros.), but cheap, and doing some of the job. Improvements will occur slowly at first; then at some point it may be recognizably intelligent, different from us, yet intelligent.

Friday, 18 December, 2009  
Blogger CarlBrannen said...

I won't argue against the fact that FPGAs are not sentient, but having designed circuitry on them for a couple of decades, I know that certain modern FPGAs can be reconfigured on the fly. It's not commonly done, but it's possible.

Let's see if I can find a link... Ah yes, partial reconfiguration.

Friday, 18 December, 2009  


