27 April 2010

Brains Like Ours?

Terrence Sejnowski is a Princeton-trained physicist who found his way into neurobiology via a Harvard postdoc. He is currently the head of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies in La Jolla. Sejnowski suggests that humans may begin to create "brains like ours" sooner than most people think.

Last November, IBM researcher Dharmendra Modha announced at a supercomputing conference that his team had written a program that simulated a cat brain. This news took many by surprise, since he had leapfrogged over the mouse brain and beaten other groups to this milestone. For this work, Modha won the prestigious ACM Gordon Bell prize, which is awarded to recognize outstanding achievement in high-performance computing applications.

However, his audacious claim was challenged by Henry Markram, a neuroscientist at the Ecole Polytechnique Fédérale de Lausanne and the leader of the Blue Brain project, who had announced in 2009: "It is not impossible to build a human brain and we can do it in 10 years." In an open letter to IBM Chief Technical Officer Bernard Meyerson, Markram accused Modha of “mass deception” and called his paper a “hoax” and a “scam.”

...Unfortunately, the large-scale simulations from both groups at present resemble sleep rhythms or epilepsy far more closely than they resemble cat behavior, since neither has sensory inputs or motor outputs. They are also missing essential subcortical structures, such as the cerebellum that organizes movements, the amygdala that creates emotional states, and the spinal cord that runs the musculature. Nonetheless, from Modha’s model we are learning how to program large-scale parallel architectures to perform simulations that scale up to the large numbers of neurons and synapses in real brains. From Markram’s models, we are learning how to integrate many levels of detail into these models. In his paper, Modha predicts that the largest supercomputer will be able to simulate the basic elements of a human brain in real time by 2019, so apparently he and Markram agree on this date; however, at best these simulations will resemble a baby brain, or perhaps a psychotic one.... _SciAm
And from there, Sejnowski unfortunately veers off to briefly discuss "intelligent communications systems", then quickly ends his article. In other words, Sejnowski doesn't actually tell us anything about when we might expect to create "brains like ours."
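
The kind of computation both groups are scaling up can be illustrated with a toy spiking-network model. Below is a minimal sketch of a leaky integrate-and-fire network in Python; it is not the code of either team, and the network size and every parameter are arbitrary assumptions for illustration. Real simulations of this class run billions of such units on massively parallel supercomputers.

```python
# Minimal leaky integrate-and-fire (LIF) network -- an illustrative
# sketch only, not the IBM or Blue Brain code. All parameters and the
# network size are arbitrary; cortical-scale runs use ~10^9 neurons.
import numpy as np

rng = np.random.default_rng(0)

N = 1000                              # neurons in this toy network
dt, tau = 1.0, 20.0                   # time step and membrane constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # potentials (mV)

# Sparse random connectivity (~2%), mixed excitatory/inhibitory weights.
W = (rng.random((N, N)) < 0.02) * rng.normal(0.0, 2.0, (N, N))

v = np.full(N, v_rest)                # membrane potentials
for step in range(1000):              # simulate one second of activity
    spikes = v >= v_thresh            # neurons that fire this step
    v[spikes] = v_reset               # reset fired neurons
    drive = W @ spikes                # synaptic input from those spikes
    noise = rng.normal(0.0, 5.0, N)   # stand-in for missing sensory input
    v += dt / tau * (v_rest - v) + drive + noise   # leaky integration
```

Note that the only external drive here is random noise, so the network can do nothing but wander through noise-driven rhythms. That is precisely Sejnowski's criticism: without sensory inputs and motor outputs, even a billion-neuron simulation resembles sleep or epilepsy rather than behavior.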

Ray Kurzweil predicts human level computing by 2029, but Mitch Kapor is betting that Ray is wrong. Henry Markram's prediction for an artificial human brain by 2019 goes far beyond what the Blue Brain website is willing to predict, or what some of his more sober colleagues at Lausanne are willing to claim. Ben Goertzel and Peter Voss each believe they are on the trail of artificial general intelligence (AGI), and see no reason why they cannot achieve their goal.

Noah Goodman -- a researcher at MIT's Cognitive Science Group -- is quite forthcoming and honest in this interview at Brian Wang's NextBigFuture. Goodman says that "we could achieve human-level AI within 30 or 40 years", but he also admits that it could take longer.

A startling new approach to massively parallel computing comes from Michigan Technological University, working with a research team at Japan's National Institute for Materials Science (NIMS).

In their work, instead of wiring single molecules/CA cells one-by-one, the researchers directly build a molecular switch assembly where ∼300 molecules continuously exchange information among themselves to generate the solution. This molecular assembly functions similarly to the graph paper of von Neumann, where excess electrons move like colored dots on the surface, driven by the variation of free energy that leads to emergent computing...

...By separating a monolayer from the metal ground with an additional monolayer, the NIMS/MTU team developed a generalized approach to make the assembly sensitive to the encoded problem. The assembly adapts itself automatically for a new problem and redefines the CA rules in a unique way to generate the corresponding solution.

"You could say that we have realized organic monolayers with an IQ" says Bandyopadhyay. "Our monolayer has intelligence."

Furthermore, he points out that this molecular processor heals itself if there is any defect. It achieves this remarkable self-healing property from the self-organizing ability of the molecular monolayer.

"No existing man-made computer has this property, but our brain does: if a neuron dies, another neuron takes over its function" he says.

With such remarkable processors that can replicate natural phenomena at the atomic scale, researchers will be able to solve problems that are beyond the power of current computers. Especially ill-defined problems, like the prediction of natural calamities, the prediction of diseases, and Artificial Intelligence, will benefit from the huge instantaneous parallelism of these molecular templates.

According to Bandyopadhyay, robots will become much more intelligent and creative than today if his team's molecular computing paradigm is adopted. _Nanowerk
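
The monolayer described above is in effect a physical cellular automaton: a grid of simple cells, each updating from its neighbors' states, with the answer emerging as a global pattern rather than from a central processor. As a rough illustration of the fixed-rule case that von Neumann's "graph paper" refers to, here is a minimal two-state CA sketch; the grid size and update rule are my own assumptions, and unlike this programmed loop, the ~300 molecular cells of the NIMS/MTU assembly evolve simultaneously and can even redefine their own rules per problem.

```python
# Minimal two-state cellular automaton on a 2-D grid -- an illustrative
# sketch of the CA concept only. The NIMS/MTU molecular assembly is a
# physical CA whose cells update in parallel and adapt their own rules.
import numpy as np

rng = np.random.default_rng(1)
grid = rng.integers(0, 2, size=(16, 16))   # cells on von Neumann's "graph paper"

def step(g):
    """One synchronous update: every cell looks at its 4 neighbors."""
    # Count up/down/left/right neighbors, with wraparound edges.
    neighbors = (np.roll(g, 1, axis=0) + np.roll(g, -1, axis=0) +
                 np.roll(g, 1, axis=1) + np.roll(g, -1, axis=1))
    # Toy local rule: a cell turns on when exactly two neighbors are on.
    return (neighbors == 2).astype(int)

for _ in range(10):                        # let a global pattern emerge
    grid = step(grid)
print(grid)
```

The contrast with a von Neumann machine is that nothing here is fetched or executed centrally: the "solution" is whatever pattern the local interactions settle into, which is the source of the massive instantaneous parallelism the researchers describe.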
Most intelligent observers of the AI field who have been able to take a step back and view the phenomenon from the perspective of many decades of history are forced to conclude that a new physical substrate -- other than von Neumann architecture supercomputers -- will be necessary before anything close to a human-level AGI can be built.

Whether using memristors, qubits, molecular monolayers, fuzzy logic-enabled neural nets with genetic algorithmic ability, or physical substrates and architectures not yet envisioned or announced, AGI researchers of the near to intermediate future will eventually make rapid strides toward useful machine intelligence -- once the right architectural substrate is discovered.

In the meantime, cognitive scientists are learning a great deal about how the brain works, and how artificial mechanisms may better emulate brain function. I suspect that both Modha and Markram (along with several other prognosticators, including Kurzweil) may have allowed wishful thinking to get the better of them when making timeline predictions.

In the dramatic history of genetic science, only after the breakthrough of Watson and Crick could molecular biology explode into the present and future. The ongoing history of artificial intelligence is still lacking its "Watson and Crick." Progress can be made, but not the explosive progress that is necessary to approach human-level intelligence.


5 Comments:

Blogger kurt9 said...

I think 40 years is a plausible time frame for the development of human-level AI. I think it unlikely to happen sooner than that.

Tuesday, 27 April, 2010  
Blogger Dave said...

Thanks for the post, Al - very informative. One question - you say:

'The ongoing history of artificial intelligence is still lacking its "Watson and Crick." Progress can be made, but not the explosive progress that is necessary to approach human level intelligence.'

What exactly would you want this discovery to look like? In other words, how would we know that a discovery is this significant? I ask because I feel that discoveries of this magnitude are being made so often we fail to recognize them as such.

Wednesday, 28 April, 2010  
Blogger al fin said...

Yes. Most people will only recognise the "Watson and Crick event" in retrospect.

But those with a multidisciplinary background in consciousness studies, neuroscience, and cognitive science will probably recognise the transition immediately.

From that point on, things should snowball rather rapidly.

Wednesday, 28 April, 2010  
Anonymous Anonymous said...

Brains like ours - meaning human-like? What a bad idea. We should either aim for something a lot better, or make use of cat-level AI, which would be just low enough to avoid being blinded by its firmly held ideologies. Or we could do both. Let's just avoid anything at the human level.

Frankly, human-level intelligence is the worst possible level: too low to avoid stupidity, but high enough to pursue that stupidity with a dangerous level of cleverness and determination. Raccoons, monkeys and humans are the reason the word "pesky" is useful.

Wednesday, 28 April, 2010  
Blogger al fin said...

Baron, you've been reading too much Vonnegut. Of course, I happen to agree with the logic.

Unfortunately, there is probably no "right" level of intelligence.

Intelligence, wisdom, executive function, grit, character, creativity, empathy, social cohesion, etc. You can't separate out just one feature and then call it human.

Thursday, 29 April, 2010  
