10 June 2009

Building a Conscious Machine

Trying to program consciousness into computers has been an ongoing, decades-long, abysmal failure. The "top-down" approach of programming artificial intelligence into digital computing architectures has been bogged down by the huge difference between how the human brain works to create the mind and how the human mind works to create human artifacts.

Basically, humans are stupid. Humans are stupid for believing that their conscious, rational minds can encompass the complexity of their 100 billion neuron brains without getting their feet wet and their hands dirty in the blood, gore, and sinew of low level, bottom-up, emergent phenomena. A few researchers have been working from the netherworld of thought, and are making progress.
As Dartmouth neuroscientist and Director of the Brain Engineering Lab Richard Granger puts it, “The history of top-down-only approaches is spectacular failure. We learned a ton, but mainly we learned these approaches don’t work.”

Gerald Edelman, a Nobel Prize-winning neuroscientist and Chairman of Neurobiology at Scripps Research Institute, first described the neurobotics approach back in 1978. In his “Theory of Neuronal Group Selection,” Edelman essentially argued that any individual’s nervous system employs a selection system similar to natural selection, though operating with a different mechanism. “It’s obvious that the brain is a huge population of individual neurons,” says UC Irvine neuroscientist Jeff Krichmar. “Neuronal Group Selection meant we could apply population models to neuroscience, we could examine things at a systems’ level.” This systems approach became the architectural blueprint for moving neurobotics forward.
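Edelman's selectionist idea can be caricatured in a few lines of code: maintain a population of competing "neuronal groups," amplify the ones whose responses the environment rewards, and prune the rest. A toy sketch in Python -- not Edelman's actual model; the population size, target value, and variation scale are arbitrary illustrations:

```python
import random

random.seed(1)

# Each "neuronal group" is caricatured as a single response bias.
# Selection amplifies the groups whose responses the environment rewards.
groups = [random.random() for _ in range(50)]   # initial repertoire
target = 0.8                                    # what the environment rewards

for generation in range(200):
    # Differential amplification: groups closer to the rewarded response
    # survive; the rest are pruned.
    groups.sort(key=lambda g: abs(g - target))
    survivors = groups[:25]
    # Survivors are copied with slight variation -- a re-weighting of an
    # existing repertoire, as in selectionism, not fresh invention.
    groups = survivors + [g + random.gauss(0, 0.01) for g in survivors]

best = min(groups, key=lambda g: abs(g - target))
```

The point of the caricature is the population-level view Krichmar describes: no single unit is programmed to produce the right answer; the repertoire as a whole converges on it.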

The robots in Jeff Krichmar’s lab don’t look like much. CARL-1, his latest model, is a squat, white trash can contraption with a couple of shopping cart wheels bolted to its side, a video camera wired to the lid, and a couple of bunny ears taped on for good measure. But open up that lid and you’ll find something remarkable — the beginnings of a truly biological nervous system. CARL-1 has thousands of neurons and millions of synapses that, he says, “are just about the edge of the amount of size and complexity found in real brains.” Not surprisingly, robots built this way — using the same operating principles as our nervous system — are called neurobots.
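An artificial nervous system at that scale is typically simulated as a population of simple spiking units wired through a sparse synaptic weight matrix. A minimal sketch, assuming leaky integrate-and-fire dynamics in NumPy -- the parameters here are illustrative, not CARL-1's actual values:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000                       # neurons; CARL-1 reportedly runs thousands
# Sparse random synapses: ~10% connectivity, small mixed-sign weights.
W = rng.normal(0.0, 0.05, (N, N)) * (rng.random((N, N)) < 0.1)

v = np.zeros(N)                # membrane potentials
tau, v_thresh, v_reset = 20.0, 1.0, 0.0

def step(v, external, dt=1.0):
    """One leaky integrate-and-fire update: spike, reset, leak, integrate."""
    spikes = v >= v_thresh                 # which neurons fire this step
    v = np.where(spikes, v_reset, v)       # firing neurons reset
    dv = (-v + W @ spikes + external) / tau
    return v + dt * dv, spikes

total_spikes = 0
for t in range(200):
    v, spikes = step(v, rng.random(N) * 2.0)   # noisy driving input
    total_spikes += int(spikes.sum())
```

Even this toy version shows why the bottom-up approach is a different kind of engineering: nothing in the code states what the network "thinks"; activity patterns emerge from the wiring.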

Krichmar emphasizes that these artificial nervous systems are based upon neurobiological principles rather than computer models of how intelligence works. The first of those principles, as he describes it, is: “The brain is embodied in the body and the body is embedded in the environment — so we build brains and then we put these brains in bodies and then we let these bodies loose in an environment to see what happens.” This has become something of a foundational principle — and the great and complex challenge — of neurobotics.
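The "brain in a body, body in an environment" principle reduces to a closed sense-act-adapt loop. A minimal sketch; the class and method names here are hypothetical illustrations, not the lab's actual software:

```python
class Brain:
    def __init__(self):
        self.weights = {}          # plastic connections, adapted by experience

    def act(self, senses):
        # Map sensory input to a motor command; a real neurobot would run
        # a neural simulation here instead of a lookup.
        return self.weights.get(senses, "explore")

    def adapt(self, senses, action, reward):
        # Strengthen whatever the robot just did if it paid off.
        if reward > 0:
            self.weights[senses] = action

class Body:
    # The brain never touches the world directly, only through the body.
    def sense(self, world):
        return world.reading()

    def execute(self, action, world):
        return world.apply(action)

def live(brain, body, world, steps=100):
    # The foundational loop: sense, act, feel the consequences, adapt.
    for _ in range(steps):
        senses = body.sense(world)
        action = brain.act(senses)
        reward = body.execute(action, world)
        brain.adapt(senses, action, reward)
```

The "great and complex challenge" lives in that loop: the brain's behavior cannot be specified in advance, because it depends on a body and an environment the programmer does not fully control.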

When you embed a brain in a body, you get behavior not often found in other robots. -- h+ Magazine
Other attempts to build a brain from the bottom up include the Swiss Blue Brain project. Blue Brain is trying to build the cortical columns of a rat, then perhaps the entire cortex of a rat. From there, who knows?

Jeff Hawkins' Hierarchical Temporal Memory starts at a higher level than Blue Brain, but still grapples with the low level, essential messiness of the birthing of thought.

The late Francisco Varela, Mark Johnson, and others have struggled with the concept of the embodied mind for decades, while their colleagues in cognitive science and artificial intelligence were beating themselves up attempting the top-down approach to intelligent machines. In the robotics field, Rodney Brooks was among the first to take a bottom-up approach to building robot brains.

Some of our most brilliant scientists and engineers have crashed and burned in the attempt to program minds from the top down. The problem is a conceptual one, but it often takes decades of failed attempts before even the most brilliant researcher understands the source of his failure. High intelligence is no protection from conceptual blindness. Sometimes it only makes it worse.

In theoretical biology, there is the concept of autopoiesis -- self-organizing, self-constructing phenomena. Nanotechnology is borrowing the idea from biology in a pragmatic attempt to skip over some basic steps in nano-construction. Gerald Edelman -- a Nobel Prize winner in immunology -- took his mastery of biological ideas to cognitive science, and began applying autopoiesis to cognitive machines. It was a good idea, and progress is being made with it.

Whether humans will learn to "grow minds" intact -- as a whole -- or whether they will grow mental modules that can combine with each other in various ways to create multiple kinds of minds, the concept of autopoiesis will be key to the creation of intelligent machines.

No doubt we will apply modifications and elaborations to these incubated minds, using top-down programming methods, but the core intelligence will have been evolved. Most people of "between levels" status will never understand that their brains and their minds work differently. They don't need to understand.

For next level humans, the concept will be elementary, simply a starting point as intuitively obvious as the hardness of stone or the wetness of liquid water.

Intelligent machines are a distinct possibility for the near term -- twenty years or so. Of course, once intelligent machines begin to evolve and combine ... and re-combine ... and re-combine ... who can say where the process ends? That is why more intelligent, wiser, and broader perspective humans are vital to the future -- and very soon.

That's why we can't afford bad government, bad media, bad academia, and bad child-raising any longer. Because the clock is ticking. Despite the Obama depression, despite the global jihad, despite the looming intolerant Chinese hegemony ... the clock is ticking.

There are a lot of things that need paying attention to. Who will be paying attention in 50 years?




Anonymous said...

Al, you should check this guy out if you haven't. He does come with past bona fides from working on this stuff for the feds. His claims are extraordinary.

Wednesday, 10 June, 2009  
Bruce Hall said...

As a father of three and a grandfather twice over this year, I can appreciate the problems of a programmer trying to create a system capable of thought.

Watching my grandsons as infants reinforces the proposition that humans are not born with a "thought process" but rather a system capable of learning. Human brains seem far too incomplete in newborns to do anything more than "hardwired" activities: sucking, defecating, and sleeping/waking. The "hardwired" part is critical, but relegated to the secondary level as the brain adds more functionality [grows] and creates new networks. Computers lack this element and programmers try to compensate for human brain growth with programming.

The most difficult part of programming is the feedback loop, which in humans is the five senses. They serve not only as initial input that can trigger a thought [computer calculation], but also immediately and continuously reinforce or repudiate the thought. I would guess that a single thought involves many elements of feedback and alteration in an instant. After a while, the brain can filter out the process as background noise and allow the infant to begin to integrate neuron firing with controlled action.

Just describing the generalities gives me a headache. I find it incredible that a programmer would attempt to generate both the computer logic and feedback process to reach a human "thought" such as "I'm hungry, but I don't like what I see in the refrigerator and I might want to go out for pizza later if the guys want to see the game at the bar unless... damn, I stubbed my toe and it might be broken and I don't think sitting in a bar will be much fun except a few cold ones might make me feel better except I can't drive and I need to get a ride but I forgot to get those emails out and that leftover sausage looks good."

Wednesday, 10 June, 2009  


“During times of universal deceit, telling the truth becomes a revolutionary act” -- George Orwell

