Trying to program consciousness into computers has been an ongoing multi-decadal abysmal failure. The "top-down" approach of programming artificial intelligence into digital computing architectures has been bogged down by the huge differences between how the human brain works to create the mind, and how the human mind works to create human artifacts.
Basically, humans are stupid. Humans are stupid for believing that their conscious, rational minds can encompass the complexity of their 100-billion-neuron brains without getting their feet wet and their hands dirty in the blood, gore, and sinew of low-level, bottom-up, emergent phenomena. A few researchers have been working from the netherworld of thought, and are making progress.
As Dartmouth neuroscientist and Director of the Brain Engineering Lab Richard Granger puts it, “The history of top-down-only approaches is spectacular failure. We learned a ton, but mainly we learned these approaches don’t work.”
Gerald Edelman, a Nobel Prize-winning neuroscientist and Chairman of Neurobiology at Scripps Research Institute, first described the neurobotics approach back in 1978. In his “Theory of Neuronal Group Selection,” Edelman essentially argued that any individual’s nervous system employs a selection system similar to natural selection, though operating with a different mechanism. “It’s obvious that the brain is a huge population of individual neurons,” says UC Irvine neuroscientist Jeff Krichmar. “Neuronal Group Selection meant we could apply population models to neuroscience, we could examine things at a systems’ level.” This systems approach became the architectural blueprint for moving neurobotics forward.
The robots in Jeff Krichmar’s lab don’t look like much. CARL-1, his latest model, is a squat, white trash can contraption with a couple of shopping cart wheels bolted to its side, a video camera wired to the lid, and a couple of bunny ears taped on for good measure. But open up that lid and you’ll find something remarkable — the beginnings of a truly biological nervous system. CARL-1 has thousands of neurons and millions of synapses that, he says, “are just about at the edge of the size and complexity found in real brains.” Not surprisingly, robots built this way — using the same operating principles as our nervous system — are called neurobots.
Krichmar emphasizes that these artificial nervous systems are based upon neurobiological principles rather than computer models of how intelligence works. The first of those principles, as he describes it, is: “The brain is embodied in the body and the body is embedded in the environment — so we build brains and then we put these brains in bodies and then we let these bodies loose in an environment to see what happens.” This has become something of a foundational principle — and the great and complex challenge — of neurobotics.
When you embed a brain in a body, you get behavior not often found in other robots.
Other attempts to build a brain from the bottom up include the Swiss Blue Brain project, which is trying to build the cortical columns of a rat, then perhaps the entire cortex of a rat. From there, who knows?
Jeff Hawkins' Hierarchical Temporal Memory starts at a higher level than Blue Brain, but still grapples with the low-level, essential messiness of the birthing of thought.
The late Francisco Varela, Mark Johnson, and others struggled with the concept of the embodied mind for decades, while their colleagues in cognitive science and artificial intelligence were beating themselves up attempting the top-down approach to intelligent machines. In the robotics field, Rodney Brooks was among the first to take a bottom-up approach to building robot brains.
Some of our most brilliant scientists and engineers have crashed and burned in the attempt to program minds from the top down. The problem is a conceptual one, but it often takes decades of failed attempts before even the most brilliant researcher understands the source of his failure. High intelligence is no protection from conceptual blindness. Sometimes it only makes it worse.
In theoretical biology, there is the concept of autopoiesis -- self-organizing, self-constructing phenomena. Nanotechnologists are borrowing the idea from biology in a pragmatic attempt to skip over some basic steps in nano-construction. Gerald Edelman -- a Nobel Prize winner in immunology -- took his mastery of biological ideas to cognitive science, and began applying autopoiesis to cognitive machines. It was a good idea, and progress is being made with it.
Whether humans will learn to "grow minds" intact -- as a whole -- or whether they will grow mental modules that can combine with each other in various ways to create multiple kinds of minds, the concept of autopoiesis will be key to the creation of intelligent machines.
No doubt we will apply modifications and elaborations to these incubated minds, using top-down programming methods, but the core intelligence will have been evolved. Most people of "between levels" status will never understand that their brains and their minds work differently. They don't need to understand.
For next level humans, the concept will be elementary, simply a starting point as intuitively obvious as the hardness of stone or the wetness of liquid water.
Intelligent machines are a distinct possibility for the near term -- twenty years or so. Of course, once intelligent machines begin to evolve and combine ... and re-combine ... and re-combine ... who can say where the process ends? That is why more intelligent, wiser, and broader perspective humans are vital to the future -- and very soon.
That's why we can't afford bad government, bad media, bad academia, and bad child-raising any longer. Because the clock is ticking. Despite the Obama depression, despite the global jihad, despite the looming intolerant Chinese hegemony ... the clock is ticking.
There are a lot of things that need paying attention to. Who will be paying attention in 50 years?
Labels: consciousness, Gerald Edelman, robots