01 June 2008

The Best Way to Build a Conscious Machine?

If you were tasked with building a conscious machine, how would you go about it? The group that achieves such a goal stands to control wealth beyond most imagining.
Two complementary strategies come to mind: either copying the mammalian brain or evolving a machine. Research groups worldwide are already pursuing both strategies, though not necessarily with the explicit goal of creating machine consciousness.

Though both of us work with detailed biophysical computer simulations of the cortex, we are not optimistic that modeling the brain will provide the insights needed to construct a conscious machine in the next few decades. Consider this sobering lesson: the roundworm Caenorhabditis elegans is a tiny creature whose brain has 302 nerve cells. Back in 1986, scientists used electron microscopy to painstakingly map its roughly 6000 chemical synapses and its complete wiring diagram. Yet more than two decades later, there is still no working model of how this minimal nervous system functions.

Now scale that up to a human brain with its 100 billion or so neurons and a couple hundred trillion synapses. Tracing all those synapses one by one is close to impossible, and it is not even clear whether it would be particularly useful, because the brain is astoundingly plastic, and the connection strengths of synapses are in constant flux. Simulating such a gigantic neural network model in the hope of seeing consciousness emerge, with millions of parameters whose values are only vaguely known, will not happen in the foreseeable future.
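To get a feel for just how daunting that scale is, here is a rough back-of-envelope sketch. The bytes-per-element figure is our own assumption (a single static 32-bit number per synapse and per neuron, ignoring plasticity and dynamics entirely), so treat the result as a floor, not an estimate of a real simulation's cost:

```python
# Back-of-envelope: storage for a static snapshot of a human brain.
neurons = 1e11          # ~100 billion neurons (from the text)
synapses = 2e14         # ~200 trillion synapses (from the text)
bytes_each = 4          # assumption: one 32-bit number per element

weight_tb = synapses * bytes_each / 1e12   # ~800 TB of synaptic weights
state_gb = neurons * bytes_each / 1e9      # ~400 GB of neuron states

print(f"{weight_tb:.0f} TB of weights, {state_gb:.0f} GB of state")
```

And that is before modeling any of the time-varying biophysics, or the millions of vaguely known parameters the excerpt mentions.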

A more plausible alternative is to start with a suitably abstracted mammal-like architecture and evolve it into a conscious entity. Sony's robotic dog, Aibo, and its humanoid, Qrio, were rudimentary attempts; they operated under a large set of fixed rules. Those rules yielded some impressive, lifelike behavior—chasing balls, dancing, climbing stairs—but such robots have no chance of passing our consciousness test.

So let's try another tack. At MIT, computational neuroscientist Tomaso Poggio has shown that vision systems based on hierarchical, multilayered maps of neuronlike elements perform admirably at learning to categorize real-world images. In fact, they rival the performance of state-of-the-art machine-vision systems. Yet such systems are still very brittle. Move the test setup from cloudy New England to the brighter skies of Southern California and the system's performance suffers. To begin to approach human behavior, such systems must become vastly more robust; likewise, the range of what they can recognize must increase considerably to encompass essentially all possible scenes.
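As a rough illustration of what "hierarchical, multilayered maps of neuronlike elements" means in practice, here is a toy sketch in the spirit of Poggio's approach: layers of template-matching "simple" units alternating with max-pooling "complex" units that buy some tolerance to position shifts. The templates, sizes, and image are invented for illustration; the real model is far richer:

```python
import numpy as np

def s_layer(image, templates):
    """'Simple' layer: slide each template over the image and
    record how strongly each position matches it."""
    th, tw = templates[0].shape
    maps = []
    for t in templates:
        out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i+th, j:j+tw] * t)
        maps.append(out)
    return maps

def c_layer(maps, pool=2):
    """'Complex' layer: local max-pooling, giving tolerance to
    small shifts in where a feature appears."""
    pooled = []
    for m in maps:
        h, w = m.shape
        trimmed = m[:h - h % pool, :w - w % pool]
        pooled.append(
            trimmed.reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
        )
    return pooled

# Tiny demo: two oriented "edge" templates over a random image.
rng = np.random.default_rng(0)
image = rng.random((16, 16))
templates = [np.array([[1., -1.], [1., -1.]]),   # vertical edge
             np.array([[1., 1.], [-1., -1.]])]   # horizontal edge
features = c_layer(s_layer(image, templates))
print([f.shape for f in features])
```

Stacking more such layers yields increasingly invariant features, but as the excerpt notes, even systems like this remain brittle when lighting or context changes.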

Contemplating how to build such a machine will inevitably shed light on scientists' understanding of our own consciousness. And just as we ourselves have evolved to experience and appreciate the infinite richness of the world, so too will we evolve constructs that share with us and other sentient animals the most ineffable, the most subjective of all features of life: consciousness itself. __IEEESpectrum

One fascinating approach to gauging the complexity of any given model of "consciousness" is the integrated-information theory (IIT) of consciousness. IIT offers a way to quantify a given model's level of complexity, providing a kind of "report card" and progress report for the modelers.
IIT introduces a measure of integrated information, represented by the symbol Φ and given in bits, that quantifies the reduction of uncertainty: that is, the information generated when a system enters a particular state through causal interactions among its parts. This measure is above and beyond the information generated independently within the parts themselves. The parts should be chosen in such a way that they account for as much nonintegrated (independent) information as possible.

If a system has a positive value of Φ (and it is not included within a larger subset having higher Φ), it is called a complex. When a complex enters a particular state of its repertoire, it generates an amount of integrated information corresponding to Φ. Thus, a simple photodiode that can detect the presence or absence of light is a complex with Φ=1 bit. The sensor chip of a digital camera, on the other hand, would not form a complex: as such it would have Φ=0 bits, as each photodiode does its job independently of the others. In principle, it can be decomposed into individual photodiodes, each with Φ=1 bit.
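To make the photodiode-versus-camera-chip contrast concrete, here is a toy calculation. It drastically simplifies the actual IIT formalism, which searches over all partitions of a system for the one that minimizes integrated information; here we simply compare the information generated by the whole against what its independent parts already account for:

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A photodiode settling into one of two equally likely states
# (light / dark) removes 1 bit of uncertainty.  Per the excerpt,
# such a diode is a minimal complex with phi = 1 bit.
phi_diode = entropy_bits([0.5, 0.5])           # -> 1.0 bit

# A chip of N independent diodes: the whole generates N bits when
# it enters a state, but the parts taken alone already account for
# all N bits, so nothing is integrated above the parts: phi = 0.
N = 4
info_whole = entropy_bits([1 / 2**N] * 2**N)   # -> 4.0 bits
info_parts = N * phi_diode                     # -> 4.0 bits
phi_chip = info_whole - info_parts             # -> 0.0 bits

print(phi_diode, phi_chip)
```

The point of the toy: information alone is cheap; what matters for Φ is information that cannot be decomposed into independent contributions from the parts.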

Within the awake human brain, on the other hand, there must be some complex whose Φ value is on average very high, corresponding to our large repertoire of conscious states that are experienced as a whole. Because integrated information can be generated only within a complex and not outside its boundaries, it follows that consciousness is necessarily subjective, private, and related to a single point of view or perspective. __More: Source

Christof Koch and Giulio Tononi, authors of the articles excerpted above, are both accomplished scholars and modelers of consciousness. If they do not expect human-level machine consciousness within the next few decades (at least via the emulation route), I am inclined to take their opinions seriously. Still, they hold out the possibility that an alternative approach, one combining "top-down" with "bottom-up" methods, may achieve remarkable results much sooner.

I have long been an admirer of Rodney Brooks' "bottom-up" approach to highly functional machines that give the appearance (at least) of intentional action.

The best AI and robotics labs in North America, Asia, and Europe are competing to see who will be the first to make the breakthrough.


3 Comments:

Blogger Snake Oil Baron said...

It is very difficult, if not impossible, to overemphasize the importance that the development of artificial general intelligence would have on every aspect of human life. Even twit-level intelligence (i.e., able to do various types of tasks with some supervision) would vastly magnify society's productivity.

It would mean vast savings on energy, money, time and human frustration.

Sunday, 01 June, 2008  
Blogger al fin said...

Yes, the easy way out. What happens to all the human twits when machine twits come along and make them superfluous?

Most graduates of secondary and undergraduate schools are little more than twits, after all. What of that younger horde of twits, whom the older generations are counting on to pay for their retirement? No jobs, no taxes to support governments, no social programs.

Machine intelligences, light and dark.

Do not be so sure that governments will be able to harness all of that twit-energy for their own productive and socially beneficial uses.

Don't assume that beneficent governments will control the new forces. Instead, think of all the technologies of massive productivity under the control of tyrants with no ethical concerns.

The magnified productivity of despotism.

Monday, 02 June, 2008  
Blogger The Irrefutable Fool said...

Any AI worth its salt will be capable of being a much worse tyrant than any human. Think of all the science that's gone into advertising, at the instant disposal of such a mind. We wouldn't even know what hit us.

The biggest issue is how to ensure an ethical intelligence. Otherwise we're toast. The Fermi Paradox perhaps?

Monday, 02 June, 2008  


