22 October 2009

The Impossible Optimism of the Quest for AGI

AGI is not impossible, not at all. It is the accelerated optimism expressed by people such as Ray Kurzweil, and by others who expect human-level AGI in the near future, that is unrealistic to the point of impossibility. But it is fascinating to watch the centuries-old quest continue to develop. (AGI roadmap via Brian Wang)

Brian Wang presents an interview with AI researcher Itamar Arel. Arel expresses an almost surreal optimism in predicting a "human-level AI" within 10 years.

Brian's interview with Arel is my first exposure to his thinking and research. It is encouraging to see bright young researchers trying novel approaches to the design and attempted implementation of machine intelligence.
...Question 8: Is reverse-engineering the human brain a necessary prerequisite for AGI?

Answer: There are two schools of thought on this subject. One school of thought advocates reverse-engineering the brain as a necessary precursor to creating a sentient machine. This is often referred to as "whole brain emulation". The other school of thought argues that replicating the human brain is an unnecessary task that would take decades. I agree with the latter - there are quicker and easier ways to impart intelligence to a machine.

...Question 12: If you were given a petaflop supercomputer, could you create an AGI now?

Answer: The computational resources are actually readily available. We could probably achieve rudimentary AGI with a fairly modest cluster of servers. That is one of the main advantages of not trying to emulate the human brain - accurately simulating neurons and synapses requires prodigious quantities of compute power.

...Question 14: Assuming sufficient funding, how much progress do you anticipate by 2019?

Answer: With sufficient funding, I am confident that a breakthrough in AI could be demonstrated within 3 years. This breakthrough would result in the creation of a "baby" AI that would exhibit rudimentary sentience and would have the reasoning capabilities of a 3-year-old child. Once a "baby" AI is created, funding issues should essentially disappear since it will be obvious at that point that AGI is finally within reach. So by 2019 we could see AGI equivalent to a human adult, and at that point it would only be a matter of time before superintelligent machines emerge. _Brian Wang's Interview w/ Itamar Arel
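
Arel's answer to Question 12 is worth making concrete. Here is a rough back-of-the-envelope comparison in Python -- my own arithmetic, not Arel's, with every figure a commonly cited ballpark assumption -- showing why the brain-emulation route is the computationally expensive one:

NEURONS = 8.6e10            # assumption: roughly 86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1e4   # assumption: ~10,000 synapses per neuron on average
UPDATE_RATE_HZ = 100        # assumption: each synapse updated ~100 times per second
OPS_PER_UPDATE = 10         # assumption: ~10 arithmetic operations per synaptic update

brain_sim_flops = NEURONS * SYNAPSES_PER_NEURON * UPDATE_RATE_HZ * OPS_PER_UPDATE

SERVERS = 100               # assumption: a "fairly modest cluster", circa 2009
FLOPS_PER_SERVER = 5e10     # assumption: ~50 GFLOPS per commodity server

cluster_flops = SERVERS * FLOPS_PER_SERVER

print("Naive synapse-level simulation: %.1e FLOPS" % brain_sim_flops)
print("Modest 100-server cluster:      %.1e FLOPS" % cluster_flops)
print("Shortfall factor:               %.0fx" % (brain_sim_flops / cluster_flops))

On these assumptions a naive synapse-level simulation falls short of a modest cluster by roughly five orders of magnitude -- which is precisely the gap Arel proposes to sidestep by not simulating neurons at all. Change any of the assumed numbers by an order of magnitude and the conclusion barely moves.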
Be sure to watch the video interview in addition to reading Brian's written interview, to get a broader view of Arel's ideas. He has clearly learned a great deal from the mistakes made by earlier AI researchers. Just as clearly, the new generation of AI researchers still has a great deal to learn.

The video interview went into more detail, with a brief discussion of the "embodied" nature of intelligence, and how that might apply to AI. Clearly it is an issue that Arel has thought about, and considers important to some degree. Arel also mentions the importance of a hierarchical approach to knowledge of the world, which is an idea that is pivotal to the "brain emulation" approaches to AI exemplified by Jeff Hawkins' group. Finally, Arel at least gives lip service to the importance of designing special hardware to implement intelligence in a machine.
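
For readers unfamiliar with the term, here is a toy sketch in Python of what "hierarchical" means in this context. It is not Arel's actual architecture and not Hawkins' HTM -- just the general shape of the idea: each level summarizes small patches of the level below, so higher levels represent larger contexts in coarser, more abstract terms.

def summarize(patch):
    # Stand-in "abstraction" step: collapse a patch to a single value (its mean).
    return sum(patch) / len(patch)

def build_hierarchy(signal, patch_size=2, levels=3):
    # Return progressively coarser representations of the input signal.
    hierarchy = [signal]
    for _ in range(levels):
        below = hierarchy[-1]
        above = [summarize(below[i:i + patch_size])
                 for i in range(0, len(below), patch_size)]
        hierarchy.append(above)
    return hierarchy

# An 8-element "sensory" input collapses into ever smaller, more abstract layers.
for level, layer in enumerate(build_hierarchy([1, 1, 2, 2, 8, 8, 9, 9])):
    print(level, layer)

Real systems of this kind learn their summaries from data and pass information both up and down the hierarchy; the sketch shows only the layered structure.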

Machine intelligence research is desperately in need of multi-disciplinary thinkers with a grounding in philosophy, neuroscience, cognitive psychology, mathematical modeling, and several other arbitrarily isolated fields of study. But such multi-disciplinarians are not being trained in any university. This makes the job harder and slower than it needs to be. The job will get done, but not likely anytime close to the predictions of Arel and Kurzweil.

Itamar Arel
Arel's Roadmap Wiki
More Comprehensive PDF Description of AGI Roadmap

Wikipedia's AI article -- a good overview of the topic with links, and a good place to start for beginners.

Embodied Cognition from Wikipedia -- a crucial, but often omitted or underestimated, aspect of making machines intelligent.

Most AGI researchers ignore the problem of consciousness, and the distinct types and levels of consciousness exhibited by animal nervous systems of differing complexity in nature. Too many AI specialists assume that consciousness arises in brains due to the powerful and complex computational mechanisms of massively parallel neocortices. They make rough estimates of the computational power of brains, and assume that machines with similar computational power will have the capacity for intelligence.

But that approach, of course, is absurd. By treating the brain as a "black box", and refusing to look closely at the contents and mechanisms of the box, too many AI researchers deprive themselves of the knowledge needed to make the foundational breakthroughs. That is what comes from hyper-specialisation in modern academia.

Look for outsiders to make some of the pivotal breakthroughs in machine intelligence, simply because outsiders have not been steeped in all the approaches that are not likely to work.

3 Comments:

Blogger neil craig said...

We are effectively certain that dolphins have a language of considerably greater complexity than our own despite not having discovered fire (indeed dolphins may think it is impossible to have an advanced civilisation except underwater where sound travels so much better). Yet we understand none of it.

What happens if we invent AI not reverse-engineered from humans & find ourselves unable to communicate with it?

Friday, 23 October, 2009  
Blogger al fin said...

Good point, Neil.

Machine intelligence will be evolved over a period of time. It will require a great deal of human input -- which will leave its imprint upon the developing intelligence.

But it is possible that the first clue humans have that a better-than-human intelligence exists is when they receive their eviction notice from the planet.
;-)

Friday, 23 October, 2009  
Blogger Sword S said...

If the AI is designed suitably similar to humans (for example, with a more flexible learning phase early on followed by a more limited learning phase from then on, among other things), then, like children -- even high-IQ ones -- it could possibly be easily influenced by its "parents" (into religious beliefs, fanatic beliefs, etc.), that is, into extremely resistant beliefs and goals which it wouldn't want to change but rather follow and enforce as it improved itself.

Friday, 23 October, 2009  
