The Impossible Optimism of the Quest for AGI
Brian Wang presents an interview with AI researcher Itamar Arel. Arel expresses an almost surreal optimism in predicting a "human-level AI" within 10 years.
Brian's interview with Arel is my first exposure to his thinking and research. It is encouraging to see bright young researchers trying novel approaches to the design and attempted implementation of machine intelligence.
Be sure to watch the video interview on top of reading Brian's interview, to get a broader view of Arel's ideas. He clearly has learned a great deal from mistakes made by earlier AI researchers. Just as clearly, the new generation of AI researchers still has a great deal to learn.

...Question 8: Is reverse-engineering the human brain a necessary prerequisite for AGI?
Answer: There are two schools of thought on this subject. One school of thought advocates reverse-engineering the brain as a necessary precursor to creating a sentient machine. This is often referred to as "whole brain emulation". The other school of thought argues that replicating the human brain is an unnecessary task that would take decades. I agree with the latter - there are quicker and easier ways to impart intelligence to a machine.
...Question 12: If you were given a petaflop supercomputer, could you create an AGI now?
Answer: The computational resources are actually readily available. We could probably achieve rudimentary AGI with a fairly modest cluster of servers. That is one of the main advantages of not trying to emulate the human brain - accurately simulating neurons and synapses requires prodigious quantities of compute power.
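Arel's point about the cost of synapse-level simulation can be made concrete with a rough back-of-envelope calculation. The figures below are commonly cited ballpark numbers, not Arel's own, and the ops-per-event constant is an assumption for illustration:

```python
# Back-of-envelope estimate of the compute needed to simulate a human
# brain at the level of individual synaptic events, compared against a
# single petaflop machine. All figures are order-of-magnitude guesses.

NEURONS = 8.6e10              # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1e4     # ~10,000 synapses per neuron (rough average)
AVG_FIRING_RATE_HZ = 10       # average spikes per second (order of magnitude)
OPS_PER_SYNAPTIC_EVENT = 10   # assumed operations to update one synapse

ops_per_second = (NEURONS * SYNAPSES_PER_NEURON
                  * AVG_FIRING_RATE_HZ * OPS_PER_SYNAPTIC_EVENT)

PETAFLOP = 1e15
print(f"Estimated ops/s for synapse-level simulation: {ops_per_second:.1e}")
print(f"Equivalent petaflop machines: {ops_per_second / PETAFLOP:.0f}")
# → roughly 8.6e16 ops/s, i.e. on the order of 86 petaflop machines
```

Even with these crude numbers, emulation lands one to two orders of magnitude beyond a single petaflop system, which is the gap Arel's non-emulation approach is meant to sidestep.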
...Question 14: Assuming sufficient funding, how much progress do you anticipate by 2019?
Answer: With sufficient funding, I am confident that a breakthrough in AI could be demonstrated within 3 years. This breakthrough would result in the creation of a "baby" AI that would exhibit rudimentary sentience and would have the reasoning capabilities of a 3 year old child. Once a "baby" AI is created, funding issues should essentially disappear since it will be obvious at that point that AGI is finally within reach. So by 2019 we could see AGI equivalent to a human adult, and at that point it would only be a matter of time before superintelligent machines emerge.

-- Brian Wang's Interview with Itamar Arel
The video interview went into more detail, with a brief discussion of the "embodied" nature of intelligence and how that might apply to AI. Clearly it is an issue that Arel has thought about and considers important, at least to some degree. Arel also mentions the importance of a hierarchical approach to knowledge of the world, an idea that is pivotal to the "brain emulation" approaches to AI exemplified by Jeff Hawkins' group. Finally, Arel at least pays lip service to the importance of designing special hardware to implement intelligence in a machine.
Machine intelligence research is desperately in need of multi-disciplinary thinkers with a grounding in philosophy, neuroscience, cognitive psychology, mathematical modeling, and several other arbitrarily isolated fields of study. But such multi-disciplinarians are not being trained in any university. This makes the job harder and slower than it needs to be. The job will get done, but not likely anytime close to the predictions of Arel and Kurzweil.
Arel's Roadmap Wiki
More Comprehensive PDF Description of AGI Roadmap
Wikipedia's AI article -- a good overview of the topic with links; a good place to start for beginners.
Embodied Cognition from Wikipedia -- a crucial, but often omitted or underestimated, aspect of making machines intelligent.
Most AGI researchers ignore the problem of consciousness, and the distinct differences between the types and levels of consciousness exhibited by animal nervous systems of varying complexity. Too many AI specialists assume that consciousness arises in brains due to the powerful and complex computational mechanisms of massively parallel neocortices. They make rough estimates of the computational power of brains, and assume that machines with similar computational power will have the capacity for intelligence.
But that approach, of course, is absurd. By treating the brain as a "black box" and refusing to look closely at the contents and mechanisms of that box, too many AI researchers deprive themselves of knowledge crucial to making those foundational breakthroughs. That is what comes from hyper-specialisation in modern academia.
Look for outsiders to make some of the pivotal breakthroughs in machine intelligence, simply because outsiders have not been steeped in all the approaches that are not likely to work.
Labels: artificial intelligence