Artificial Intelligence: Experts Admit that AI Stinks
Marvin Minsky, Patrick Winston, Noam Chomsky, and other thinkers, researchers, engineers, and scientists believe that artificial intelligence has gone wrong. And the way things have been going, it is not likely that AI researchers will get on "the right track" for some time -- since most of them cannot seem to understand the problem.
Several up-and-coming AI workers (plus Ray Kurzweil) have predicted that human-level AI will be achieved within the next 10 to 20 years. Where is the progress they can point to in order to justify these claims? Within their own imaginations, apparently. According to some of the thinkers and AI pioneers quoted at the link above, the problem of AI is not being approached correctly. One of the most piercing criticisms of the field is that it focuses on tactics rather than strategy.
...clearly, the AI problem is nowhere near being solved. Why? For the most part, the answer is simple: no one is really trying to solve it. This may come as a surprise to people outside the field. What have all those AI researchers been doing all these years? The reality is that they have largely given up on the grand ambitions of AI and are instead working on increasingly specialized subproblems: not just machine learning or natural-language understanding, say, but issues within those areas, like classifying objects or parsing sentences. _Technology Review

This is clearly true. What is tragic is that a large proportion of the current crop of top-level AI researchers do not seem to understand the difference between strategy and tactics in the context of AI. Strategy in AI calls for much deeper thinking than tactics -- thinking which may in fact be beyond the capacity of most researchers, and even beyond the range of philosophers such as Dan Dennett, who has relatively low expectations for human-level AI in the foreseeable future.
As brilliant as earlier AI researchers such as John McCarthy, Marvin Minsky, and Seymour Papert were in their heyday, the extent of the problem of human-level AI was too poorly defined for anyone to grasp the challenge.
Modern researchers who make boastful claims about achieving human-level AI in 10-20 years do not have that excuse. Clearly, neither Kurzweil nor the other AI hopefuls truly grasp the problem as a whole. And that failure will be their undoing.
What would it take to succeed at human-level AI? A close-knit, multidisciplinary team of thinkers willing to try outlandish and counterintuitive approaches to the problem -- well out of the mainstream. To achieve human-type machine intelligence, high-level expertise in one's field is but the barest beginning, hardly a single step in the journey of a thousand miles. One must also have a child-like approach to the world, be both brilliant and incredibly humble in the face of the ineffable, and be able to integrate complex ideas from within and beyond one's own field of expertise into a new, working holism.
Of course it sounds like psychobabble, but when approaching something far too complex for words, such failures of communication are inevitable. Al Fin cognitive engineers recommend that the concepts of "embodied cognition" and "preverbal metaphor" be kept foremost in the minds of any hopeful AI developers.
For everyone else, don't get your hopes up too high. Higher education in the advanced world -- particularly in North America -- is moving into serious difficulty, which means that research facilities and funding are likely to be cut back, perhaps severely. The economies of the advanced nations, and the US in particular, are being badly mismanaged, which means that private sector efforts will also likely be cut back. The problem may require the emergence of an individual with the brilliance of Einstein, the persistence of Edison, and the wide-ranging coherent creativity of a da Vinci.
In other words, people need to become smarter -- at least some people. Then they can learn to evolve very smart machines, and perhaps to interface with networks of these very smart machines. Then, possibly, they can use this symbiotic intelligence to design even smarter people.
Because let's face it: Humans -- especially PC-cultured humans -- will only take us to the Idiocracy, sooner or later. And the Idiocracy will be not only stupid, but downright brutal.
Labels: artificial intelligence, Idiocracy
6 Comments:
A strategy for achieving human-level AGI is to build a basic structure with sub-components constructed to perform individual capabilities (read, write, see, hear, etc.) and a central unit that selects between them as required to perform a given task. Then add sub-components possessed of additional capabilities, such that you continuously need to enhance the central unit. Keep expanding and refining the basic construct and you will eventually replicate the necessary accretion of ability to develop a self-learning capability. At that point you will have copied the basic abilities of a newborn human that can be taught just like a human can.
There's your strategy: copy the human evolutionary development process at as compressed a time scale as technology and money permit. Tactics tend to originate as spontaneous responses to specific stimuli that succeed anyway, so stop worrying about developing them in advance of need.
I predict the initial construct will be equal in cubic volume to the USS Ronald Reagan with twice the power requirements.
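[Ed.: The architecture the comment above proposes -- capability sub-components plus a central unit that selects among them per task -- can be sketched in a few lines of Python. All class and function names here are hypothetical illustrations, not any real system.]

```python
class Capability:
    """A sub-component that handles one kind of task (read, write, see, hear...)."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler

    def perform(self, task):
        return self.handler(task)


class CentralUnit:
    """The central unit: selects the appropriate sub-component for each task."""
    def __init__(self):
        self.capabilities = {}

    def register(self, capability):
        # Registering new capabilities is how the construct "keeps expanding".
        self.capabilities[capability.name] = capability

    def dispatch(self, kind, task):
        if kind not in self.capabilities:
            raise ValueError(f"no capability registered for {kind!r}")
        return self.capabilities[kind].perform(task)


unit = CentralUnit()
unit.register(Capability("read", lambda text: text.upper()))
unit.register(Capability("count", lambda text: len(text.split())))

print(unit.dispatch("read", "hello world"))   # HELLO WORLD
print(unit.dispatch("count", "hello world"))  # 2
```

The hard part the comment glosses over, of course, is the `dispatch` step: in a real system, deciding which capability a task requires is itself an open AI problem.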
You seem to have an axe to grind with AI researchers. They never promised anything, they do not owe anything, and they will get there when they get there.
There are plenty of people smart enough to design AI. It's a world of 7 billion people, and there are an awful lot of smart ones, even if it's only 0.01% of them.
If one has not noticed the stellar advancements of AI in the past 20 years, he must have been deaf, blind, and, most importantly, ignorant.
Human-level AI is the endgame -- there are many milestones to hit before it becomes a possibility, but the progress so far has been steady.
Some people are doing work and making progress, and some just talk about how work and progress are "impossible".
Very interesting, Al!
I fear that, until computers can attain the complexity of the human brain, they won't develop as promised. Doubtless there is a reason why the human brain does NOT use microprocessors, so raw speed is not going to substitute.
Roger Penrose pointed out that there is no theory of the brain; it seems to work by its own scientific laws, neither classical nor quantum mechanics. Until we understand that, we will not succeed in this quest.
Will: Good one! To continue the analogy, the aircraft carrier would take 10,000 years to get from Norfolk to Liverpool, at a cost of 10^20 2005 US dollars per crossing.
Max: You could be right. The criticism here is directed toward those who project unrealistic timelines for the creation of human-level AI. As for the uber-technical tacticians who euphemistically call themselves "AI researchers," the best of luck to them.
Perhaps they will eventually get to human-level AI, and perhaps -- forgive my irreverence -- a band of monkeys on word processors, tapping randomly, will eventually create the works of Shakespeare in their entirety, in proper grammar and order of creation.
Timothy: Thanks. As you point out, the lack of a broad-perspective theory and strategy for achieving brain-level functioning dooms the field of "artificial intelligence" to wallowing in the mire.
Al: Interesting perspective. I just started a graduate program in AI with an interest in just what you described, artificial general intelligence.
Having just entered the community a year ago, I would agree with your assessment that not many people are trying to accomplish the original goals of the community.
There is one primary reason for this. When researchers first made attempts at creating an artificial intelligence, they appeared to make drastic progress. They created systems for proving theorems, playing games, answering standardized test questions, etc. This led these very same researchers to make bold promises, such as systems to translate natural language, autonomous agents, machines that could pass a Turing test, etc. When large amounts of government funding were given to the community, they realized that these goals were much harder than they appeared, and little progress was made.
Then the Lighthill Report came out in the 1970s (video of the debate led by Lighthill). It basically highlighted how little progress had been made, and funding to the programs was cut.
This history is ingrained in the back of every AI researcher's mind. I think that researchers are hesitant to make the same mistake of pursuing apparently easy goals that are actually quite hard (some would even go so far as to say impossible).
Thus, enter the age of expert systems (Wired's essay on the AI revolution) -- systems which are so constrained that their successful development is nearly guaranteed.
I believe for the most part that this is totally ridiculous. We need to get over our fear as a community and start tackling the problems originally proposed by the founders of the field. I'm interested in pursuing this goal, but all of my professors and advisors look at me as some naive graduate student who obviously doesn't know better. But I share this opinion with many of my fellow AI grad students, so I think a new generation is coming.
That being said, have hope. There are people interested and actively working on this. My research now is on computational models of cognition and it is quite close to some of these original goals.
Good post. I'll be reading your blog more often.
Thanks for an insightful comment, Christopher. Good luck in your studies. The challenge is huge, but the goal is worthy.