04 October 2012

Artificial Intelligence Needs a New Philosophical Foundation

Artificial intelligence has turned into something of a laggard and a laughingstock in the cognitive science community. Human-level AI always seems to be "10 to 20 years away," and has been for most of the past 60+ years. Oxford physicist David Deutsch thinks it is long past time for AI to be built upon a better philosophical foundation and superstructure, which is going to require paying much closer attention to what human-level intelligence actually is.
The brain is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances.

But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially – the field of "artificial general intelligence" or AGI – has made no progress whatever during the entire six decades of its existence.

Despite this long record of failure, AGI must be possible. That is because of a deep property of the laws of physics, namely the universality of computation. It entails that everything that the laws of physics require physical objects to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory. _David Deutsch
Some of you may have caught one of Deutsch's logical errors in the paragraph just above. By invoking "the universality of computation," Deutsch falls into the "algorithmic trap" -- the leap from "brain activity can in principle be computed" to "intelligence is therefore an algorithm we can expect to write down." He is in very good company, but by falling into such an elementary trap so early in his essay, he is already on the way to failure.
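To be clear about what Deutsch is invoking: universality means that one fixed program can emulate any other computation, given a description of it as data. Here is a minimal sketch in Python -- the toy machine below is our own illustration, not anything from Deutsch's essay:

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Simulate an arbitrary single-tape Turing machine.

    rules maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right). The machine stops in state "halt".
    """
    cells = dict(enumerate(tape))   # sparse tape; unwritten cells read as blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# Toy machine: append one '1' to a unary string.
rules = {
    ("start", "1"): ("start", "1", +1),   # scan right over the 1s
    ("start", "_"): ("halt", "1", +1),    # write a 1 at the end and halt
}

print(run_turing_machine(rules, "111"))   # -> "1111"

The simulator is "universal" in the relevant sense: feed it any rule table and it will emulate that machine. Notice, though, that universality only guarantees that a suitable program exists somewhere in the space of programs. It says nothing about how to find the one that thinks -- and that is precisely the gap the "algorithmic trap" papers over.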
...why has the field not progressed? In my view it is because, as an unknown sage once remarked, "it ain't what we don't know that causes trouble, it's what we know that just ain't so." I cannot think of any other significant field of knowledge where the prevailing wisdom, not only in society at large but among experts, is so beset with entrenched, overlapping, fundamental errors. Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough.

In 1950, Alan Turing expected that by the year 2000, "one will be able to speak of machines thinking without expecting to be contradicted." In 1968, Arthur C Clarke expected it by 2001. Yet today, in 2012, no one is any better at programming an AGI than Turing himself would have been.

...Some have suggested that the brain uses quantum computation, or even hyper-quantum computation relying on as-yet-unknown physics beyond quantum theory, and that this explains the failure to create AGI on existing computers. Explaining why I, and most researchers in the quantum theory of computation, disagree that that is a plausible source of the human brain's unique functionality is beyond the scope of this article.

...The lack of progress in AGI is due to a severe log jam of misconceptions. Without Popperian epistemology, one cannot even begin to guess what detailed functionality must be achieved to make an AGI. And Popperian epistemology is not widely known, let alone understood well enough to be applied. Thinking of an AGI as a machine for translating experiences, rewards and punishments into ideas (or worse, just into behaviours) is like trying to cure infectious diseases by balancing bodily humours: futile because it is rooted in an archaic and wildly mistaken world view.

Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose "thinking" is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely creativity.

Clearing this log jam will not, by itself, provide the answer. Yet the answer, conceived in those terms, cannot be all that difficult. For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever. _David Deutsch
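An aside on the world view Deutsch dismisses: "translating experiences, rewards and punishments into behaviours" is, in modern terms, the reinforcement-learning loop. A minimal sketch of that loop -- a toy multi-armed "bandit" agent of our own devising, included only to make the criticized picture concrete:

import random

# Epsilon-greedy bandit: rewards and punishments translated directly
# into behaviour. The payoff values and parameters are illustrative.
ARMS = 3
true_payoffs = [0.2, 0.5, 0.8]   # hidden reward probability per action
estimates = [0.0] * ARMS          # agent's learned value per action
counts = [0] * ARMS
epsilon = 0.1                     # exploration rate

for step in range(10000):
    # Explore occasionally; otherwise exploit the best-looking action.
    if random.random() < epsilon:
        action = random.randrange(ARMS)
    else:
        action = max(range(ARMS), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    # Incrementally average the rewards observed for this action.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # converges near the hidden payoffs

A program like this will reliably shape its behaviour toward whatever reward signal it is handed, but it never produces the explanatory ideas that, on Deutsch's account, define a person. That is exactly his complaint.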

Deutsch's essay above illustrates something important about human intelligence: An intelligent person can detect another person's errors much more easily than he can detect his own.

The example that Deutsch uses to illustrate why solving the problem of the qualitative difference of AGI "cannot be all that difficult" is the difference between the DNA of humans and the DNA of chimpanzees. Deutsch claims that the number of differences between the DNA of the two species is "relatively tiny." But that is wrong. Even the oft-cited figure of roughly 1 to 2 percent sequence divergence amounts to tens of millions of base-pair differences, before counting insertions, deletions, and differences in gene regulation.

Human genetics is not even close to understanding the differences in genes and gene expression between two humans, much less the differences between genus Homo and genus Pan. Deutsch is minimizing the extent of a problem that is still poorly defined. That error is peripheral to his argument, but it is a good example of the human tendency to gloss over problems that lie far outside one's own specialty.

Deutsch is absolutely correct that human-level AI is qualitatively different from anything that has yet been conceived -- or at least published -- by AI researchers. And he is correct that the problem requires an entirely new philosophical approach -- perhaps many new approaches -- to capture the problem well enough to evolve a working AGI.

Deutsch alludes to Popperian epistemology, which is key to the philosophy of science. As Popper once stated in the first Darwin Lecture at Darwin College, Cambridge:
My position, very briefly, is this. I am on the side of science and of rationality, but I am against those exaggerated claims for science that have sometimes been, rightly, denounced as "scientism". I am on the side of the search for truth, of intellectual daring in the search for truth; but I am against intellectual arrogance, and especially against the misconceived claim that we have the truth in our pockets, or that we can approach certainty. _Karl Popper

In his essay, Deutsch provides several important insights into the mistakes people make when thinking about and discussing AI. Even in his own basic misconceptions, Deutsch illustrates his cautions and criticisms quite well, inadvertently reinforcing his underlying argument that the AI enterprise needs a new philosophical underpinning.

When it comes to humans, we can usually safely say "everything you think you know just ain't so." What you think you know may have more or less in common with reality -- but usually much less. So it is with the enterprise of artificial intelligence and the attempt to reasonably emulate human intelligence in a machine. When one starts out with the wrong assumptions and premises, it doesn't take long to become very badly lost in the woods. A display of intellectual arrogance only makes one's "lostness" all the more absurd.

Deutsch understands this, and helps to flag the problem so that other thinkers can provide partial solutions. Perhaps sometime in the future, either humans or machines can take these partial solutions and assemble a suitable workaround.


4 Comments:

Blogger MnMark said...

I'm going to suggest an entirely different explanation for human intelligence: intelligence is a function of consciousness, and consciousness is a non-physical "thing" that uses a brain and a body to interact in this physical plane. The reality of this is discoverable through meditation techniques and other similar esoteric practices. We human beings are consciousnesses that use these bodies from birth until death, and continue to exist and grow after the death of these bodies. We live lives through many bodies until we have learned the things we came here to learn, and then we don't incarnate here anymore.

And all forms of life have consciousness, though its expression here in this physical world serves purposes different from those of human consciousness.

This is why science will never make a machine that duplicates human intelligence and creativity. It's like studying a car to try to understand the drivers of cars. You are confusing the steering wheel, gas pedal, brakes, etc., with the driver. You're making the assumption that there are only cars, no such thing as drivers, and that if you just understand cars well enough you will be able to create a model of a car that drives itself intelligently.

This is the fundamental misunderstanding that is similar to trying to treat illness by treating "ill humours": the belief that consciousness is a function of the physical matter of the brain rather than an incredible sort of energy (which we may never understand, given that we are that energy) that uses the physical brain for expression.

I understand the skepticism of materialists who want scientific proof of something that is non-physical and thus unprovable, but all I can say is that if they will seriously take up the meditative disciplines they will 'see' for themselves.

Thursday, 04 October, 2012  
Blogger Bearhawk said...

I often think about how I solve "hard" problems. Sometimes it is deliberate: I can sit and bull my way through, understand the constraints, and work within those parameters. The more elegant solutions, however, tend to come after I've stepped away from the desk. Sometimes it coalesces as I verbalize & consequently organize my thoughts around literally articulating the problem. More interesting to me is when the Eureka moment happens, typically in the morning after sleeping on it, or in the shower with the water splashing down. I can't write it down or sketch it fast enough! The elegant solutions seem a lot more nonlinear; I can't quite call it quantum problem-solving, but it feels like the usual steps of progressive logic get skipped. I can go back and fill those in afterwards, but they aren't there with the solution initially.

Thursday, 04 October, 2012  
Blogger Eric said...

re Dreams: When we sleep, our grasp on reality softens. You can still think and picture things -- say, interacting with a horse talking on a cellphone about the presidential election. But the fact that there is a talking horse holding the cellphone doesn't usually surprise you.

Most AI researchers aren't touching the Psi problem, but there isn't any reason to assume it cannot be recreated or imbued into things.

re original article:

A great deal of funding for AI-style research has come from DARPA in one form or another. What does DARPA focus on? The future soldier program, part of which is a language translator. That means computational linguistics. What else? Intelligence analysis. That means Bayesian reasoning. They don't do this to create an AGI; they do it to create better tools. Tools are valued more highly than intelligence.

Even companies like Google, with founders who are obsessed with AI, end up focusing on many things other than "pure" AGI research:
http://www.artificialbrains.com/google-x-lab#founder-quotes-on-AI

(By the way, you can see their statements that they are focused not on algorithms or parallelism, but on raw computation)

Here's an open source AGI approach; it glues together lots of different algorithms and mimics the brain's massive parallelism:
http://www.youtube.com/watch?v=x18yaOXBSQA

Thursday, 04 October, 2012  
Blogger neil craig said...

I'm not sure the AI researchers are trying to produce artificial intelligence so much as to produce artificial human-style intelligence.

There are some things at which computers are more intelligent than us (playing chess, calculating orbits) and some at which tigers are (sneaking up on deer, and indeed humans). In many ways autism appears to be a human imitation of the sort of intelligence computers display.

Friday, 05 October, 2012  
