Human vs. Computer: How About a Real Challenge?
Update 18Feb2011: AI researcher Ben Goertzel presents some thoughts on Watson's ascendancy at H+ Magazine online. H/T Brian Wang
Much has been made about the recently televised Jeopardy victory by IBM's Watson, and earlier victories and impressive performances by specialised chess computers such as IBM's Deep Blue. Even in poker, computers are increasingly seen as threats to human dominance, thanks to clever human programmers. In the game world, Go remains the game where computers have not come close to expert human play.
What do these massive, power-gobbling, ultra-pampered, spoon-fed, one-trick-pony game-playing supercomputers tell us about the human vs. computer rivalry? Realistically, the rivalry is still human vs. human, with one team of humans utilising ultra-fast electronic devices to store, "analyse", and retrieve massive quantities of data in order to gang up on a single human opponent.
Watson is certainly faster to the buzzer than its human opponents, but we already knew that electrons are faster than nerves. Once the "natural language processing" trick was mastered, Watson could readily lock its opponents out from responding -- even when they both clearly knew the answer.
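The size of that buzzer advantage can be illustrated with a rough simulation. The latency figures below are assumptions chosen for illustration (a few milliseconds of electronic signal delay vs. a human reaction time averaging around 200 ms), not measured values from the show:

```python
import random

def buzz_race(trials=10_000):
    """Simulate buzzer races between an electronic player with a small,
    fixed signal latency and two humans with variable reaction times."""
    machine_wins = 0
    for _ in range(trials):
        machine = 0.010  # assumed ~10 ms electronic latency
        # Assumed human reaction: mean 200 ms, std. dev. 50 ms;
        # the machine must beat the faster of the two humans.
        human = min(random.gauss(0.200, 0.050) for _ in range(2))
        if machine < human:
            machine_wins += 1
    return machine_wins / trials

print(f"Machine wins the buzzer about {buzz_race():.0%} of the time")
```

Under these assumptions the machine wins essentially every race -- which is the point: no amount of human knowledge helps if you can never ring in first.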
But Watson could not drive itself home after the game, could not flush away its own waste heat, could not feed itself, and so on. In the end, Watson is a very expensive gimmick which served as a showcase for solutions to various specialised programming problems.
Perhaps if Watson could master all the games mentioned above at once, and defeat experts in all of them, it would be impressive as a game-player. But not really. Look at all the money, mass, and energy tied up in the junkpile called Watson. How would IBM make it capable of playing multiple games? By throwing more mass, money, and energy into the already-huge junkpile. Not very clever, really, compared with the three-pound human brain and all the things it can do -- including designing, building, repairing, and programming "smart" computers.
All of this points to the fact that the state of artificial intelligence is pretty pathetic. Despite over 60 years of promises to create human-level intelligence "within 10 years", AI still stinks badly, and promises more of the same into the foreseeable future.
The Jeopardy challenge -- like all similar challenges -- was a huge and expensive publicity hullabaloo. It is quite likely to damage the Jeopardy brand in the long run. It certainly puts forth an entirely false idea about the modern capability of computers, vis-a-vis humans, to reason and make decisions.
What would be a real challenge for Watson? How about a spontaneous, unplanned race over an extensive, novel, 3-D obstacle course with ladders, walls, tunnels, slides, sand, and foot-deep water traps -- against a five-year-old human child?
Let's face it: Modern life requires humans to overtly or covertly (via proxies) partner with computers to achieve optimum performance in large areas of our lives. But what will it take to get computers to the point where they are consciously setting the agenda for humans, rather than the other way around?
It will take an entirely different "substrate of thought" from the high-speed digital architectures currently used to such great -- if ultra-specialised -- effect. Worse, modern AI researchers for the most part have no idea what form such a new substrate would take. Certainly they do not understand the substrate of the only proof of concept of conscious intelligence which currently exists -- the human brain.
Too much like robots themselves, too many AI researchers unwittingly plod along artificial pathways leading to nowhere but diminutive local optima. Watson is only one illustration of the kludgy phenomenon.
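The "local optimum" trap is easy to demonstrate: a greedy hill-climber on a landscape with two peaks settles on whichever peak is nearest, not the best one. A minimal sketch (the landscape function is an arbitrary illustration):

```python
import math

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy ascent: move only while a neighbouring point is strictly better."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if f(best) <= f(x):
            break  # no neighbour improves: stuck on a (possibly local) peak
        x = best
    return x

# A two-peaked landscape: a small peak near x = 1, a tall one near x = 4.
f = lambda x: math.exp(-(x - 1) ** 2) + 3 * math.exp(-(x - 4) ** 2)

print(round(hill_climb(f, 0.0), 1))  # settles on the nearby small peak, ~1.0
print(round(hill_climb(f, 5.0), 1))  # only a different start finds the tall peak, ~4.0
```

A researcher who only ever refines the current approach is the climber starting at zero: diligent progress, wrong peak. Escaping requires jumping to a different starting point -- a different approach -- not climbing harder.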
What will it take, and how long will it take to discover it? There are limits to pure reason and speculation. Experimentation is necessary. Hands must be dirtied and hypotheses must be generated and tested. For the luggiest of lugheads out there, we need much better challenges than chess, Jeopardy -- or even Go -- to spur the effort required.
Labels: artificial intelligence, Blue Brain
14 Comments:
I agree. I think A.I. has a long way to go before it becomes real.
You're probably slightly harsh there, Al. Answering questions on Jeopardy! is a significantly more general task than playing chess.
Then again, computers don't really know how to form concepts and reason with them yet. But to say that
"It will take an entirely new "substrate of thought" than the high speed digital architectures currently used to such great -- if ultra-specialised -- effect. "
is probably an exaggeration.
Dave: Sometimes there is good reason to be harsh. In my opinion, this is one of those times.
Of course, we can always give AI a pat on the back, and wake up 60 years from now wondering why human-level AI has still not appeared.
Kurt: "a long way to go . . ." can take on different meanings. Sometimes you are on the right road, and just need to keep going the same direction for "a long way."
Other times, you are not only on the wrong road, but on the wrong planet or in the wrong galaxy. In cases such as that, a change of approach is necessary.
The problem of conscious intelligence is a lot harder than most casual observers and AI people seem to think. But the reason it's so hard is because the approach is so completely wrong. Change the approach -- the substrate -- and it gets easier.
As long as we are impressed by gimmicks like Watson, we deserve to be flim-flammed.
Honestly, until supercomputers surpass the computational power of the brain, we can't be disappointed. It looks like Watson is still about 100 times slower than the brain (http://www.hpcwire.com/features/Must-See-TV-IBM-Watson-Heads-for-Jeopardy-Showdown-115684499.html).
If we get to the point where every cell phone has the computational power of the brain and we still aren't making progress, that'd be a problem.
I'll tell you what's encouraging: people like Jeff Hawkins complaining that they need more computational power. This implies that their approach is similarly complex to brain processes.
If Watson thought that Toronto was in the US in that Final Jeopardy match, then it has a long way to go before it can differentiate between process-of-elimination common-knowledge reasoning and brute-force searching through databases for an obscure fact that only an encyclopedia would know. It's just a glorified calculator! Even advancing multi-core technology is pretty much wasted, as it's like having a factory capable of filling 1000 coke bottles that is only used to fill 10. Speed and multitasking don't equal sentience. I'm not worried that robots will destroy us, but that smart people who happen to be idiots will.
I've heard that even Google has some pretty groovy AI cooking inside it, but the reason they are not handing the keys over to it is the "catastrophic failures", like the Toronto-in-the-US example.
Nathan: The developers have explained this. Watson largely ignores hints in the category because they are sometimes irrelevant or confusing. Also, Watson's second choice was Chicago, and while Watson only reads out its top answer in Jeopardy game mode, its core is still a search engine.
Dave: Just matching the brain's computing power is not enough; you need some software to run on it too. And while I'm convinced a neuronal-simulation AI-equivalent will eventually arise, I'm quite certain it could be achieved in some more elegant manner before that.
al fin: While Watson is hardly much of an AI, it is still quite impressive and puts on a good show, especially for the layperson not all that much into AI. I would argue that is a good thing, as it creates an optimistic view of AI and may well funnel more funding into the field, including those projects that try a new or different approach. And while this could of course lead to a new AI bubble, I'd like to believe that the state of technology and the previous lessons of AI failure would lead to actual practical results this time around.
oh come on (al).
You *are* being way too harsh here, way too harsh. It's a cliché, but assuming no catastrophic failure, exponential laws should take the price of Watson down by a factor of 100 in 10 years' time. A billion dollars today? Ten million dollars in 10 years, a hundred thousand in 20.
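The commenter's arithmetic can be checked directly. Assuming a steady factor-of-100 cost decline per decade (the commenter's assumption, not a measured trend), the projection is a simple exponential:

```python
def projected_cost(cost_now, years, factor_per_decade=100):
    """Project cost under a steady exponential decline.

    factor_per_decade is an assumption taken from the comment above,
    not an empirical hardware-cost figure.
    """
    return cost_now / factor_per_decade ** (years / 10)

cost = 1_000_000_000  # the commenter's "billion dollars today"
print(f"10 years: ${projected_cost(cost, 10):,.0f}")
print(f"20 years: ${projected_cost(cost, 20):,.0f}")
```

The figures do come out to $10 million and $100 thousand, so the arithmetic holds; whether the assumed decline rate holds is the real question.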
Given special-purpose circuits (i.e., etching Watson's algorithms into special-purpose chips), the price/performance ratio should fall much faster. Everything I've seen and read suggests it.
The important thing was to get the algorithm correct, and they are doing that in spades as far as I can see. After that it's a simple matter of implementation details.
Ed
I'm familiar with computer algorithms, but as far as I can see, algorithms have very little to do with human consciousness or general purpose intelligence.
In my experience, anyone who uses the term "algorithm" to describe a useful approach to general purpose intelligence, is barking up the wrong tree.
Generic computing power as determined by computational problem-solving speed, likewise has little to do with human consciousness.
That is what is so hard to get through the heads of people approaching AI from a computer science viewpoint -- whether by training or by reading about AI secondhand.
If it were a matter of computer power, you could hook a dozen Watsons together and create a superhuman AI. No such luck.
Unfortunately, the problem is no more soluble from the viewpoint of neuroscience. Right substrate, but wrong level of logic.
Sophisticated neuroimaging technologies are beginning to point toward an approach which may be workable over the long run.
Remember: just because basic thinking seems easy to you doesn't mean that it really is easy for anything except a human brain -- no matter how sophisticated the machine seems, or how many "impressive" things it can do that seem hard to you.
People are running the AI show, not computers. There are no popular approaches to AI at this point in time which are likely to change that balance of power.
Exponential improvements in computing speed or reductions in cost are not the answer, regardless of what Ray Kurzweil suggests. There is always a wall to be run into eventually when you depend upon speed.
Al,
The point is that past a certain threshold, it doesn't matter whether or not the programming denotes intelligence; what matters is the results.
Picture a generic computer, 20 years from now, one that has:
1. Chinook's descendant
2. Deep Fritz's descendant
3. MoGo's descendant
4. TD-Gammon's descendant
5. Dragon NaturallySpeaking's descendant
6. a series of computer vision modules
7. Watson's descendant
8. all packaged in a Heartland Robotics descendant's body.
etc. etc. etc. It has thousands of cores and is multi-tasking, with a CPU drawing orders of magnitude less power than today's.
Does it matter that it's not fully aware or conscious? At some point, it probably could outperform humans at every 'lower level' task (any board game, any sport) by switching software.
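The "switching software" idea is really just task dispatch: a controller that routes each task to a narrow, specialised module, with no general intelligence underneath. A toy sketch (module names and behaviours are placeholders, not real systems):

```python
class ModularAgent:
    """Routes each task to a narrow, specialised solver.

    Competent across many domains without any general intelligence:
    the gaps between registered modules are exactly where generality
    would have to live.
    """
    def __init__(self):
        self.modules = {}

    def register(self, task, solver):
        self.modules[task] = solver

    def perform(self, task, *args):
        if task not in self.modules:
            return "no module available"
        return self.modules[task](*args)

agent = ModularAgent()
agent.register("checkers", lambda board: "best move")          # Chinook-style
agent.register("chess", lambda board: "best move")             # Deep Fritz-style
agent.register("trivia", lambda clue: "most probable answer")  # Watson-style

print(agent.perform("chess", "starting position"))
print(agent.perform("tie your shoes"))  # falls through: no module available
```

The agent "outperforms" within every registered domain and fails instantly outside them -- which is the crux of the disagreement between Ed and the post's author.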
It may not be self-aware, but the people behind it - and who direct its efforts - certainly are, and that combination is going to be exceedingly more potent than what we have now.
Decision making will become more and more computerized, and ultimately what is making the decision will become somewhat irrelevant, because the decisions, even though made by humans, could not have been made without machine help.
BTW - that's why I sort of feel sorry for those terrorist idiots in Pakistan or Afghanistan. They have *no idea* what's coming. After a certain threshold of CPU price/performance is realized, drones will become smaller and smaller, and decision making on whether to 'take out' a given target will become decentralized and move to them.
Does it really matter that those drones aren't 'really' intelligent and only have modules for image detection and audio matching and not the whole shebang? No (actually, probably a good thing). And the terrorists are just as dead in any case.
"but that smart people who happen to be idiots will."
WELL PUT.
Watson beats me at Jeopardy, but so too do badly coded tic-tac-toe programs, even without me losing patience.
But the thing is the questions I'd provide couldn't even be followed by humans, aliens, or even god himself. Yet they'd be extraordinarily relevant, my associative abilities are on peak human, some would say mind-boggling levels of ability.
A:
The game of jeopardy.
Q:
It fails me because I'm so good at Jeopardy that Watson beats me, and even a novice would beat me at Jeopardy. But why is it so?
Because it is just a game, that's why, just like the computer is just a tool.
So what?
I play single player jeopardy with my self?
YES. OR no?
Some good comments. Thanks to all. I do not intend to be harsh toward commenters.
Machine intelligence, like most important topics, deserves better than most of the best that top-level humans have applied toward it so far. Institutional groupthink and associational groupthink are largely responsible for that.
horos22: Yes, that is pretty much how it looks to me right now.
DP: In Jeopardy, as in life, the question is the answer.
Al,
No offense, but I don't think that the issue was you being harsh at the commenters, I think it was you being harsh at your assessment of AI.
I ultimately *do* think we will reach the threshold of true AI. No, Watson isn't the answer, but it is at least part of the answer -- part of the path toward getting there.
The 'ultra-pampered, spoon-fed monstrosity' that you see today is likely to be a $199.00 Best Buy special tomorrow, and -- if used right -- we'll all be the better for it.
Ed
Thanks, Ed. I understand your point of view, and even felt the same way once.
As horos22 points out, even though AI will not likely achieve human-level general intelligence within the lifetime of Kurzweil or most current readers of Kurzweil, advances in AGI will put a lot more power into the hands of governments and savvier persons in the private sector. That can be good as long as this brave new symbiosis is directed toward an open and abundant future.
On the darker side, definitions of "terrorist" or "enemies of the state" are likely to evolve over time, until someone you know -- even someone very close to you -- may find themselves in the crosshairs of AI weapons systems, of either lethal or non-lethal types.
These powerful new AI-enabled, quasi-autonomous weapons systems can be incredible "force-extenders" for anyone who happens to control them, against anyone who happens to irritate the persons in control.
No need to consider what happens if a New Symbionese Liberation Army or the like gets its hands on Watson XC.
“During times of universal deceit, telling the truth becomes a revolutionary act” -- George Orwell