04 December 2006

Thinking, Feeling Machines--Clean Your House, and More

Modern household robots might be able to vacuum your carpet, but can they pick up empty pizza cartons and assemble your Ikea furniture? Soon they may be able to.
Modern microelectronic technology works at amazing speeds and stores increasingly unbelievable quantities of data. But modern digital computers lack "common sense," emotion-based gut feelings, and intuition.

Computer science pioneer Marvin Minsky has some ideas on how one might build emotions into computers to enhance their range of abilities.

Q So here you are, a pioneer of artificial intelligence, writing a book about emotions. What's going on?

A Somehow, most theories of how the mind works have gotten confused by trying to divide the mind in a simple way.

My view is that the reason we're so good at things is not that we have the best way but because we have so many ways, so when any one of them fails, you can switch to another way of thinking. So instead of thinking of the mind as basically a rational process which is distorted by emotion, or colored and made more exciting by emotion -- that's the conventional view -- emotions themselves are different ways to think. Being angry is a very useful way to solve problems, for instance, by intimidating an opponent or getting rid of people who bother you.

The theme of the book is really resourcefulness: why are people so much better at controlling the world than animals? The argument is that they have far more different ways to think than any competitor.
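Minsky's idea of resourcefulness -- keep several ways of thinking on hand and switch when one fails -- maps naturally onto a simple fallback pattern in software. The sketch below is purely illustrative (the "ways of thinking" and the toy problems are invented for the example, not drawn from Minsky's book):

```python
# A minimal sketch of the "many ways to think" idea: try one
# problem-solving strategy, and switch to another when it fails.
# The strategies and problem format here are hypothetical.

def solve_by_lookup(problem):
    known = {"2+2": 4}           # a tiny memory of previously solved cases
    if problem in known:
        return known[problem]
    raise ValueError("no stored answer")

def solve_by_computation(problem):
    a, b = problem.split("+")    # assumes problems look like "x+y"
    return int(a) + int(b)

def resourceful_solve(problem, ways):
    """Try each way of thinking in turn; switch when one fails."""
    for way in ways:
        try:
            return way(problem)
        except Exception:
            continue             # that way failed; try another
    raise RuntimeError("all ways of thinking failed")

# "3+4" is not in memory, so the solver falls back to computation.
print(resourceful_solve("3+4", [solve_by_lookup, solve_by_computation]))
```

The point of the pattern is not any single strategy's cleverness but the switching itself: robustness comes from having alternatives, which is exactly Minsky's argument.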

Q What, then, is the most important thing for us to understand about our own thinking?

A Your mind can work on several levels at once so, when you think about any particular subject, you also can think about the way you've been thinking -- and then use that experience to change yourself. Similarly, when you admire some teacher or leader, you can try to imitate their ways to think -- instead of just learning the things that they say.

Other luminaries of computer science and electronics also hold strong opinions on the possibility of creating a conscious machine. At a recent debate at MIT between Ray Kurzweil and David Gelernter, the audience was treated to two divergent viewpoints from two brilliant veterans of computer science and electronic invention.

"We'll have systems that have the suppleness of the human brain," Kurzweil said, adding that to contemplate how those machines will be developed, it's important to accept that current software and computing power aren't up to the task and that technological advances are necessary first. So, it's important to look out 20 or so years.

Humans will recognize the intelligence of such machines because "the machines will be very clever and they'll get mad at us if we don't," he joked.

Gelernter smiled at that, but he also shook his head. He's not buying it: in his view, any machine that is programmed to mimic human feelings -- which are an aspect of consciousness -- is programmed to lie, because a machine cannot feel what a human feels. That's the case even if the machine seems to be able to "think" like a human.

"It's clear that you don't just think with your brain," he said. "You think with your body."

Kurzweil noted that a computer recently simulated protein folding -- something once believed impossible for a machine to do -- suggesting that it's difficult to predict what machines will be capable of. Gelernter had an answer for that, too: all that happened was the simulation of the folding; the process stopped there.

"You can simulate a rainstorm and nobody gets wet," he said, offering another example.

I understand both points of view, but I suspect that within 20 years, machines will be able to emulate emotions quite well, and not have to "lie" to do it.

It is when the discussion turns to unmanned combat vehicles that many people start to become wary of computers and robots. Nevertheless, research on the development of unmanned combat robots and unmanned combat aerial vehicles (UCAVs) is proceeding apace.

QinetiQ says the architecture under development is intended to allow a single operator to control multiple self-organising UAVs or UCAVs.

The approach is intended to facilitate human-in-the-loop involvement in critical mission phases, such as a ground strike by multiple UCAVs in a combat environment. In a civil scenario, that critical human operator role could involve the positive identification and recovery of a survivor in a search-and-rescue operation carried out by multiple UAVs controlled from a helicopter.

Proving the ability to distribute decision making between the operator and the individual air vehicles to reduce workloads is a key focus of the research programme.

The human-machine interface would be critical in such a cooperative effort, as would the autonomous software built into the unmanned vehicles.
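The core of such an architecture is a gate: routine decisions stay on board each vehicle, while critical actions are deferred to the one human operator. The sketch below is a toy illustration of that division of labour -- not QinetiQ's actual design; all action names and the approval policy are invented for the example:

```python
# Toy sketch of distributing decisions between autonomous vehicles
# and a single human operator: routine actions execute on board,
# while "critical" actions require operator approval. All names
# and the approval policy here are illustrative assumptions.

CRITICAL_ACTIONS = {"ground_strike", "recover_survivor"}

def vehicle_decide(vehicle_id, action, operator_approve):
    """Autonomously execute routine actions; defer critical ones."""
    if action in CRITICAL_ACTIONS:
        if operator_approve(vehicle_id, action):   # human in the loop
            return f"{vehicle_id}: {action} approved and executed"
        return f"{vehicle_id}: {action} denied by operator"
    return f"{vehicle_id}: {action} executed autonomously"

# One operator callback can gate a whole fleet of vehicles.
def operator_approve(vehicle_id, action):
    return action == "recover_survivor"            # stand-in policy

for vid, act in [("uav-1", "adjust_course"),
                 ("uav-2", "recover_survivor"),
                 ("ucav-3", "ground_strike")]:
    print(vehicle_decide(vid, act, operator_approve))
```

Note that the operator's workload scales with the number of critical decisions, not the number of vehicles -- which is the point of proving that decision making can be distributed.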

The advantages humans have over machines at this point derive from the massively parallel architecture of the human brain, from intuitive and emotional thinking, and from the mind-body integration that has evolved over billions of years. Human scientists and inventors are busy attempting to negate those advantages, so that machines can do as much as humans and more -- only faster, more tirelessly, and with greater precision.

Personally, I am in favour of giving humans more advantages--such as even greater intelligence, more physical strength and speed, and longer lives so as to gain more wisdom from greater experience.
