08 September 2009

How is Higher Reasoning Like a Bowel Movement?

Or, Why Computers Can Never Think, Since They Have No Guts

The crux of the matter is the type of reasoning that people use, compared to the type of reasoning that computers use. Computers use deductive reasoning almost exclusively. Humans, on the other hand, use a wide array of reasoning strategies -- including deduction. Humans unconsciously use inference and induction to build their beliefs, prejudices, and modes of operation. Humans use their entire bodies to think, not just their brains.

From their first breath (and earlier), humans use their body sensations to build inferences about the world around them. At first, the child is aware only of internal sensations and the feeling of body functions.
Lakoff and Nunez (1997, 2000) proposed a theory of embodied learning in mathematics where cognition is situated in the mind and developed through psychological and biological processes. Grounding and linking metaphors support the development of schema. These are influenced both by the body and the environment and develop understanding of mathematical ideas... Mathematics is viewed as human imagination where mathematical reasoning is based on bodily experiences (Johnson, 1987). _Carol Murphy, University of Exeter (PDF)
Human thought is "grounded" to body sensations and "gut feelings" via "grounding metaphors." These metaphors are not based upon language, since they are laid down long before the infant has begun to learn language. Later, observations of his own and others' movements are integrated into the child's inferential database. Then, as he learns language, the unconscious metaphors take on a more formal aspect, one that can be analysed using linguistic theory.

You may be starting to see how radically different basic human reasoning is from the formal deductive logic that computer codes and machines incorporate and use.

Let's look at inductive reasoning.
How do humans reason in situations that are complicated or ill-defined? Modern psychology tells us that as humans we are only moderately good at deductive logic, and we make only moderate use of it. But we are superb at seeing or recognizing or matching patterns—behaviors that confer obvious evolutionary benefits. In problems of complication then, we look for patterns; and we simplify the problem by using these to construct temporary internal models or hypotheses or schemata to work with. We carry out localized deductions based on our current hypotheses and act on them. And, as feedback from the environment comes in, we may strengthen or weaken our beliefs in our current hypotheses, discarding some when they cease to perform, and replacing them as needed with new ones. In other words, where we cannot fully reason or lack full definition of the problem, we use simple models to fill the gaps in our understanding. Such behavior is inductive. _Inductive Reasoning, W. Brian Arthur, Santa Fe Institute (PDF), via SimoleonSense
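Arthur's loop is simple enough to caricature in a few lines of code. Here is a minimal sketch in Python, purely illustrative -- the hypotheses, the scoring rule, and the toy sequence are my own inventions, not anything from Arthur's paper:

# A toy version of the inductive loop described above: keep a pool of
# competing hypotheses (simple predictors), act on whichever currently
# scores best, and strengthen or weaken each one as feedback arrives.

class Hypothesis:
    def __init__(self, name, predict):
        self.name = name
        self.predict = predict   # function: history -> predicted next value
        self.score = 0.0         # running credibility

hypotheses = [
    Hypothesis("same-as-last", lambda h: h[-1]),
    Hypothesis("two-period-cycle", lambda h: h[-2]),
    Hypothesis("always-up", lambda h: h[-1] + 1),
]

history = [0, 1, 0, 1]           # the sequence observed so far

def step(observed_next):
    """Score every hypothesis against the new observation, then act on the best one."""
    for hyp in hypotheses:
        guess = hyp.predict(history)
        # strengthen on a hit, weaken on a miss, with a little forgetting
        hyp.score = 0.9 * hyp.score + (1.0 if guess == observed_next else -1.0)
    history.append(observed_next)
    best = max(hypotheses, key=lambda hyp: hyp.score)
    return best.name

for nxt in [0, 1, 0, 1, 1]:
    print(step(nxt))             # prints whichever hypothesis currently scores best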
Some modern efforts to emulate human cognition in machines are making use of "pattern matching" algorithms and neural nets, with some limited success. Pattern matching is a very basic thinking strategy, one that also underlies a closely related strategy -- analogy. Computers can use these primitive strategies to "self-expand" and "self-organise" a database.
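In toy form, that kind of "self-expanding" pattern matching might look something like the following nearest-neighbour sketch -- the features, labels, and distance measure are made up purely for illustration:

# A toy pattern matcher: classify a new item by its nearest stored example,
# then file it away -- a crude "self-expanding" database of patterns.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

memory = [((0.9, 0.1), "bright"), ((0.1, 0.8), "dark")]   # (feature vector, label)

def classify_and_store(features):
    _, label = min(memory, key=lambda entry: distance(entry[0], features))
    memory.append((features, label))    # the database grows with every new case
    return label

print(classify_and_store((0.8, 0.2)))   # -> "bright"
print(len(memory))                      # -> 3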

These simple strategies can be augmented with Bayesian inference modules, in an attempt to maintain inferential discipline, as it were.
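The Bayesian piece is the easiest to show. Here is a minimal sketch of such an update, with arbitrary likelihood numbers chosen only to show a belief being disciplined by evidence rather than piling up unchecked:

# A minimal Bayesian update: revise belief in a hypothesis as each piece of
# evidence arrives, instead of letting inferences accumulate unchecked.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) given P(hypothesis) and the two likelihoods."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / evidence

belief = 0.5                            # start out undecided
for _ in range(3):                      # three pieces of mildly confirming evidence
    belief = bayes_update(belief, likelihood_if_true=0.8, likelihood_if_false=0.3)
    print(f"{belief:.3f}")              # 0.727, 0.877, 0.950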

But in the absence of "grounding metaphors", imaginative and intuitive computers will simply not know when to stop. Available memory will fill up with junk, and the system will crash. Computer programmers and designers simply cannot take that chance, for now.

Humans have bodily needs and bodily functions. If these needs and functions are not accommodated, the system may crash. This is why body states are monitored so closely, and take on such a central importance from the earliest moment of a human being's existence. Human cognition must be grounded in the body, or the body may stop functioning. The "grounding metaphors" are hardwired into the system, and everything else is kludged around them.

Artificial Intelligence researchers sometimes imagine that they can dispense with all the "legacy microcode" and create a cognition from first principles. Back in the 1950s, they predicted human-level computation within a decade or two. By the 1980s and 1990s, AI people were beginning to acquire a bit of sensible caution -- although by and large they had still not figured out what was frustrating their goals. They had learned that pure logic programming, e.g. Prolog, was not going to work on its own, but why in blazes couldn't they make more progress with LISP?

Then along came connectionism and neural nets, followed by genetic algorithms and fuzzy logic. All very useful, and all finding places in the overall strategy of understanding natural and artificial cognition. Neural nets could learn to match patterns, genetic algorithms could learn to optimise designs and strategies, and fuzzy logic could function effortlessly in environments that would crash formal deductive systems.
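Fuzzy logic, for example, replaces the crisp true/false of deductive systems with degrees of membership. A toy fuzzy controller, with invented membership ranges and rules, just to show the flavour:

# A toy fuzzy controller: "warm" is a matter of degree rather than a crisp
# true/false, and the two rules blend in proportion to that degree.

def membership_warm(temp_c):
    """Degree (0..1) to which a temperature counts as 'warm' (ramp from 15 to 30 C)."""
    if temp_c <= 15:
        return 0.0
    if temp_c >= 30:
        return 1.0
    return (temp_c - 15) / 15.0

def fan_speed(temp_c):
    """Blend two rules: 'if warm, run fast' and 'if not warm, run slow'."""
    warm = membership_warm(temp_c)
    return warm * 100.0 + (1.0 - warm) * 20.0    # percent of full speed

for t in (10, 20, 28):
    print(t, round(fan_speed(t), 1))             # 10 -> 20.0, 20 -> 46.7, 28 -> 89.3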

So what is the problem? If Henry Markram can say with a straight face that the human brain could be replicated within 10 years, given enough funding, why are investors not beating a path to the door of the Brain Mind Institute?

Because these "10 year" predictions are very easy to make, but most of them do not pan out. And because Markram is the only one making such grandiose claims. Jeff Hawkins thinks he can achieve great things within 10 years, but he hasn't claimed the ability to replicate the human brain. Of course, if Markram could "replicate" a rat brain within 10 years, his fame and fortune would be assured. Provided, of course, that the resulting "brain" occupied a space no larger than an iPod. ;-)

Machine intelligence projects have created some amazing software. You can expect to experience much more amazement from that direction. But will computers actually think in the way we think of thinking?

Yes. But it will require a central grounding mechanism that forms a "sticky core" to which the different "thinking strategies" can link. The thinking machine will need better ways of keeping things real. It needs to have gut feelings.

The machine does not need a real body, but it can learn to feel as if it has a body. Its body might be an airplane or a train. Or it might be a neighborhood or an entire city. Or perhaps an electrical grid or a space elevator or space station. Any of these dynamic systems could play the part of an intelligent machine's body for the purposes of grounding. A complex simulation is the most logical starting point.

At that point, the thinking machine would have ways to test its models and inferences against "real" consequences.
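Nobody yet knows what such a grounding core would look like in practice, but the idea can be caricatured. In the deliberately naive sketch below, the "body" variables, thresholds, and costs are invented for illustration only:

# A deliberately naive "sticky core": reasoning modules may keep speculating
# only while a grounding core, watching simulated body states, permits it.

class GroundingCore:
    def __init__(self):
        self.body = {"power": 1.0, "temperature": 35.0}   # simulated, not real sensors

    def spend(self, power_drain, heat):
        self.body["power"] -= power_drain
        self.body["temperature"] += heat

    def urgency(self):
        """A crude 'gut feeling': how badly the body needs attention, 0..1."""
        low_power = max(0.0, 0.5 - self.body["power"]) * 2.0
        overheat = max(0.0, self.body["temperature"] - 60.0) / 40.0
        return min(1.0, max(low_power, overheat))

def speculate(core):
    """Elaborate hypotheses only while the grounding core tolerates the cost."""
    depth = 0
    while core.urgency() < 0.5:
        depth += 1
        core.spend(power_drain=0.05, heat=2.0)   # thinking consumes simulated resources
    return depth                                  # how far speculation got before being reined in

print("speculation depth before the gut said stop:", speculate(GroundingCore()))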



3 Comments:

Blogger Michael Anissimov said...

I generally agree, but you probably could have said this in one or two sentences, i.e., "Lakoff's work on body-based metaphors is probably applicable to AI."

Wednesday, 09 September, 2009  
Blogger al fin said...

True.

But unfortunately, most people are not familiar with Lakoff and Johnson's work on body-based metaphors.

Lakoff and Johnson have not pursued the matter to the pre-linguistic level, where it is most pertinent to the AI effort.

The centrality of pre-linguistic metaphor to subconscious inference, induction, and model-building needs much more emphasis.

Wednesday, 09 September, 2009  
Blogger BrianSJ said...

There was once a brief seminar with the title "Can a machine be intelligent if it doesn't give a damn?"
The implications of AI's Cartesian approach are fundamental. Antonio Damasio's 'Descartes' Error' is only part of the problem for the brain as a computer.

Wednesday, 16 September, 2009  



