07 December 2009

Artificial Intelligence to Receive a Make-Over

The field of artificial-intelligence research (AI), founded more than 50 years ago, seems to many researchers to have spent much of that time wandering in the wilderness, swapping hugely ambitious goals for a relatively modest set of actual accomplishments. Now, some of the pioneers of the field, joined by later generations of thinkers, are gearing up for a massive “do-over” of the whole idea.

This time, they are determined to get it right — and, with the advantages of hindsight, experience, the rapid growth of new technologies and insights from the new field of computational neuroscience, they think they have a good shot at it. _MachinesLikeUs
As brilliant as many computer scientists and electrical engineers may be, they tend not to have a clue as to how the brain does what it does. Jeff Hawkins of Numenta is one of the few exceptions -- an electronics tech whiz who has taken the time to learn a lot about the brain.

Now, a multi-disciplinary team at MIT is determined to make up for the past 50 years of myopic approaches to AI. They plan to approach the problem with a new set of assumptions based upon more validated notions of brain, intelligence, consciousness, and mind.
There are three specific areas — having to do with the mind, memory, and the body — where AI research has become stuck, says Gershenfeld, and each of these will be addressed in specific ways by the new project.

The first of these areas, he says, is the nature of the mind: “how do you model thought?” In AI research to date, he says, “what’s been missing is an ecology of models, a system that can solve problems in many ways,” as the mind does.

Part of this difficulty comes from the very nature of the human mind, evolved over billions of years as a complex mix of different functions and systems. “The pieces are very disparate; they’re not necessarily built in a compatible way,” Gershenfeld says. “There’s a similar pattern in AI research. There are lots of pieces that work well to solve some particular problem, and people have tried to fit everything into one of these.” Instead, he says, what’s needed are ways to “make systems made up of lots of pieces” that work together like the different elements of the mind. “Instead of searching for silver bullets, we’re looking at a range of models, trying to integrate them and aggregate them,” he says.
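The "ecology of models" idea can be made concrete with a minimal sketch: several independent problem-solvers, none of them universal, aggregated so that whichever piece can handle a particular problem gets to answer it. The solver names and examples below are purely illustrative, not from the MIT project.

```python
# A toy "ecology of models": disparate pieces, each good at one kind
# of problem, tried in turn rather than forced into a single system.
def exact_arithmetic(problem):
    """Handles simple arithmetic expressions; returns None otherwise."""
    try:
        return eval(problem, {"__builtins__": {}})
    except Exception:
        return None

def lookup_table(problem):
    """Handles a small set of memorized facts; returns None otherwise."""
    known = {"capital of France": "Paris"}
    return known.get(problem)

def ecology_solve(problem, solvers):
    """No silver bullet: aggregate the pieces and use whichever
    one can actually solve this particular problem."""
    for solver in solvers:
        answer = solver(problem)
        if answer is not None:
            return answer
    return "no model applies"

solvers = [exact_arithmetic, lookup_table]
print(ecology_solve("2 + 2", solvers))              # -> 4
print(ecology_solve("capital of France", solvers))  # -> Paris
```

The design point is that neither solver is extended to cover the other's domain; the integration happens at the level above them, which is roughly what the quoted passage is arguing for.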

The second area of focus is memory. Much work in AI has tried to impose an artificial consistency of systems and rules on the messy, complex nature of human thought and memory. “It’s now possible to accumulate the whole life experience of a person, and then reason using these data sets which are full of ambiguities and inconsistencies. That’s how we function — we don’t reason with precise truths,” he says. Computers need to learn “ways to reason that work with, rather than avoid, ambiguity and inconsistency.”
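One hedged illustration of what "reasoning that works with, rather than avoids, ambiguity and inconsistency" might look like: instead of rejecting a data set the moment two entries contradict each other, weigh the conflicting evidence and keep a graded degree of belief. This is only a sketch of the general idea, not the project's actual approach; all names and weights below are made up.

```python
# Combine possibly contradictory (claim, holds, weight) triples into a
# degree of belief in [0, 1] per claim, instead of demanding a fully
# consistent set of boolean truths.
from collections import defaultdict

def graded_belief(observations):
    support = defaultdict(float)
    total = defaultdict(float)
    for claim, holds, weight in observations:
        total[claim] += weight
        if holds:
            support[claim] += weight
    return {claim: support[claim] / total[claim] for claim in total}

# Conflicting life-log entries about the same fact coexist peacefully:
obs = [
    ("met_alice_tuesday", True, 0.9),   # a calendar entry
    ("met_alice_tuesday", False, 0.3),  # a vague recollection
    ("alice_wears_glasses", True, 0.5),
]
beliefs = graded_belief(obs)
print(beliefs["met_alice_tuesday"])  # -> 0.75: a belief, not a boolean
```

The contradiction between the calendar and the recollection is not an error to be purged; it simply lowers confidence, which is closer to how the passage says we actually function.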

And the third focus of the new research has to do with what they describe as “body”: “Computer science and physical science diverged decades ago,” Gershenfeld says. Computers are programmed by writing a sequence of lines of code, but “the mind doesn’t work that way. In the mind, everything happens everywhere all the time.” A new approach to programming, called RALA (for reconfigurable asynchronous logic automata) attempts to “re-implement all of computer science on a base that looks like physics,” he says, representing computations “in a way that has physical units of time and space, so the description of the system aligns with the system it represents.” This could lead to making computers that “run with the fine-grained parallelism the brain uses,” he says.
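To make the "everything happens everywhere all the time" idea more tangible, here is a toy illustration — emphatically not the actual RALA specification — of computing with a grid of simple logic cells, each updating asynchronously from its immediate neighbours. There is no global instruction stream; each update is local in space and takes its own unit of time.

```python
# Toy asynchronous logic-cell grid (illustrative only, not RALA itself).
# Each cell holds one bit and a fixed gate; cells fire in random order,
# recomputing their bit from their left and upper neighbours.
import random

AND = lambda a, b: a & b
OR  = lambda a, b: a | b
XOR = lambda a, b: a ^ b

def step(grid, gates):
    """One asynchronous update: a random cell recomputes its bit
    from its left and upper neighbours (toroidal wrap-around)."""
    n = len(grid)
    i, j = random.randrange(n), random.randrange(n)
    left = grid[i][(j - 1) % n]
    up = grid[(i - 1) % n][j]
    grid[i][j] = gates[i][j](left, up)

n = 4
random.seed(1)
grid = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
gates = [[random.choice([AND, OR, XOR]) for _ in range(n)] for _ in range(n)]

for _ in range(100):  # many small local updates, no global clock
    step(grid, gates)
print(grid)
```

The point of the sketch is the alignment Gershenfeld describes: the description of the computation (local cells, local time steps) has the same shape as the physical system that would run it.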

...Harvard (and former MIT) cognitive psychologist Steven Pinker says that it’s that kind of big picture thinking that has been sorely lacking in AI research in recent years. Since the 1980s, he says, “there was far more focus on getting software products to market, regardless of whether they instantiated interesting principles of intelligent systems that could also illuminate the human mind. This was a real shame, in my mind, because cognitive psychologists (my people) are largely atheoretical lab nerds, linguists are narrowly focused on their own theoretical paradigms, and philosophers of mind are largely uninterested in mechanism.

“The fading of theoretical AI has led to a paucity of theory in the sciences of mind,” Pinker says. “I hope that this new movement brings it back.”

Boyden agrees that the time is ripe for revisiting these big questions, because there have been so many advances in the various fields that contribute to artificial intelligence. “Certainly the ability to image the neurological system and to perturb the neurological system has made great advances in the last few years. And computers have advanced so much — there are supercomputers for a few thousand dollars now that can do a trillion operations per second.” _MachinesLikeUs
One of the biggest problems in AI, according to Al Fin cognitive scientists, is that the personalities of the researchers themselves tend to get in the way of clear thinking about the underlying problem.

The central problems of AI reside within a nearly unapproachable sphere of integrated, interacting energies. Ranging from the sub-atomic to the sociological in scale, this sphere is beyond any one individual's grasp. But it is the multi-level mapping of these energies from the electron to the neuron to the cortex / sub-cortex / entire brain / mind / and body that is currently so vague and ill-defined.

What do we want from AI? We really do not need humanoid robots walking around getting in our way. The MIT team's first project sounds reasonable:
One of the projects being developed by the group is a form of assistive technology they call a brain co-processor. This system, also referred to as a cognitive assistive system, would initially be aimed at people suffering from cognitive disorders such as Alzheimer’s disease. The concept is that it would monitor people’s activities and brain functions, determine when they needed help, and provide exactly the right bit of helpful information — for example, the name of a person who just entered the room, and information about when the patient last saw that person — at just the right time.
Such a device would be a useful form of brain-compatible AI, and a stepping-stone to more powerful devices capable of thinking on their own. The ability to create such an intermediate device would also serve as an assessment of progress in brain / mind compatible AI.
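The quoted co-processor concept — monitor events, decide when help is needed, surface exactly the relevant memory — can be sketched in a few lines. Everything below is hypothetical scaffolding for illustration; the class and method names are invented, not part of the MIT design.

```python
# Hypothetical sketch of the "brain co-processor" concept: log
# encounters, then offer "exactly the right bit of helpful
# information at just the right time" when someone reappears.
from datetime import date

class CognitiveAssistant:
    def __init__(self):
        self.people = {}  # name -> date of last encounter

    def observe(self, name, seen_on):
        """Log an encounter so it can be recalled later."""
        self.people[name] = seen_on

    def prompt_for(self, name, today):
        """Called when the monitored person meets someone: return the
        name plus when they were last seen, as in the quoted example."""
        if name not in self.people:
            return f"This is {name}; you have not met before."
        days = (today - self.people[name]).days
        return f"This is {name}; you last saw them {days} days ago."

assistant = CognitiveAssistant()
assistant.observe("Alice", date(2009, 12, 1))
print(assistant.prompt_for("Alice", date(2009, 12, 7)))
# -> This is Alice; you last saw them 6 days ago.
```

The hard parts, of course, are the pieces this sketch takes for granted: recognizing who entered the room and judging when a prompt helps rather than intrudes.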




Blogger Bruce Hall said...

... speaking of the Obama administration... it's about time their AI was made over.

Monday, 07 December, 2009  
Blogger al fin said...

Too late for that now, Bruce. Perhaps if one had started 40 years ago . . .

Tuesday, 08 December, 2009  


“During times of universal deceit, telling the truth becomes a revolutionary act” _George Orwell

