19 January 2010

Artificial Female Intuition: The Mind of Monica

The field of artificial intelligence has suffered from an excess of reductionist, linear male logic. It could likely use a strong dose of non-linear, chaotic female intuition to bring it closer to fruition. Monica Anderson may be exactly the kind of female thinker that the field of machine intelligence needs right now.

Brian Wang summarized a presentation by Monica Anderson from the recent Foresight 2010 conference (see Anderson's presentation video, 49:34). Brian's brief glimpse of Ms. Anderson's approach to "Artificial Intuition" was interesting enough to lead to the Syntience.com website, and to the intriguing Artificial Intuition website.
Most humans have not been taught logical thinking, but most humans are still intelligent. Most of our daily actions such as walking, talking, and understanding the world are based on Intuition, not Logic.

I will attempt to show that it is implausible that the brain should be based on Logic. I believe Intelligence emerges from millions of nested micro-intuitions, and that true Artificial Intelligence requires Artificial Intuition.

Intuition is surprisingly easy to implement in computers, but requires a lot of memory....

Computer-based intuition - "Artificial Intuition" - is quite straightforward to implement, but requires computers (a recent invention) with a lot of memory (only recently available cheaply enough). These methods were simply unthinkable at the time AI got started, not to mention at the time we discovered the power of Logic and the Scientific Method. The tendency to continue down a chosen path may have delayed the discovery of Artificial Intuition by a few years. - Artificial Intuition
Anderson goes on for several pages elaborating on artificial intuition and on how the concept could compensate for the deep and deadly deficiencies of a primarily logic-based approach to artificial intelligence. It is worth reading for anyone interested in why artificial intelligence has been so slow to develop.

Here are Monica Anderson's videos on Vimeo.com

Ms. Anderson's ideas go in the right direction, generally. I recommend reading the Artificial Intuition website. It is likely that she underestimates the difficulty of implementing intuitive operations in machines at all the multiple levels where they are suspected of operating. She refers to "artificial intuition" as an algorithm, which is unlikely to be accurate on two counts: it is unlikely that "intuition" can be implemented across multiple levels using only a single algorithm, and it is unlikely that machine-implemented intuition will be readily recognized as an "algorithm" by most practitioners of computer science.




Blogger Bruce Hall said...

It would seem that AI has a daunting task to incorporate the various levels of brain functioning that lead to interaction with the external world.

- reflexive
- non-linguistic
- reflective
- associative
- linguistic

and so on.

Terms such as "intuitive" are basically vacuous, in my opinion. It's akin to saying "god" when dealing with the mysteries of the universe. It's like describing intelligence as the "MacGyver Effect": one is looking at the results of the thought process, but not the underlying dynamics.

It's great that computers can play chess given a discrete set of rules, but the "intuitive" side might take a little more programming.

Wednesday, 20 January, 2010  
Blogger al fin said...


Anderson says that programming intuition will take a lot of memory, but is conceptually easy. That assertion is hard to swallow, but most of her ideas are useful in defining many of the problems that AI faces.

Most AI people have charged gung-ho into the fray without really understanding the challenge.

Wednesday, 20 January, 2010  
Blogger Monica Anderson said...


I addressed your points in my talk. You can watch an earlier recording of essentially the same talk at http://videos.syntience.com (the first one on the page). Look for the slide that discusses "The Mundane" as not being part of either "The Rational" or "The Mystical".

Enumerating the different functions of the brain, as you do, is a typical Reductionist approach. That's not what we're about. We're building electric grey matter in bulk. With few exceptions, all parts of the brain are made of similar stuff. Artificial Intuition is basically a very powerful machine learning algorithm that can learn languages (any language) using unsupervised training.


Programming the behavior of one neuron (or one synapse) is easy. Not much code, some fraction of our total of 15,000 lines. Then we instantiate a few million of them as the program is reading Jane Austen, trying to learn English. We use up 32 GB of RAM in less than two chapters. There's no conflict here.
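Monica's memory arithmetic is easy to illustrate with a toy sketch. To be clear, the `Unit` class and its adjacent-token rule below are invented for this example, not Syntience's actual code: the point is only that one unit takes little code, while instantiating a unit per observed pattern during reading multiplies the memory cost.

```python
class Unit:
    """A toy 'synapse': connects two tokens and tracks how often it fires."""
    __slots__ = ("a", "b", "count")

    def __init__(self, a, b):
        self.a, self.b = a, b
        self.count = 0

def read_text(text, units):
    """Instantiate a unit for every new adjacent-token pair encountered."""
    tokens = text.lower().split()
    for a, b in zip(tokens, tokens[1:]):
        if (a, b) not in units:
            units[(a, b)] = Unit(a, b)
        units[(a, b)].count += 1
    return units

units = {}
read_text("it is a truth universally acknowledged that a single man "
          "in possession of a good fortune must be in want of a wife", units)
print(len(units))  # distinct units after one sentence of Jane Austen
```

Scaled up to whole novels, the dictionary of instantiated units, not the code that defines them, is what dominates memory, which is the point of the 15,000-lines-versus-32-GB contrast above.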

Oh, and I have 20+ years of experience with Industrial grade AI. Check my resume. I charged into AI gung-ho in 1980 and fell into the same "logical/reductionist AI" trap as everyone else. This is my second, carefully calculated entry (full time AI researcher since 2001, except for two years at Google).

- Monica Anderson

Wednesday, 20 January, 2010  
Blogger Dave said...

How do you teach a computer English by making it read Jane Austen? Could I give you a book in Chinese and have you learn Chinese by reading it? People understand language because words have meanings for them relative to other experiences.

Wednesday, 20 January, 2010  
Blogger Monica Anderson said...

The same way children learn their first language. By osmosis, immersion, and by discovery. We don't give them grammars or lists of words. The number of words parents teach children is but a small fraction of all the words they pick up on their own.

You could in fact give me a book in a foreign language and I'd learn something of the language from it. For starters, I'd be able to discover typos and grammatical errors in text I'd never seen before, after sufficient exposure to books with correct language. This counts as knowledge of the language, and if computers could do this well, then that would be worth a lot of money. Yes, Chinese is very different from western languages, so I'd have more trouble with it than, say, Estonian or Dutch.
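The typo-spotting ability described above can be approximated with standard n-gram statistics. A minimal sketch, assuming a character-bigram model trained on presumed-correct text (this is textbook anomaly detection, not necessarily Syntience's method):

```python
from collections import Counter

def char_bigrams(text):
    t = text.lower()
    return [t[i:i + 2] for i in range(len(t) - 1)]

def train(corpus):
    """Count character bigrams in presumed-correct text."""
    return Counter(char_bigrams(corpus))

def suspicion(word, model):
    """Fraction of a word's bigrams never seen in training.
    Higher means the word looks less like the training language."""
    grams = char_bigrams(word)
    if not grams:
        return 0.0
    unseen = sum(1 for g in grams if model[g] == 0)
    return unseen / len(grams)

model = train("it is a truth universally acknowledged that a single man "
              "in possession of a good fortune must be in want of a wife")
print(suspicion("fortune", model))  # seen in training: 0.0
print(suspicion("fxrtqne", model))  # garbled: most bigrams unseen
```

Words whose letter combinations were never seen in training score high, which is one crude way a system exposed only to correct language can flag likely typos without ever being given a dictionary or a grammar.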

Your last sentence expresses what's called the Embodiment Hypothesis. I don't accept it at face value; the issue is complex. But intelligence comes in many forms, and just because this one makes sense to us as humans doesn't necessarily make it true for every system. Yes, we are using our system to learn languages from text. If we had used it to teach a robot to walk, then nobody would have protested. But the tasks are amazingly similar at the neuron level. It's neurons connecting to other neurons using synapses, trying to get better at their task, whatever it is - walking or talking.

Wednesday, 20 January, 2010  
Anonymous Anonymous said...

Monica, your comment above makes me think that you are attempting to build HAL from 2001: A Space Odyssey. At the end of 2001 as Dave Bowman is disabling HAL, HAL begins to reminisce about its early training and begins to sing the song Daisy.

Maybe Jane Austen is too hard for an electronic infant.

Wednesday, 20 January, 2010  
Blogger Monica Anderson said...

I never thought of that. I always assumed HAL was a thoroughly Reductionist contraption. But you are right, the fact that he was taught to sing "Daisy Bell" indicates that he was in fact based on Intuition.

HAL could not handle the conflicts caused by knowing the true purpose of the mission and went insane. I used to attribute that to Reductionist brittleness but since that kind of madness could happen to people too, maybe I was wrong.

Something to think about. Thanks.

About "being too hard" - We don't censor what newborn children are exposed to. They get it full throttle - Shapes, colors, sounds, movement, proprioception, muscle movement, language, abstractions. It's "figure it out, or die". It takes a while, but most of us get it.

- Monica

Wednesday, 20 January, 2010  
Blogger al fin said...

Monica: Thanks for your comments. I enjoyed your discussion at the "Artificial Intuition" website very much. I also enjoyed your video posted above. You point out some crucial problems in the efforts to create AI up to this point.

I had already looked at your resume before you suggested I do so, and I willingly concede your background in computer science.

We are living in a multi-disciplinary world, where no one discipline is likely to discover a magic key to machine cognition. A background in computer science, with AI experience, is not likely to provide you with all the pieces you will need.

I wish you good luck in finding the right collaborators who can bring some of the missing puzzle pieces to the effort.

As you say, even modest artificially intuitive implementations can be worth a lot of money.

Wednesday, 20 January, 2010  
Blogger Monica Anderson said...

I fully agree with your statement about a multi-disciplinary world. Nobody should ever enter the field of AI without a thorough grounding in Epistemology, some basis in Neuroscience, Linguistics, Philosophy of Mind, Philosophy of Science, Selectionism, and (yes) Logic. Knowing a couple languages helps if languages and/or learning is going to be your focus.

Knowing programming is almost a negative, since it implies the student may already have adopted a Reductionist stance. Programming is the most Reductionist profession there is. Intelligence is emergent and holistic, and therefore one of the main problems with AI is that it has been done by programmers.

About collaborators and supporters: I'm at the point where they find me; I no longer have to look for them. And converting hardline Reductionists has never worked for me. Not once.

Wednesday, 20 January, 2010  
Anonymous Anonymous said...

Monica, we do limit what we expose children to in the educational system. First children are taught letters, then the sounds of those letters, then the children begin to sound out simple words. HAL's training began with learning to sing a simple song, long before he was ever exposed to Jane Austen.

I think that children start the same way. First children begin to associate sound with people, then they begin to recognize words, and then associate emotional states with words, then meanings.

I wasn't trying to imply that HAL was intuitive, it's just that your work reminded me of one of my favorite movies.

Wednesday, 20 January, 2010  
Blogger Monica Anderson said...


Consider the skills of learning to see - to understand that objects and people are stable and recurring entities worth identifying and remembering. We don't particularly throttle what we allow newborns to see. Same for spoken language. Children in school already know spoken language.

What we do in the standard educational system might not even be optimal. How does Montessori teach reading and writing?

I like 2001 and I like the idea that HAL was intuition-based a lot. I think I'll use that in future talks. But I better watch the movie first to find other examples beside the song. HAL was creepy - "uncanny" - exactly because he was so close to human.

Wednesday, 20 January, 2010  
Anonymous Anonymous said...

HAL was intended to be as human as possible, and eventually he made the jump to fully human - he committed an act of murder.

I think children block out information and work on one type of recognition at a time.

First children learn the difference between light and dark, and suppress the recognition of other information.

Then children begin to recognize shapes in the light, and ignore all other information during this step.

Then children distinguish between objects and living things, and ignore other info...

I think the reason this appears intuitive is that the children progress through each discrete step so fast that we usually don't see the differences between the steps.

As for your criticism of the teaching method I mentioned, it is called phonics, and it hasn't been widely used in the US since the mid-1980s. Nowadays schools use what is called the whole language method, whereby children are given simple books and taught to associate the sound of the whole word with the script representing it. The WL method is a failed method, as I observed amongst my classmates in high school. I was taught to read using the phonics method, whereas my classmates were probably taught using the WL method. When it came time to read a passage before the class, most of my classmates, despite being literate, would stumble over pronouncing words. They could read to themselves just fine, but reading out loud in front of a small class seemed to befuddle them. One of my classmates now has a master's degree in some form of lab-intensive biology, but always had problems reading in front of the class.

Anyway, I think you should start simple with your electronic children: you should sing to them.

Maybe you could even sing Daisy...

Wednesday, 20 January, 2010  
Blogger Dave said...

Monica, don't get me wrong - I think you're on to something. However, I'm not convinced that any amount of reading in any completely foreign language will give you, or any AI, understanding of it. You need to give it the ability to form associations, and not just symbolic ones of the type you'd be able to create by reading a lot. It needs to be able to associate words with the things they represent. How could something understand English but never have experienced any meanings of the words? How can something talk about a cat without ever having seen, heard, touched, tasted, or smelled one?

Your memory usage is exploding because the AI system has no way of compressing any of the information, because it can't associate things. If you want it to learn, make it watch TV at least, so it will be able to associate concepts and form hierarchies.

Thursday, 21 January, 2010  
Blogger Monica Anderson said...

This is the Embodiment Argument. It's nowhere near settled. Look it up.

How can we build nuclear power plants when nobody's ever seen, touched, or smelled a proton? How come you know things about Paris although you've never been there? You read about these things in books and they make sense to you. Having multiple senses is a help but not strictly necessary. Blind people learn spoken language. Believing otherwise may well be an ethnocentric fallacy.

Thursday, 21 January, 2010  
Blogger Dave said...

We can build power plants because, although no one has ever seen a proton directly, we have seen models of protons. If a computer has seen a model of a cat, that works fine for me.

I read things in books and they make sense to me because I associate the words with things I've already seen and build a model of what I think Paris is in my head.

This has less to do with senses than it does with experience. It happens that we experience things through the senses. Blind people learn spoken language because they experience things through their other senses. Helen Keller learned what the world looked like through touch, built a model in her head, associated words to describe that model, and then learned how to speak.

If what you are saying is true then codebreakers could simply read a lot of an enemy's code to figure out what it says.

Trust me, I hope your idea works, but I think you are leaving on the table a lot of evidence about how the only intelligent thing in history ever developed.

Thursday, 21 January, 2010  
Blogger al fin said...


Al Fin cognitive scientists assure me that you make some good points.

But understand that Monica is not in the business merely for her health. She intends to produce a range of products that earn a healthy income. She works with collaborators who may well bring in some of the ideas you discuss.

It is not in her best interest to reveal all of her working concepts to the competition.

The concept of embodied intelligence is crucial to human consciousness and cognition. But many persons with engineering or computer science backgrounds see cognition as symbolic at its core.

So whether or not Monica has truly closed the door on "embodied" processes to create intentional cognition, or whether she is simply being wisely silent on some aspects of her research -- we may not settle that issue here to our satisfaction.

Thursday, 21 January, 2010  
Blogger Dave said...

Thanks Al. Again, I hope she succeeds!

Thursday, 21 January, 2010  
Blogger Monica Anderson said...

The video of my talk at the Bay Area Future Salon on Thursday Jan 21 2010 is now available at our video site http://videos.syntience.com

The title is "Science Beyond Reductionism". Most of the material discussed is different from the first video quoted on this page.

In the talk, I used the Netflix challenge as a vehicle to demonstrate how Model Free Methods could be used on a concrete problem. Amazingly enough, an audience member had tried a version of the very solution I suggested (Collaborative Filtering, well known in the literature) and (if I understood this right) had achieved an improvement about half as good as the prize-winning entry, which is not bad considering the difference in effort required. The model based (winning) entry involved tweaking 107 models.
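For readers unfamiliar with collaborative filtering, here is a minimal user-based sketch. The ratings data and movie names are invented for illustration, and the Netflix winners used far more elaborate methods; this only shows the basic mechanism.

```python
import math

# Toy user -> {movie: rating} data, invented for illustration.
ratings = {
    "ann": {"alien": 5, "heat": 4},
    "bob": {"alien": 4, "heat": 5, "up": 2},
    "cat": {"alien": 1, "heat": 2, "up": 5},
}

def similarity(u, v):
    """Cosine similarity over the movies both users rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    dot = sum(ratings[u][m] * ratings[v][m] for m in common)
    nu = math.sqrt(sum(ratings[u][m] ** 2 for m in common))
    nv = math.sqrt(sum(ratings[v][m] ** 2 for m in common))
    return dot / (nu * nv)

def predict(user, movie):
    """Similarity-weighted average of other users' ratings for the movie."""
    pairs = [(similarity(user, v), r[movie])
             for v, r in ratings.items() if v != user and movie in r]
    total = sum(s for s, _ in pairs)
    return sum(s * r for s, r in pairs) / total if total else None

print(predict("ann", "up"))  # weighted between bob's 2 and cat's 5
```

The prediction is just a similarity-weighted average over other users: no model of what the movies are "about" is ever built, which is what makes this family of methods model-free in the sense used in the talk.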

Monday, 25 January, 2010  


“During times of universal deceit, telling the truth becomes a revolutionary act” - George Orwell
