01 April 2009

Is the Singularity Near? It All Depends Upon Which Singularity You Are Talking About

If you are counting on being bailed out of the "human stupidity quagmire" by a super-intelligent machine, you may have to wait a very long time. A recent New York Times Guest Column looks at the difference between brains and computers, suggesting a reason or two why Ray Kurzweil's time projections to the "singularity" may be wildly wrong.
...engineers could learn a thing or two from brain strategies. For example, even the most advanced computers have difficulty telling a dog from a cat, something that can be done at a glance by a toddler — or a cat. We use emotions, the brain’s steersman, to assign value to our experiences and to future possibilities, often allowing us to evaluate potential outcomes efficiently and rapidly when information is uncertain. In general, we bring an extraordinary amount of background information to bear on seemingly simple tasks, allowing us to make inferences that are difficult for machines. _NYT
There is also the problem of power consumption. Since computer power consumption rises explosively with processing speed, a Kurzweilian 2025 "human-equivalent computer" would require a gigawatt of power to function. Imagine if every human being required the dedicated power output of a full-sized nuclear fission reactor, just to function normally!
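
To put that in perspective, here is a quick back-of-envelope sketch in Python. The roughly 20 watt figure for the brain is the commonly cited estimate; the 1 gigawatt machine is the figure discussed above, and the world population is just an illustrative round number.

    # Back-of-envelope power comparison. The ~20 W brain figure is the commonly
    # cited estimate; the 1 GW machine figure is the one discussed above; the
    # population number is a rough 2009 approximation.
    BRAIN_WATTS = 20.0        # approximate power draw of a human brain
    MACHINE_WATTS = 1.0e9     # 1 GW for a hypothetical "human-equivalent computer"

    ratio = MACHINE_WATTS / BRAIN_WATTS
    print(f"Machine-to-brain power ratio: {ratio:,.0f}x")   # ~50,000,000x

    # If every person needed a dedicated gigawatt:
    PEOPLE = 7.0e9            # very rough 2009 world population
    total_terawatts = PEOPLE * MACHINE_WATTS / 1e12
    print(f"Total demand: {total_terawatts:,.0f} TW")       # ~7,000,000 TW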

A very different singularity -- the biosingularity -- is being approached at breakneck speed by bioresearchers. For example, researchers can now observe individual cells' responses to various stimuli in real time, using optical microscopy. An immense amount of data will be generated from the observation of even a single cell. So don't throw away those computers. We will need them to store, correlate, and make sense of all that data.
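
To put a rough number on "immense", consider a day of time-lapse imaging of one cell. The sketch below is purely illustrative; every parameter (frame size, bit depth, channel count, frame rate, duration) is an assumed round number, not a figure from any particular study.

    # Rough estimate of data from a day of time-lapse microscopy of one cell.
    # All parameters are illustrative assumptions.
    FRAME_PIXELS    = 2048 * 2048   # one camera frame
    BYTES_PER_PIXEL = 2             # 16-bit grayscale
    CHANNELS        = 3             # e.g. three fluorescent labels
    FRAMES_PER_MIN  = 60            # one frame per second
    HOURS           = 24            # a day-long observation

    bytes_total = (FRAME_PIXELS * BYTES_PER_PIXEL * CHANNELS
                   * FRAMES_PER_MIN * 60 * HOURS)
    print(f"~{bytes_total / 1e12:.1f} TB per cell per day")   # ~2.2 TB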

Here are some more dam-busters:

Mitochondria play a much larger role in normal life processes than we thought.

Near real-time PCR.

Monitoring the epigenetic status of all body tissues.

Better ways to gauge gene expression.

...And many more second-order and higher improvements in bio-research.

But don't give up on your dreams of human-equivalent and better machine intelligence. It will happen. First we will need to better understand the "proof of concept" of intelligent machines -- our brains. Researchers working separately at Hebrew University and at the Karolinska Institute are taking different yet promising approaches to brain modeling and simulation, hoping for breakthroughs in understanding that might lead to bigger breakthroughs, and so on...
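
For a taste of what "brain modeling and simulation" involves at the very bottom of the stack, here is a minimal leaky integrate-and-fire neuron -- a toy sketch of the general idea, with made-up parameters, and not the actual models used by either research group.

    # Minimal leaky integrate-and-fire neuron (toy illustration only; not the
    # models used by the Hebrew University or Karolinska researchers).
    # The membrane potential V decays toward rest, is pushed up by an input
    # current, and "spikes" (then resets) whenever it crosses threshold.
    dt, T = 0.1e-3, 0.5                 # time step and total time (seconds)
    tau_m, V_rest = 20e-3, -70e-3       # membrane time constant, resting potential
    V_thresh, V_reset = -50e-3, -70e-3  # spike threshold, reset potential
    R_m, I_in = 10e6, 2.1e-9            # membrane resistance (ohm), input current (A)

    V = V_rest
    spike_times = []
    for step in range(int(T / dt)):
        V += (-(V - V_rest) + R_m * I_in) / tau_m * dt
        if V >= V_thresh:               # threshold crossed: spike and reset
            spike_times.append(step * dt)
            V = V_reset

    print(f"{len(spike_times)} spikes in {T} s")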

The Biosingularity may just hit us a decade or two before the AI Singularity. That might give us enough time to prepare -- to augment ourselves well enough to survive the emergence of intelligent machines. If we are ready for them when they come, we are more likely to arrive at an amiable understanding with each other.

3 Comments:

Blogger yamahaeleven said...

Arguments on the advent of super-intelligence are amusing: so much polarization! One person says within a couple of decades, and here's proof; another says not in a million years, and here's proof.

Some seem to forget a system will only need to be sufficiently intelligent at the start, not perfect right out of the box, and certainly not a carbon copy of how our brains do the job. It will be a hack!

From my semi-sideline perch, I find this all very fascinating. One by one, human activities in which computers were once considered incapable of achieving fluency have been mastered. A double Moore's law seems to apply. Initially, it takes much brute-force hardware and software. Inexorably, the hardware requirements diminish as the algorithms improve. Seems unlikely that this process would suddenly not apply to the broad realm of intelligence, particularly when these steps become suitably automated...

Wednesday, 01 April, 2009  
Blogger al fin said...

Computers are good at games that have well-defined rules: chess, Go, checkers, math, etc.

But of course, they were made and programmed by humans, and draw electrical power generated and transported by human-made machines and grids.

The limits of algorithms are too often ignored by the Kurzweilians. Algorithmic development does not follow Moore's law, and intelligence is not algorithmic -- which is a source of great confusion to many AI people. A confusion of logical categories, as it were.

It is far easier to use terms such as "the broad realm of intelligence" than to actually understand what such a realm might encompass.

Wednesday, 01 April, 2009  
Blogger kurt9 said...

Very good point. The biosingularity, unlike the much-ballyhooed AI singularity, is very real and is likely to happen in the next 2-3 decades. At least it will give us radical life extension, which in turn gives us all the time in the world to work on all of the other stuff we want.

Wednesday, 01 April, 2009  
