Singularity Summit in San Francisco, 8-9 September 2007
Michael at Accelerating Future announces several upcoming Singularity events. One of these is Singularity Summit II: AI and the Future of Humanity, September 8-9 at the Palace of Fine Arts Theatre in San Francisco. Go here for photos and bios of the speakers.
The Singularity movement is fascinating, but it is not new. Long before Kurzweil, there was Timothy Leary's SMI2LE, and before Leary there was Teilhard de Chardin's Omega Point, and so on and on.
Here is a quote from the overview page of the Summit, attributed to Kurzweil:
"...this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. Understanding the singularity will alter our perspective on the significance of our past and the ramifications for our future. To truly understand it inherently changes one's view of life in general and one's own particular life."
Notice how similar to a religious (or quasi-religious) conversion experience this sounds. It is described as an external event that forces a psychological discontinuity on intelligent beings.
Although it will be difficult to "drop out" of the Singularity, it is inevitable that many people will try. Certainly North Korea's Kim will try, and many Muslim groups will also try to avoid the Singularity tsunami.
I strongly suspect that most "connected" humans will adjust their time frames, or internal clock speeds, to compensate somewhat for the accelerating change.
There is always a time lag between the inventive idea and the real product of change. That lag has already been shortened by advanced telecom and powerful software and hardware tools. Once high-speed personal prototypers are widely available, and particularly once molecular manufacturing arrives, it will shrink even further.
Many singularitarians believe that better-than-human machine intelligences will be necessary before the pace of technological change becomes breathless enough to be called "the Singularity." More likely, to my mind, is that ever more hyper-specialised machines and software will be developed to act as "augments" to particular areas of research and development. These specialised machines will not be intelligent in the general sense, any more than championship chess-playing computers are intelligent outside their narrowly designed and programmed specialty.
When very clever hardware designs are combined with well-crafted software and specialised physical actuators, machines can be an enormous help without possessing general intelligence.
The idea that faster processors or cleverer algorithms are all that is needed is more likely slowing down the actual achievement of general machine intelligence. That is chasing a wild goose.
It is clear to me that the modularity of the human brain will have to be imitated in hardware before humans can build anything that possesses human-like intelligence. It is also clear to me that going from chimp-like intelligence to human-like intelligence to superhuman-like intelligence will not be a near-instant process.
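To make the modularity idea concrete, here is a minimal sketch in Python of specialist modules behind a dispatcher. The module names and routing rule are hypothetical illustrations, not a claim about how brains or future hardware will actually be organised: each module is competent only in its own niche, and nothing in the system has general intelligence.

```python
from typing import Callable, Dict, List

# Toy sketch of a modular architecture: independent specialist "modules"
# behind a dispatcher. Module names and routing are hypothetical; this
# illustrates the structural idea only, not an actual brain model.

def edge_detector(signal: List[int]) -> List[int]:
    """'Visual' specialist: mark positions where the input changes."""
    return [int(a != b) for a, b in zip(signal, signal[1:])]

def repeater_predictor(signal: List[int]) -> int:
    """'Temporal' specialist: naively predict the next value by repetition."""
    return signal[-1]

MODULES: Dict[str, Callable] = {
    "vision": edge_detector,
    "prediction": repeater_predictor,
}

def dispatch(task: str, signal: List[int]):
    # Each module is competent only in its own niche; what little
    # "generality" exists lives in the routing, not in any module.
    return MODULES[task](signal)

print(dispatch("vision", [0, 0, 1, 1, 0]))   # -> [0, 1, 0, 1]
print(dispatch("prediction", [2, 4, 8]))     # -> 8
```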
The actual hardware involved will be far more complex than any "thinking machines" humans have devised up until now. Trying to achieve this with von Neumann-style digital computers is, in my opinion, a laughable proposition. More later.
Labels: Singularity
4 Comments:
The current Singularity movement is new, basically founded by Eliezer Yudkowsky in 2000 with the publication of the Singularitarian Principles. Kurzweil is not really promoting a Singularity movement per se. The Yudkowskian movement centers around creating seed AI.
Kurzweil portrays things quasi-religiously because he thinks people like it. SIAI-involved Singularitarians are remarkably deadpan about the "spiritual" aspects; we are mainly focused on the immediacy of ensuring human-friendly SI is produced in advance of human-indifferent SI.
No need for a tsunami. A Singularity can seem quite continuous. An SI can emerge, acquire extremely powerful technology, and use it only to change the "background rules", such as removing death or disease.
Regarding takeoff speed, have you read the last part of LOGI?
I agree with you that increased computation will not automatically produce AI: Kurzweil and Moravec are the only ones who imply it might; most other AGI enthusiasts never do.
Von Neumann digital hardware can do anything parallel computing can do, as long as you have enough of it. With molecular computers, sufficient quantities of computing power would be available.
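To illustrate the commenter's point, here is a minimal sketch in Python (the update rule and buffer scheme are my own illustrative assumptions): a plain serial loop with double buffering reproduces exactly what a synchronous parallel update of N cells would compute, just N times more slowly. Time, not expressive power, is what the serial machine trades away.

```python
# A serial (von Neumann-style) loop emulating one synchronous step of a
# massively parallel update. Each "cell" reads only the OLD state and
# writes into a fresh buffer, so the result is identical to all cells
# updating at once; it just takes N sequential iterations instead of 1.

def parallel_step(state, rule):
    """Emulate one parallel update of every cell on serial hardware."""
    n = len(state)
    new_state = [0.0] * n
    for i in range(n):                 # serial stand-in for n parallel units
        left, centre, right = state[i - 1], state[i], state[(i + 1) % n]
        new_state[i] = rule(left, centre, right)
    return new_state

# Illustrative rule: each cell relaxes toward its neighbourhood average.
smooth = lambda l, c, r: (l + c + r) / 3.0

state = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(3):
    state = parallel_step(state, smooth)
print(state)   # the spike has diffused, exactly as a parallel machine would compute
```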
In particular with reference to the von Neumann point... incidentally, I find it laughable to think that something more than traditional digital computers would be necessary to implement intelligence.
In my experience so far, the idea that von Neumann computing is not sufficient is connected to the idea that intelligence is not merely a toolbox of specialized algorithms interoperating efficiently, but a "mystic process" that requires "something more" than "mere von Neumann computers". If there is no dualistic division between intelligence and complex software, then why should special computing be required? To postulate that it is required is merely to continue a centuries-long tradition of human exceptionalism.
In your future post on computing requirements for intelligence, explain whether or not a shrimp brain can be implemented on von Neumann computing, and note that the fundamental neurophysiological underpinnings of a shrimp brain and a human brain are not much different. It is the arrangement that produces intelligence.
Human brains and human behaviour are much more complex than shrimp brains and shrimp behaviour.
Sure, there are similarities at the molecular, biochemical, and physiological scales. But at higher levels of complexity, the brains and behaviours diverge.
Von Neumann himself, like the early AI pioneers, believed the problem of human consciousness could be solved in a matter of decades if not years. Even had he not died of cancer, that particular goal would not have been achieved.
The problem is one of complexity. As fine a mathematician as JVN was, he could not overcome the time-complexity problem of computing. A von Neumann computer may be able to solve a particular problem in principle, but if the solution does not arrive within the projected remaining age of the universe, it does no one any good.
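A back-of-the-envelope calculation makes the point. Assuming, for illustration, a machine that checks a billion candidates per second and a universe roughly 4.3 x 10^17 seconds old, a brute-force search over 2^n possibilities is already hopeless once n nears 90:

```python
import math

# Back-of-the-envelope: how big can a brute-force search (2**n candidates)
# be before a serial machine outlives the universe? Both figures below are
# illustrative assumptions, not measurements.
OPS_PER_SECOND = 1e9          # assumed: one billion candidates checked per second
AGE_OF_UNIVERSE_S = 4.3e17    # assumed: ~13.7 billion years, in seconds

max_n = math.log2(OPS_PER_SECOND * AGE_OF_UNIVERSE_S)
print(f"Largest exhaustively searchable n: about {max_n:.0f} bits")
# About 88 bits. A 100-bit search space is out of reach no matter how long
# you wait; only a better algorithm (or a different model of computation)
# changes the picture.
```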
More later.
One way to "design" an intelligent machine is evolutionary design.
Because this method of design allows for a large measure of serendipitous discovery, it is a viable approach for designing complex systems that humans have little idea how to build.
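As a concrete (if toy) illustration, here is a minimal genetic-algorithm sketch in Python; the target pattern, fitness function, and parameters are arbitrary assumptions. The point is structural: the designer specifies only how to score candidates, and variation plus selection discover the construction.

```python
import random

# Toy evolutionary design: evolve bit-strings toward a target pattern.
# The target, population size, and mutation rate are arbitrary assumptions.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
POP_SIZE, MUTATION_RATE = 30, 0.05

def fitness(genome):
    """Score a candidate by how many positions match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    """Splice two parents at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                   # perfect match found
    parents = population[: POP_SIZE // 2]       # truncation selection
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]
    population = parents + children

print(f"Generation {generation}: best candidate {population[0]}")
```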
This type of approach will be very important in nano-design as well, given the unexpected problems certain to arise in that almost completely unexplored realm.
Von Neumann digital computers will be invaluable tools in the design/test/re-design cycle--until we come up with better designs, of course.
“During times of universal deceit, telling the truth becomes a revolutionary act” – George Orwell