Debating a Near-Term Singularity: Kurzweil vs. Allen
When will humanity reach the Singularity, that now-famous point in time when artificial intelligence becomes greater than human intelligence? The name is apt, say proponents like Ray Kurzweil: as with the singularity at the center of a black hole, we have no idea what happens once we reach it. However, the debate today is not about what happens after the Singularity, but about when it will happen. _BigThink

In the video below, Kurzweil discusses some of his ideas about the coming Singularity, including timelines and cautionary notes.
Microsoft co-founder and billionaire Paul Allen recently expressed skepticism about Kurzweil's timeline for the singularity, in a Technology Review article.
Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they'll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It's heady stuff.

Allen goes on to discuss the "complexity brake": the slowing effect that the limitations of the human brain (and the limits of human understanding of the human brain) will impose on any endeavour whose complexity accelerates too quickly.
While we suppose this kind of singularity might one day occur, we don't think it is near.
...Kurzweil's reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these "laws" will work until they don't. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer's hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn't enough to just run today's software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this. _Technology Review_, Paul Allen
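Allen's point that extrapolated "laws" work until they don't can be illustrated numerically: a saturating logistic (S-curve) trend is nearly indistinguishable from pure exponential growth in its early phase, and the divergence only shows up near the ceiling. The sketch below uses arbitrary illustrative parameters, not fitted technology data.

```python
import math

# An exponential curve and a logistic (saturating) curve that agree at t=0
# and share the same early growth rate. Parameters are illustrative only.
def exponential(t, r=0.5):
    return math.exp(r * t)

def logistic(t, ceiling=1000.0, r=0.5):
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-r * t))

# Early on the two are nearly identical; later they diverge dramatically.
for t in [1, 5, 10, 20]:
    e, l = exponential(t), logistic(t)
    print(f"t={t:2d}  exponential={e:10.1f}  logistic={l:8.1f}")
```

An observer watching only the early data points has no way to tell, from the trend alone, which regime they are in.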
Allen's argument is remarkably similar to arguments previously put forward by Al Fin neuroscientists and cognitivists. The actual way the human brain works is very poorly understood -- even by the best neuroscientists and cognitivists. If that is true of the specialists, the understanding of the brain among artificial intelligence researchers tends to be orders of magnitude poorer. If these are the people who are supposed to come up with the super-human intelligence and the "uploading of human brains" technology that posthuman wannabes are counting on, good luck!
But now, Ray Kurzweil has chosen the same forum to respond to Paul Allen's objections:
Allen writes that "the Law of Accelerating Returns (LOAR). . . is not a physical law." I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk. So by definition, we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are highly predictable according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory as quantified by basic measures of price-performance and capacity nonetheless follow remarkably predictable paths.

Kurzweil's attitude seems to be: "Because difficult problems have arisen and been solved in the past, we can expect that all difficult problems that arise in the future will also be solved." Perhaps I am being unfair to Kurzweil here, but his reasoning appears to be fallacious in a rather facile way.
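Kurzweil's thermodynamics analogy is easy to check in simulation: individual random walks are unpredictable, but an ensemble statistic is not. For a unit-step one-dimensional random walk, the mean squared displacement after n steps converges to n. This is a toy illustration of emergent statistical regularity, not a claim about technology trends themselves.

```python
import random

random.seed(0)

def random_walk(steps):
    """Final position of one particle after `steps` unit steps of +1 or -1."""
    pos = 0
    for _ in range(steps):
        pos += random.choice((-1, 1))
    return pos

# No single particle's endpoint is predictable, but the ensemble average
# of squared displacement is: it converges to the number of steps taken.
steps, particles = 1000, 5000
msd = sum(random_walk(steps) ** 2 for _ in range(particles)) / particles
print(f"mean squared displacement: {msd:.0f} (theory predicts {steps})")
```

Whether price-performance curves enjoy the same kind of law-like aggregation as gas particles is, of course, exactly the point under dispute.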
...Allen writes that "these 'laws' work until they don't." Here, Allen is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining the trend of creating ever-smaller vacuum tubes, the paradigm for improving computation in the 1950s, it's true that this specific trend continued until it didn't. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm.
...Allen's statement that every structure and neural circuit is unique is simply impossible. That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome. And while the translation of the genome into a brain is not straightforward, the brain cannot have more design information than the genome. Note that epigenetic information (such as the peptides controlling gene expression) does not appreciably add to the amount of information in the genome. Experience and learning do add significantly to the amount of information, but the same can be said of AI systems.
...How do we get on the order of 100 trillion connections in the brain from only tens of millions of bytes of design information? Obviously, the answer is through redundancy. There are on the order of a billion pattern-recognition mechanisms in the cortex. They are interconnected in intricate ways, but even in the connections there is massive redundancy. The cerebellum also has billions of repeated patterns of neurons.
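Kurzweil's redundancy argument is essentially a compression argument: a structure built from massively repeated patterns needs far less "design information" than its raw size suggests. A toy demonstration, using synthetic byte strings rather than any real connectivity data, compares how well a repeated-motif "connection map" compresses against a same-length map of unique values.

```python
import random
import zlib

random.seed(0)

# One small "pattern-recognition module" repeated en masse, versus a
# structure where every byte is unique. Purely synthetic toy data.
motif = bytes(random.randrange(256) for _ in range(64))
redundant = motif * 10_000                                   # repeated design
unique = bytes(random.randrange(256) for _ in range(len(redundant)))

ratios = {}
for name, data in [("redundant", redundant), ("unique", unique)]:
    ratios[name] = len(zlib.compress(data)) / len(data)
    print(f"{name}: {len(data)} bytes, compressed ratio {ratios[name]:.4f}")
```

The redundant structure compresses to a tiny fraction of its size, while the unique one barely compresses at all; this is the sense in which tens of millions of bytes of genomic "design" could specify trillions of (redundant) connections.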
...Allen mischaracterizes my proposal to learn about the brain from scanning the brain to understand its fine structure. It is not my proposal to simulate an entire brain "bottom up" without understanding the information processing functions. We do need to understand in detail how individual types of neurons work, and then gather information about how functional modules are connected. The functional methods that are derived from this type of analysis can then guide the development of intelligent systems. Basically, we are looking for biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. _Technology Review_, Ray Kurzweil
Al Fin neuroscientists and cognitivists warn Kurzweil and other singularity enthusiasts not to confuse the cerebellum with the cerebrum, in terms of complexity. They further warn Kurzweil not to assume that a machine intelligence researcher can simply program a machine to emulate neurons and neuronal networks to a certain level of fidelity, and then vastly expand that model to the point that it achieves human-level intelligence. That is a dead end trap, which will end up wasting many billions of dollars of research funds in North America, Europe, and elsewhere.
This debate has barely entered its opening phase. Paul Allen is ahead in terms of a realistic appraisal of the difficulties ahead. Ray Kurzweil scores points based upon his endless optimism and his proven record of skillful reductionistic analyses and solutions of previous problems.
Simply put, the singularity is not nearly as near as Mr. Kurzweil predicts. But the problem should not be considered impossible. Clearly, we will need a much smarter breed of human before we can see our way clear to the singularity. As smart as Mr. Kurzweil is, and as rich as Mr. Allen is, we are going to need something more from the humans who eventually birth the singularity.
Written originally for Al Fin, the Next Level
Labels: artificial intelligence, Singularity
5 Comments:
You get the sense that Kurzweil NEEDS this to be true since he's approaching his own death. He's on some wacky "life extension" plan his quack put him on. He takes a bunch of expensive supplements and does a lot of low intensity cardio (hip/spine fracture: here he comes!). He video conferences with this doctor regularly.
The problem I have with this is that his perspective reaches all the way back to, well, to his MIT undergrad days. I'd like to take a wild stab at this and say that short term exponential increases in anything oscillate around a longer term trend that is nowhere near exponential. Which trend is more indicative of the true advance toward the singularity? Besides, in this economic/financial climate, we may be lucky to be able to use pencils and paper for computing by 2020, let alone some yet-to-be-built machine a billion times faster than my current PC.
Kurzweil's extrapolations of current trends may be wrong.
The trend may suddenly slow or stop.
On the other hand, there is an equal theoretical and perhaps practical chance that any change will be an acceleration.
All predictions are likely to be wrong but extrapolation of trend is less likely to be spectacularly wrong than any other assumption.
Amazingly, this whole argument is based on an issue that possibly, and probably, will prove irrelevant to the emergence (or not) of a Technological Singularity.
See:
"My Logic is Better than Paul Allen's"
http://goo.gl/lWock
I have been searching for some serious counter arguments to Kurzweil's LOAR for a few weeks now. For the arguments presented above, I honestly don't see anything there that really challenges it. It's always some version of "the problems are too hard or too complex, therefore it can never be understood, never be reverse engineered and never constructed". To compound this, the only participant in the debate attempting to quantify the complexity of the human brain in some way is Kurzweil! Look, Kurzweil has developed a quantitative logical argument here by identifying a highly regular, long-standing trend. You can't refute something like that with subjective hand waving while exclaiming "it's just too good too soon (or too awful) to be true".
"Kurzweil critics", if you want to further the discussion, you need to provide a criticism along one of these two avenues: 1 - Either demonstrate that Kurzweil has incorrectly assessed the complexity of the human brain (all of this information, by the way, is compressed within the DNA to be smaller than a typical 5 minute MP3). This would then push back the forecast dates along the established LOAR trajectory - OR - 2 - Show that the trend identified by Kurzweil as the LOAR will necessarily cease before we arrive at human intelligence (or some combination of these two). For the second case, you'd require an argument for why integrated circuits are the best computers humans will be able to build prior to 2030. When all you have is an inductive argument of a few exponential growth examples where seemingly impossible things have happened in the past (internet, cell network, human genome... he's got lots of them) - it's still the most rational course of action to expect a clearly identified trend to continue than not.
Counter arguments that point to the complexity and difficulty of the task, or the apparent ineffectiveness of our current capabilities, without actually addressing the exponential nature of the progress as argued by Kurzweil do not advance the discussion. All you are really doing is arguing that the linear view is correct without providing a valid reason for your claim. This is why Kurzweil is forced to resort to mashing the "exponential" soundboard key over and over again. Just get over it people - seriously. Look - space-time is relative to one's inertial and gravitational curvature, you can't measure the exact position of tiny things while also knowing their momentum, there really is quantum spooky action / entanglement at a distance - it's just the way things are! After all that, you'd think that accepting computer technology to be advancing exponentially wouldn't be so hard to swallow. Can we all please just agree to accept this trend or provide valid premises to reject it and actually move forward with the discussion!? The capability of the human brain (however much we actually use) lies at some point along this curve. So, unless you can refute the trend, or the location of that point, then you are logically required to accept the argument (within appropriate error margins based on the variance of the data from the trend identified).