Intelligence is Not an Algorithm
Anyone who has programmed computers understands the central role of algorithms in modern computing. An algorithm is a sequence of steps (often involving iteration and/or recursion) for converting an input into a useful output. Coming up with just the right algorithm to do a job often requires a high level of intelligence. Intelligence --> good algorithm. Unfortunately, it does not work in reverse:

Algorithms involve several forms of abstraction. First, an algorithm consists of clear specifications for what should be performed in each step, but not necessarily clear specifications for how. In essence, an algorithm takes a problem specifying what should be achieved and breaks it into smaller problems with simpler requirements for what should be achieved. An algorithm should specify steps simple enough that what becomes identical to how as far as the person or machine executing the algorithm is concerned. _New Atlantis_
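To make the "what becomes identical to how" point concrete, here is a minimal sketch (my own example, not from the article) using Euclid's algorithm for the greatest common divisor. Each step's specification is so simple that, for the machine executing it, knowing *what* to do is the same as knowing *how* to do it:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a definite start state (a, b),
    a definite end state (b == 0), and a well-defined rule
    for transitioning between them."""
    while b != 0:
        # Replace the pair (a, b) with (b, a mod b):
        # a step simple enough that 'what' and 'how' coincide.
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```

Devising this algorithm in the first place took real mathematical insight; executing it requires none, which is the asymmetry the article turns on.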
Properly understood, the first question underlying the AI debate is: Can the properties of the mind be completely described on their own terms as an algorithm? Recall that an algorithm has a definite start and end state and consists of a set of well-defined rules for transitioning from start state to end state. As we have already seen, it was the explicit early claim of AI proponents that the answer to this question was yes: the properties of the mind, they believed, could be expressed algorithmically (or "procedurally," to use a more general term). But the AI project has thus far failed to prove this answer, and AI researchers seem to have understood this failure without acknowledging it.

Be sure to read the rest at the link above. I have been interested in artificial intelligence since the early 1990s, and was initially quite optimistic about the prospects for the creation of an intelligent machine brain. Over the years, as I learned more about both machines and the brain, I have modified my expectations quite significantly.
...Once the unlikelihood of procedurally describing the mind at a high level is accepted, the issue becomes whether the mind can be replicated at some lower level in order to recreate the high level, raising the next important question: Are the layers of physical systems, and thus the layers of the mind and brain, separable in the same way as the layers of the computer?...it is correct to explain computers in terms of separable layers, since that is how they are designed. Physical systems, on the other hand, are not designed at all. They exist prior to human intent; we separate them into layers as a method of understanding their behavior. Psychology and physics, for example, can each be used to answer a distinct set of questions about a single physical system—the brain. We rely on hierarchies to explain physical systems, but we actually engineer hierarchies into computers.
...What is the basic functional unit of the mind? If the mind were a computer, it would be possible to completely describe its behavior as a procedure. That procedure would have to use certain basic operations, which are executed by some functional unit of the brain. The early hypothesis of AI was that this question was essentially irrelevant since the mind’s operations could be described at such a high level that the underlying hardware was inconsequential. Researchers today eschew such a large-scale approach, instead working under the assumption that the mind, like a computer program, might be a collection of modules, and so we can replicate the modules and eventually piece them back together—which is why research projects today focus on very specific subsystems of intelligence and learning.
...The unit of the mind typically targeted for replication is the neuron, and the assumption has thus been that the neuron is the basic functional unit of the mind. The neuron has been considered a prime candidate because it appears to have much in common with modules of computer systems: It has an electrical signal, and it has input and output for that signal in the form of dendrites and axons. It seems like it would be relatively straightforward, then, to replicate the neuron’s input-output function on a computer, scan the electrical signals of a person’s brain, and boot up that person’s mind on a suitably powerful computer.
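The "input-output function" the article refers to is, in the standard textbook abstraction, a weighted sum of incoming signals squashed into an output signal. A minimal sketch (my own illustration of that abstraction, not code from the article) shows why the neuron looks so temptingly module-like:

```python
import math

def neuron(inputs, weights, bias):
    """A textbook artificial neuron: weighted 'dendritic' inputs
    are summed and squashed into a single 'axonal' output.
    This is the simplified model replication efforts assume;
    biological neurons are vastly richer than this."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # logistic squashing

# Two input signals, two synaptic weights, one bias term.
out = neuron([0.5, 0.9], [1.2, -0.4], 0.1)
```

The very tidiness of this model is the article's point of attack: the assumption that such a clean input-output unit captures what neurons contribute to the mind is exactly what is in question.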
...Every indication is that, rather than a neatly separable hierarchy like a computer, the mind is a tangled hierarchy of organization and causation. Changes in the mind cause changes in the brain, and vice versa. To successfully replicate the brain in order to simulate the mind, it will be necessary to replicate every level of the brain that affects and is affected by the mind.
...The fact that the mind is a machine just as much as anything else in the universe is a machine tells us nothing interesting about the mind. If the strong AI project is to be redefined as the task of duplicating the mind at a very low level, it may indeed prove possible—but the result will be something far short of the original goal of AI.
If we achieve artificial intelligence without really understanding anything about intelligence itself—without separating it into layers, decomposing it into modules and subsystems—then we will have no idea how to control it. We will not be able to shape it, improve upon it, or apply it to anything useful. Without having understood and replicated specific mental properties on their own terms, we will not be able to abstract them away—or, as the transhumanists hope, to digitize abilities and experiences, and thus upload and download them in order to transfer our consciousnesses into virtual worlds and enter into the minds of others. _New Atlantis_, via _SimoleonSense_
I have become far more interested in questions of human cognition -- and how cognition in humans might be improved -- than in the development of intelligent machines. Questions having to do with the difference between rationality and intelligence, or understanding what makes super-intelligent people tick, are far more engaging to me than hearing about which robot was best able to imitate human facial expression or body movement.
Needless to say, every aspect of human intelligence, from the genetic, to the molecular, to the neuronal, to the cortical columnar level and up, will come under scrutiny repeatedly in the quest for a fuller understanding of how the mind works -- and how it might be made to work better.
You can be sure that politicians and their enablers want to know how to pull your strings. Some of us, on the other hand, would rather everyone learn to pull his own strings -- and what it means to do so.
Intelligence is not an algorithm, but good algorithms will prove extremely helpful in the quest for intelligence -- and everything complex that we attempt. It is likely that most of us will eventually carry algorithmic co-processors that function as cognitive aids and augments. We will probably find it difficult to function at our highest levels without them.