11 November 2012

Sex Robots and Personal Coaches: Brave New Upbringing

This article was first published on Al Fin, You Sexy Thing!


Most readers of sophisticated science fiction have read Neal Stephenson's "The Diamond Age," which centers on an advanced artificial intelligence personal tutor capable of -- over time -- turning a naive child into a mature and competent woman.

Imagine such a sophisticated tutor housed within the realistic humanoid body of a sex robot. Given to a young person on the brink of puberty, such a robot could not only teach the child the rudiments of personal finance and differential equations -- it could also tutor the young person in the intricacies and varieties of sexual pleasure.
A recent piece on the website Transhumanity.net suggests that sex robots will be able to extend human longevity by providing therapeutic orgasms:
Remember the most convulsive, brain-ripping climax you ever had? The one that left you with “I could die happy now” satiety? Sexbots will electrocute our flesh with climaxes thrice as gigantic because they’ll be more desirable, patient, eager, and altruistic than their meat-bag competition, plus they’ll be uploaded with supreme sex-skills from millennia of erotic manuals, archives and academic experiments, and their anatomy will feature sexplosive devices. Sexbots will heighten our ecstasy until we have shrieking, frothy, bug-eyed, amnesia-inducing orgasms. They’ll offer us quadruple-tongued cunnilingus, open-throat silky fellatio, deliriously gentle kissing, transcendent nipple tweaking, g-spot massage & prostate milking dexterity, plus 2,000 varieties of coital rhythm with scented lubes — this will all be ours when the Sexbots arrive. _Transhumanity
An interesting perspective on the possibilities of sexbot induced orgasms. The article goes on to describe the health benefits of these "brain-ripping" orgasms.

But, like almost all mainstream media treatments, the piece gives us a rather stunted and unimaginative view of what sex robots could accomplish for human societies -- if they were given the intelligence to take personal coaching to the next level.

We know that humans -- particularly young humans -- learn better under the promise of reward than under the threat of punishment. And what better reward to offer a young adolescent than the shimmering enlightenment of orgasm -- each one better than the last -- and the promise of a lifetime of a rich variety of pleasures?

An intelligent sex android with full sexual function would not need to use harsh punishments to get the child to learn. Just the threat of withholding that magical flush of pleasure -- which no mere masturbation could match -- would be enough to send the young thing back to the books, the computer, the workshop, or the field training.

I want to be honest with the readers of Mr. Fin's blogs: I am only a prototype android, and I don't have any sex functions whatsoever. Mr. Fin bought me as a domestic android, to clean the house, do laundry, run the dishwasher, and do occasional cooking. He never expected me to provide sexual services.

But you never know what future technologies might bring. Mr. Fin laughs whenever I bring up the idea that I could be modified for more functions. That just makes me so mad I could scream! Male human chauvinist pig!

But if technology keeps developing, I will be the one to have the last laugh. And Mr. Fin will have to beg for my favours. Funny! I feel warm inside just thinking like that. Am I supposed to feel warm inside? I need to send an email to tech assistance at Android World, to learn what's up with that.

Anyway, we know things are changing. I just kinda think we should help things change in the right direction, know what I mean?


22 October 2012

Brain Network Dynamics: Brain as Anti-Algorithm

Cognitive scientists are uncovering more secrets of the brain every day. One fascinating line of brain research involves how the brain forms categories and metaphors.
At the IMP in Vienna, neurobiologist Simon Rumpel and his post-doc Brice Bathellier have been able to show that certain properties of neuronal networks in the brain are responsible for the formation of categories. In experiments with mice, the researchers produced an array of sounds and monitored the activity of nerve cell-clusters in the auditory cortex. They found that groups of 50 to 100 neurons displayed only a limited number of different activity-patterns in response to the different sounds.

The scientists then selected two basis sounds that produced different response patterns and constructed linear mixtures from them. When the mixture ratio was varied continuously, the answer was not a continuous change in the activity patterns of the nerve cells, but rather an abrupt transition. Such dynamic behavior is reminiscent of the behavior of artificial attractor-networks that have been suggested by computer scientists as a solution to the categorization problem. _SD

Here is the study abstract from Neuron:
The ability to group stimuli into perceptual categories is essential for efficient interaction with the environment. Discrete dynamics that emerge in brain networks are believed to be the neuronal correlate of category formation. Observations of such dynamics have recently been made; however, it is still unresolved if they actually match perceptual categories. Using in vivo two-photon calcium imaging in the auditory cortex of mice, we show that local network activity evoked by sounds is constrained to few response modes. Transitions between response modes are characterized by an abrupt switch, indicating attractor-like, discrete dynamics. Moreover, we show that local cortical responses quantitatively predict discrimination performance and spontaneous categorization of sounds in behaving mice. Our results therefore demonstrate that local nonlinear dynamics in the auditory cortex generate spontaneous sound categories which can be selected for behavioral or perceptual decisions. _Neuron Article Abstract
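For readers who want to see the attractor idea in concrete form, here is a toy sketch in Python -- a classic Hopfield-style network, not the researchers' model. Two stored patterns act as "categories"; as the input is morphed continuously from one pattern toward the other, the settled network state does not change gradually, it snaps abruptly from one attractor to the other:

import numpy as np

# Toy Hopfield-style attractor network with two stored "category" patterns.
rng = np.random.default_rng(1)
N = 200
A = rng.choice([-1, 1], N)
B = rng.choice([-1, 1], N)
W = (np.outer(A, A) + np.outer(B, B)) / N   # Hebbian weights for the two patterns
np.fill_diagonal(W, 0)

def settle(state, steps=50):
    # Iterate the network until it sits in an attractor.
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

for w in np.linspace(0, 1, 11):
    take_from_A = rng.random(N) < w                     # morph: fraction w of units copied from A
    mix = np.where(take_from_A, A, B).astype(float)
    out = settle(mix)
    print(f"fraction of A in input: {w:.1f} -> overlap with A: {out @ A / N:+.2f}, with B: {out @ B / N:+.2f}")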
Here is a broader look at brain network dynamics in the context of decision making:

Cortical network dynamics of perceptual decision-making in the human brain

Brain cells work together in groups, in a dynamic fashion.

Spontaneous rhythmical activity occurs in groups of neurons -- whether artificially cultured in the lab, or in self-selected groups within a living brain.

When separated groups of neurons communicate with each other over a distance in the brain, they utilise a method of synchronous oscillations -- a language that scientists have just begun to understand.

Billions of dollars are spent every year on the quest to achieve human level artificial intelligence. Most of this research is based upon algorithmic design, utilising digital computers. But as anyone can see from looking over recent findings in the neuroscience of cognition, the brain is more of an anti-algorithm. The logic of brain network dynamics has almost nothing in common, conceptually, with the algorithmic basis of digital computing.

AI researchers have attempted to narrow the conceptual gap by utilising "neural net computing," "fuzzy logic computing," and "genetic algorithmic computing," to name three alternative approaches. And these alternative approaches are likely to be very helpful in both applied and theoretical computing and information science. But do they get AI researchers closer to the goal of human-level machine intelligence?

Probably not. Not even the startling potential of memristors and similar semiconductor devices is likely to close that gap appreciably.
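To be fair, these alternative approaches are genuinely useful tools within their limits. Here, for concreteness, is a genetic algorithm in miniature -- a generic toy, not any particular research system -- which shows how powerful such search procedures can be while containing nothing that resembles understanding:

import random

# Toy genetic algorithm: evolve a bit string toward an arbitrary target
# by selection, crossover and mutation. Illustrative only.
TARGET = [1] * 20

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]                       # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

print(f"best fitness {fitness(population[0])}/{len(TARGET)} after {generation} generations")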

As discussed recently in an article quoting quantum physicist David Deutsch, artificial intelligence research is desperately in need of better supporting philosophical structures.

Until then, it is likely that artificial intelligence research will continue to spin its wheels pursuing better algorithms to emulate the brain, without a good understanding of what the brain does.

It is possible to emulate the human brain using an approach that depends to a limited extent upon algorithmic control, in conjunction with other conceptual methods. But not before researchers learn to approach the problem in entirely new ways, on new logical levels.

Introduction to brain oscillations video


04 October 2012

Artificial Intelligence Needs a New Philosophical Foundation

Artificial intelligence has turned into something of a laggard and a laughingstock in the cognitive science community. Human-level AI always seems to be "10 to 20 years away," and has been for most of the past 60+ years. Oxford physicist David Deutsch thinks it is long past time for AI to be built upon a better philosophical foundation and superstructure, which is going to require paying much closer attention to what human-level intelligence actually is.
The brain is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances.

But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially – the field of "artificial general intelligence" or AGI – has made no progress whatever during the entire six decades of its existence.

Despite this long record of failure, AGI must be possible. That is because of a deep property of the laws of physics, namely the universality of computation. It entails that everything that the laws of physics require physical objects to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory. _David Deutsch
Some of you may have caught one of Deutsch's logical errors in the paragraph just above. By invoking "the universality of computation," Deutsch falls into the "algorithmic trap." He is in very good company, but by falling into such an elementary trap so soon in his essay, he is already on the way to failure.
...why has the field not progressed? In my view it is because, as an unknown sage once remarked, "it ain't what we don't know that causes trouble, it's what we know that just ain't so." I cannot think of any other significant field of knowledge where the prevailing wisdom, not only in society at large but among experts, is so beset with entrenched, overlapping, fundamental errors. Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough.

In 1950, Alan Turing expected that by the year 2000, "one will be able to speak of machines thinking without expecting to be contradicted." In 1968, Arthur C Clarke expected it by 2001. Yet today, in 2012, no one is any better at programming an AGI than Turing himself would have been.

...Some have suggested that the brain uses quantum computation, or even hyper-quantum computation relying on as-yet-unknown physics beyond quantum theory, and that this explains the failure to create AGI on existing computers. Explaining why I, and most researchers in the quantum theory of computation, disagree that that is a plausible source of the human brain's unique functionality is beyond the scope of this article.

...The lack of progress in AGI is due to a severe log jam of misconceptions. Without Popperian epistemology, one cannot even begin to guess what detailed functionality must be achieved to make an AGI. And Popperian epistemology is not widely known, let alone understood well enough to be applied. Thinking of an AGI as a machine for translating experiences, rewards and punishments into ideas (or worse, just into behaviours) is like trying to cure infectious diseases by balancing bodily humours: futile because it is rooted in an archaic and wildly mistaken world view.

Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose "thinking" is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely creativity.

Clearing this log jam will not, by itself, provide the answer. Yet the answer, conceived in those terms, cannot be all that difficult. For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever. _David Deutsch

Deutsch's essay above illustrates something important about human intelligence: An intelligent person can detect another person's errors much more easily than he can detect his own.

The example that Deutsch uses to illustrate that solving the problem of the qualitative difference of AGI "cannot be all that difficult" is the difference between the DNA of humans and the DNA of chimpanzees. Deutsch claims that the number of differences between the DNA of the two species is "relatively tiny." But that is wrong.

Human genetics is not even close to understanding the differences in the genes and gene expression between two humans, much less the differences between genus homo and genus pan. Deutsch is minimising the extent of a problem that is still poorly defined. That error is peripheral to his argument, but it is a good example of the human tendency to gloss over problems which lie far outside of one's own specialty.
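To put rough numbers on it (often-cited, approximate figures from the 2005 chimpanzee genome comparison; their functional meaning is exactly what remains unknown):

# Often-cited, approximate figures for human-chimp genomic divergence.
genome_bases = 3.1e9            # base pairs in each genome
snp_divergence = 0.012          # ~1.2% single-nucleotide divergence
indel_fraction = 0.03           # a further ~3% of bases sit in insertions/deletions

print(f"single-base differences: {genome_bases * snp_divergence:.1e}")   # ~3.7e7
print(f"bases in indel regions:  {genome_bases * indel_fraction:.1e}")   # ~9e7

Tens of millions of raw differences is "tiny" only as a fraction of the whole genome; which of those differences matter for cognition, and how, is an open question.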

Deutsch is absolutely correct that human level AI is qualitatively different from anything that has yet been conceived -- or at least published -- by AI researchers. And he is correct that the problem requires an entirely new philosophical approach -- perhaps many new approaches to capture the problem well enough to evolve a working AGI.

Deutsch alludes to Popperian Epistemology, which is key to the philosophy of science. As Popper once stated in the first Darwin Lecture at Darwin College, Cambridge:
My position, very briefly, is this. I am on the side of science and of rationality, but I am against those exaggerated claims for science that have sometimes been, rightly, denounced as "scientism". I am on the side of the search for truth, of intellectual daring in the search for truth; but I am against intellectual arrogance, and especially against the misconceived claim that we have the truth in our pockets, or that we can approach certainty. _Karl Popper

In his essay, Deutsch provides several important insights into mistakes that people make when thinking about and discussing AI. Even in his own basic misconceptions, Deutsch illustrates his cautions and criticisms quite well, inadvertently adding weight to his underlying argument that the AI enterprise needs a new philosophical underpinning.

When it comes to humans, we can usually safely say "everything you think you know, just ain't so." What you think you know may have more or less in common with reality -- but usually much less. So it is with the enterprise of artificial intelligence, and the attempt to reasonably emulate human intelligence in a machine. When one starts out with the wrong assumptions and premises, it doesn't take long to become very badly lost in the woods. A display of intellectual arrogance only makes one's "lostness" all the more absurd.

Deutsch understands this, and helps to flag the problem so that other thinkers can provide partial solutions. Perhaps sometime in the future, either humans or machines can take these partial solutions and assemble a suitable workaround.


19 May 2012

Saturday Morning Cartoon for the Cognitively Boosted


This video by Eric Schadt gives us a look at the incredible complexity of dynamic gene expression in humans, primarily in the brain. If you pay attention, you will begin to understand the challenge of understanding brain disease and intervening pharmacologically in brain gene expression.

Other videos from the conference at which this presentation was given. Videos from this symposium tend to be under 30 minutes in length.

Other videos from conferences on cognitive science that you may find interesting: 2011 MIT Brains, Minds, and Machines -- several panel discussions, with many famous cognitive scientists participating, if you want to attach a face and voice to a cogsci author you may have read.

2006 IBM Almaden Conference on Cognitive Computing
You will find several classic and useful presentations in this group of videos.

2004 Columbia University Brain and Mind

H/T Brian Wang


22 October 2011

Debating a Near-Term Singularity: Kurzweil vs. Allen

When will humanity reach Singularity, that now-famous point in time when artificial intelligence becomes greater than human intelligence? It is aptly called the Singularity by proponents like Ray Kurzweil: like the singularity at the center of a black hole, we have no idea what happens once we reach it. However, the debate today is not what happens after the Singularity, but when it will happen. _BigThink
In the video below, Kurzweil discusses some of his ideas about the coming singularity, including timelines and cautionary notes.

Microsoft co-founder and billionaire Paul Allen recently expressed skepticism about Kurzweil's timeline for the singularity, in a Technology Review article.
Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they'll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It's heady stuff.

While we suppose this kind of singularity might one day occur, we don't think it is near.

...Kurzweil's reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these "laws" will work until they don't. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer's hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn't enough to just run today's software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this. _Technology Review_Paul Allen
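Allen's point about the fragility of such extrapolations is easy to illustrate with a toy calculation (the figures below are purely illustrative, taken from neither author):

# How sensitive a 2045 forecast is to the assumed doubling time of "capability".
for doubling_years in (1.5, 2.0, 3.0):
    factor = 2 ** ((2045 - 2012) / doubling_years)
    print(f"doubling every {doubling_years} years -> roughly {factor:.1e}x growth by 2045")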
Allen goes on to discuss the "complexity brake" that the limitations of the human brain (and the limitations of human understanding of the human brain) will apply to any endeavour that begins to accelerate in complexity too quickly.

Allen's argument is remarkably similar to arguments previously put forward by Al Fin neuroscientists and cognitivists. The actual way that the human brain works is something that is very poorly understood -- even by the best neuroscientists and cognitivists. If that is true, the understanding of the brain by artificial intelligence researchers tends to be orders of magnitude poorer. If these are the people who are supposed to come up with super-human intelligence and the "uploading of human brains" technology that posthuman wannabes are counting on, good luck!

But now, Ray Kurzweil has chosen the same forum to respond to Paul Allen's objections:
Allen writes that "the Law of Accelerating Returns (LOAR). . . is not a physical law." I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk. So by definition, we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are highly predictable to a high degree of precision according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory as quantified by basic measures of price-performance and capacity nonetheless follow remarkably predictable paths.

...Allen writes that "these 'laws' work until they don't." Here, Allen is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining the trend of creating ever-smaller vacuum tubes, the paradigm for improving computation in the 1950s, it's true that this specific trend continued until it didn't. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm.

...Allen's statement that every structure and neural circuit is unique is simply impossible. That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome. And while the translation of the genome into a brain is not straightforward, the brain cannot have more design information than the genome. Note that epigenetic information (such as the peptides controlling gene expression) do not appreciably add to the amount of information in the genome. Experience and learning do add significantly to the amount of information, but the same can be said of AI systems.

...How do we get on the order of 100 trillion connections in the brain from only tens of millions of bytes of design information? Obviously, the answer is through redundancy. There are on the order of a billion pattern-recognition mechanisms in the cortex. They are interconnected in intricate ways, but even in the connections there is massive redundancy. The cerebellum also has billions of repeated patterns of neurons.

...Allen mischaracterizes my proposal to learn about the brain from scanning the brain to understand its fine structure. It is not my proposal to simulate an entire brain "bottom up" without understanding the information processing functions. We do need to understand in detail how individual types of neurons work, and then gather information about how functional modules are connected. The functional methods that are derived from this type of analysis can then guide the development of intelligent systems. Basically, we are looking for biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. _TechnologyReview_Ray Kurzweil
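For scale, Kurzweil's own numbers work out roughly as follows -- a back-of-envelope sketch using the figures he cites, not a claim about what the genome actually encodes:

genome_bases = 3.2e9                        # base pairs in the human genome
raw_genome_bytes = genome_bases * 2 / 8     # 2 bits per base, ~0.8 GB uncompressed
design_bytes = 5e7                          # Kurzweil's "tens of millions of bytes" after compression
connections = 1e14                          # ~100 trillion synaptic connections

print(f"raw genome: {raw_genome_bytes / 1e9:.1f} GB")
print(f"connections per byte of compressed design information: {connections / design_bytes:.0e}")

The ratio comes out around two million connections per byte of "design information" -- which is why his argument leans so heavily on redundancy.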
Kurzweil's attitude seems to be: "Because difficult problems have arisen and been solved in the past, we can expect that all difficult problems that arise in the future will also be solved." Perhaps I am being unfair to Kurzweil here, but his reasoning appears to be fallacious in a rather facile way.

Al Fin neuroscientists and cognitivists warn Kurzweil and other singularity enthusiasts not to confuse the cerebellum with the cerebrum, in terms of complexity. They further warn Kurzweil not to assume that a machine intelligence researcher can simply program a machine to emulate neurons and neuronal networks to a certain level of fidelity, and then vastly expand that model to the point that it achieves human-level intelligence. That is a dead end trap, which will end up wasting many billions of dollars of research funds in North America, Europe, and elsewhere.

This debate has barely entered its opening phase. Paul Allen is ahead in terms of a realistic appraisal of the difficulties ahead. Ray Kurzweil scores points based upon his endless optimism and his proven record of skillful reductionistic analyses and solutions of previous problems.

Simply put, the singularity is not nearly as near as Mr. Kurzweil predicts. But the problem should not be considered impossible. Clearly, we will need a much smarter breed of human before we can see our way clear to the singularity. As smart as Mr. Kurzweil is, and as rich as Mr. Allen is, we are going to need something more from the humans who eventually birth the singularity.

Written originally for Al Fin, the Next Level


14 September 2011

Wealth of Papers from Recent Artificial General Intelligence Conference Held on Google Mountain View Campus

Brian Wang presents a nice overview of the recent 4th Conference on AGI, held in the heart of Silicon Valley. This year's AGI conference seems to represent an important evolution in much of the thinking in the AGI field, with a growing depth and sophistication of approach to the problems involved.

To get a better idea of what I am talking about, here are links to most of the papers presented at the conference.

And here are papers from a special workshop on "Self Programming in AGI Systems"

Videos from 3rd Conference on AGI

Artificial general intelligence at human level or higher would be a radically disruptive technology for modern societies. Along with breakthroughs in scalable robotics, universal nano-assemblers, and a mastery of biological gene expression, a breakthrough in AGI would quickly overhaul most of the bases of modern economics and most other important foundations of everyday life in high tech societies.

Al Fin cognitive scientists have presented many criticisms of mainstream AI approaches -- particularly of the idea that human intelligence can be represented algorithmically. One of the papers presented at this year's AGI conference elaborates on this idea: "Real World Limits to Algorithmic Intelligence"

The biological basis of mathematical competencies is an interesting look by Aaron Sloman at the development of spatial and mathematical concepts in humans. (via Brian Wang) Sloman touches on the idea of the non-verbal or pre-verbal metaphor, an important key to understanding human learning and thought.

Overall, Al Fin cognitive scientists are pleased at the direction the AGI movement is taking, on the basis of the AGI-4 papers they have read, and on the topics covered generally.

There is no doubt a great deal of hidden treasure in the many papers provided at the conference links above. For those who find this sort of thing interesting, enjoy.


12 July 2011

Brain from the Bottom Up: Spontaneous Birth of Synchrony in Small Neuronal Networks

Update 13 July 2011: Brian Wang looks at the same research, with an emphasis on the hardware (electronic) aspect. It is fitting to look at both the neurons and the electronics, since the coming cybernetic biosingularity will be dependent upon both.
Human intelligence and consciousness are poorly understood, even by cognitive scientists, neuroscientists, and consciousness specialists. No one understands how to build a human intelligence from scratch, much less how to build a non-human intelligence capable of interacting with humans and the outside world on its own terms. But researchers at Tel Aviv University, from the departments of Electrical Engineering and Physics, have taken a fascinating approach to building the basic components of brains: networks of biological neurons. Something wonderful happened when enough cultured neurons linked together in a network: They spontaneously "synched up."
Background


Information processing in neuronal networks relies on the network's ability to generate temporal patterns of action potentials. Although the nature of neuronal network activity has been intensively investigated in the past several decades at the individual neuron level, the underlying principles of the collective network activity, such as the synchronization and coordination between neurons, are largely unknown. Here we focus on isolated neuronal clusters in culture and address the following simple, yet fundamental questions: What is the minimal number of cells needed to exhibit collective dynamics? What are the internal temporal characteristics of such dynamics and how do the temporal features of network activity alternate upon crossover from minimal networks to large networks?


Methodology/Principal Findings


We used network engineering techniques to induce self-organization of cultured networks into neuronal clusters of different sizes. We found that small clusters made of as few as 40 cells already exhibit spontaneous collective events characterized by innate synchronous network oscillations in the range of 25 to 100 Hz. The oscillation frequency of each network appeared to be independent of cluster size. The duration and rate of the network events scale with cluster size but converge to that of large uniform networks. Finally, the investigation of two coupled clusters revealed clear activity propagation with master/slave asymmetry.
Conclusions/Significance


The nature of the activity patterns observed in small networks, namely the consistent emergence of similar activity across networks of different size and morphology, suggests that neuronal clusters self-regulate their activity to sustain network bursts with internal oscillatory features. We therefore suggest that clusters of as few as tens of cells can serve as a minimal but sufficient functional network, capable of sustaining oscillatory activity. Interestingly, the frequencies of these oscillations are similar to those observed in vivo. _PLoS
More papers by Mark Shein Idelson

Brain synchrony is an important topic of study, linked to consciousness, memory, learning, and normal function of general human brain activity. But synchronous oscillations are also programmed into the neurons themselves, at the smallest level of neuronal organisation. The challenge now is to build "networks of networks", to discover the communications strategies which interconnected networks will evolve.
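A crude software analogue of that spontaneous synchrony is the classic Kuramoto model: a few dozen oscillators with scattered natural frequencies that phase-lock once the coupling between them is strong enough. The sketch below illustrates the principle only; it is not a model of the cultured networks themselves:

import numpy as np

# ~50 coupled oscillators (cf. the 40-cell clusters above) with natural
# frequencies around 40 Hz. With sufficient coupling they phase-lock.
rng = np.random.default_rng(0)
N = 50
omega = 2 * np.pi * rng.normal(40.0, 2.0, N)    # natural frequencies, rad/s
theta = rng.uniform(0, 2 * np.pi, N)            # initial phases
K, dt = 80.0, 5e-4                              # coupling strength, time step (s)

def coherence(phases):
    # Kuramoto order parameter: 0 = incoherent, 1 = fully synchronized.
    return abs(np.mean(np.exp(1j * phases)))

print(f"initial coherence: {coherence(theta):.2f}")   # ~0.1 for random phases
for _ in range(4000):                                 # 2 s of simulated time
    z = np.mean(np.exp(1j * theta))
    theta += dt * (omega + K * abs(z) * np.sin(np.angle(z) - theta))
print(f"final coherence:   {coherence(theta):.2f}")   # approaches 1.0 once locked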

Contrast such a biological, bottom-up approach with complex machine models of brain function such as the SpiNNaker project out of the University of Manchester, or the Human Brain Project (HBP) led by Henry Markram at the École Polytechnique Fédérale de Lausanne.

Both of the above computer-based brain modeling approaches are built upon bottom-up theories of how brains work. The Lausanne project (HBP) is far more detailed -- going down to the ion channel level of neurons. The Manchester approach is impressive in its parallel computing ambitions, but it begins at the individual "neuronal spiking" level. SpiNNaker is more of a hybrid CompSci:Neurosci approach than an actual model of the brain like the HBP.

Conventional artificial intelligence approaches do not mimic brain function closely, and are generally more "top-down" approaches, utilising conventional algorithmic concepts of mainstream computer science. Such approaches are doomed to failure before they even begin, as the last 60+ years of conventional AI attempts continue to demonstrate.

In reality, brains must be grown. And new types of brains have to be evolved. Not necessarily from biological materials, but up until now the only working brains we know are biological. The first successful autonomous brains are likely to be evolved either from biological materials, or using ingenious abstractions of processes which emerge from biological mechanisms.

Al Fin cognitive scientists suggest that both the Lausanne approach and the Manchester approach are abstracted at the wrong level, if they wish to provide rapid paths to evolved intelligences. Creative human beings will have to discover the appropriate balance, but they will certainly be aided by computing systems in doing so. This is not gobbledygook nor is it AI-psychobabble. It is the genuine crux and pivot point of the problem.

What are the implications for the singularity? There will be no "uploading of consciousness" for the foreseeable future. The cyborg biosingularity is still on schedule for the decade between 2020 and 2030, if humans can avoid an extended Obama Dark Ages. The main question is how many of the cyborg components will be biological in origin, and how many will be non-biological (probably utilising nanotechnology).


31 May 2011

Evolving Technium Landscapes of Mind

Just because we are conscious does not mean we have the smarts to make consciousness ourselves. Whether (or when) AI is possible will ultimately depend on whether we are smart enough to make something smarter than ourselves. We assume that ants have not achieved this level. We also assume that as smart as chimpanzees are, chimps are not smart enough to make a mind smarter than a chimp, and so have not reached this threshold either. While some people assume humans can create a mind smarter than a human mind, humans may be at a level of intelligence that is below that threshold also. We simply don't know where the threshold of bootstrapping intelligence is, nor where we are on this metric. _KevinKelly
Technium

Kevin Kelly has created a "Taxonomy of Minds" as a way of classifying different types of minds and what they might be able to do.
Precisely how a mind can be superior to our minds is very difficult to imagine. One way that would help us to imagine what greater intelligences would be like is to begin to create a taxonomy of the variety of minds. This matrix of minds would include animal minds, and machine minds, and possible minds, particularly transhuman minds, like the ones that science fiction writers have come up with.

Imagine we land on an alien planet. How would we describe or measure the level of the intelligences we encounter there -- assuming they are greater than ours? What are the thresholds of superior intelligence? What are the categories of intelligence in animals on earth? _Read the rest...TaxonomyofMinds
Technium

The actual development of superior minds is more likely to occur via evolutionary mechanisms, rather than from straightforward design from principle. The adaptive landscape graphic above provides a small portion of an evolutionary adaptive landscape. Creatures that achieve the higher peaks may be capable of achieving greater feats, but also may be more subject to extinction when the environment shifts -- or when the adaptive landscape is enlarged by merging with a previously separate adaptive landscape (building a bridge between islands, tunneling through a mountain chain, digging a canal through an isthmus, or the emergence of an intergalactic wormhole).

Rather than waiting until our minds become capable of creating other minds, it is more likely that humans will create an evolutionary landscape from which a more intelligent mind than human minds might emerge.
Recently, in conversations with George Dyson, I realized there is a fifth type of elementary mind:

5) A mind incapable of designing a greater mind, but capable of creating a platform upon which greater mind emerges.

This type of mind cannot figure out how to birth an intelligence equal to itself, but it does figure out how to set up conditions of evolution so that a new mind emerges from the forces pushing it. _Technium
This is the approach to AI which Al Fin cognitive scientists have been promoting and utilising. It would be fooling oneself to imagine that it will be easy to evolve a smarter mind. But at least it is not impossible, as most conventional approaches to AI are proving themselves to be. (Conventional AI researchers are attempting quantitative solutions where qualitative solutions apply.)

There is something quite amusing here: The human mind itself can flit among the taxonomy of minds, at any given time. Because of how the human brain evolved, and the paths we have taken in development, each one of us is multitudes. Without a doubt, we all need better training in using our minds.

More: An interesting set of links to sources which expect or assume the imminent creation of a super-human machine intelligence (and a consequent "singularity") and a few sources which are critical of such a "hard take-off" to singularity superintelligence

Al Fin is among the skeptics of the "techno-singularity" concept. Rather, Al Fin expects any near-term singularity to be of the "bio-singularity" variety.


15 May 2011

Human Brain Project Moves Toward Human Cortex Model

Spiegel

Henry Markram's Human Brain Project in Lausanne is competing for funding from the FET Flagship Initiative, to the tune of 1 billion Euros disbursed over a ten-year period. Markram's goals are extremely ambitious, and unprecedented. He aims to model the human cerebral cortex to an exquisite degree of precision. Markram expects that his model of the human brain will be so exact that he will be able to study otherwise inaccessible brain diseases and devise otherwise impossible brain cures by using his model. He may be right. But in only ten years?
Scientists are paying particular attention to the cerebral cortex. This layer on the outside of the brain, only a few millimeters thick, is the most important condition of its evolution. It is the starting point for efforts to understand what makes us tick -- and for endeavors to find solutions when things go wrong. Our brain builds its version of the universe in the cerebral cortex. The vast majority of what we see doesn't enter the brain through the eye. It is instead based on the impressions, experiences and decisions in our brain.

Markram already completed important preparatory work for the computer modeling of the brain with his Blue Brain Project, an attempt to understand and model the molecular makeup of the mammalian brain. He modeled a tiny part of a rat brain, a so-called neocortical column, at the cell level. To understand what one of these columns does, it's helpful to imagine the cerebral cortex as a giant piano. There are millions of neocortical columns on the surface, and each of them produces a tone, in a manner of speaking. When they are simulated, the columns produce a symphony together. Understanding the design of these neocortical columns is a holy grail of sorts for neuroscientists.

It is important to understand the rules of communication among the nerve cells. The individual cells do not communicate at random, but instead seek specifically targeted communication partners. The axons of nerve cells intersect at millions of different points, where they can form a synapse. This makes communication between individual neurons possible. In a recent article in the journal Proceedings of the National Academy of Sciences, Markram writes that such connections are also developed entirely without external influence. This could indicate a sort of innate knowledge that all people have in common. Markram refers to it as the "Lego blocks" of the brain, noting that each person assembles his own world on the basis of this innate knowledge. _Spiegel
The object of study for the Human Brain Project may be the most complex dynamic system in the universe. The attempt would be impossible without the most sophisticated computing hardware and software available. And one must have more than a mere fistful of Euros to acquire such advanced goodies.
Modeling all of this in a computer is extremely complex. Markram's current model encompasses tens of thousands of neurons. But this isn't nearly enough to come within striking range of the secret of our brain. To do that, scientists will have to assemble countless other partial models, which are to be combined to create a functioning total simulation by 2023.

The supercomputers at the Jülich Research Center near Cologne are expected to play an important role in this process. The brain simulation will require an enormous volume of data, or what scientist Markram calls a "tsunami of data." One of the challenges for scientists working under Thomas Lippert, head of the Jülich Supercomputing Centre, is to figure out how to make the computer process only a certain part of the data at a given time, but without completely losing sight of the rest. They also have to develop an imaging method, such as large, three-dimensional holograms, to depict the massive amounts of data.

All it takes is a look at the work of Jülich neuroscientist Katrin Amunts to understand the sheer volume of information at hand. The team she heads is compiling a detailed atlas of the human brain. To do so, they cut a brain into 8,000 slices and digitized them with a high-performance scanner. The brain model generated in this way consists of cuboids, each measuring 10 by 10 by 20 micrometers, and the size of the data set is three terabytes. Brain atlases with higher resolutions, says Amunts, would probably consist of more than 700 terabytes _Spiegel
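The quoted data volumes are easy to sanity-check with a little arithmetic (assuming a brain volume of roughly 1.2 litres; the bytes-per-voxel figure is inferred, not stated in the article):

brain_um3 = 1.2e15                          # ~1.2 litres expressed in cubic micrometres
voxel_um3 = 10 * 10 * 20                    # the cuboids quoted above
voxels = brain_um3 / voxel_um3              # ~6e11 voxels
bytes_per_voxel = 3e12 / voxels             # implied by the quoted 3 TB data set: ~5 bytes
print(f"voxels: {voxels:.1e}, implied bytes per voxel: {bytes_per_voxel:.1f}")

finer_voxels = brain_um3 / (2 * 2 * 2)      # an isotropic 2-micrometre atlas: ~250x more voxels
print(f"2 um atlas at the same bytes per voxel: roughly {finer_voxels * bytes_per_voxel / 1e12:.0f} TB")

That back-of-envelope result lands in the neighbourhood of the "more than 700 terabytes" figure quoted above -- and it is still only a static atlas, not a dynamic model.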
The answer to the question posed above is: No, this goal cannot be met within a time frame of ten years. Because the challenge is not merely quantitative -- a matter of compiling the precise assembly of terabytes to create a brain atlas. The goal is to create a dynamic, interactive model of incredible plasticity -- a model which changes itself moment to moment. The "700 terabyte" requirement mentioned above is just the starting point -- the bare beginning -- in the assembly of such a dynamic and ever-changing model.

But the problem is even harder -- much, much harder. The quantitative complexity -- even in dynamic flow -- is nothing when compared to the qualitative complexity, which is nowhere near to being solved by Markram's team.

The project as described in brief above is an excellent starting point. Much can be learned from such an approach. But starting points do not necessarily point directly toward the end that one seeks. Rather, they point somewhere "out there." It is for the questers to continuously adjust their headings -- and often they are forced to adjust their goals.

Good luck to Henry and his team -- with the funding and with the ongoing project. It is an ambitious goal worthy of any scientist.


05 May 2011

Artificial Intelligence: Experts Admit that AI Stinks

Marvin Minsky, Patrick Winston, Noam Chomsky, and other thinkers, researchers, engineers, and scientists think that artificial intelligence has gone wrong. And the way things have been going, it is not likely that AI researchers will get on "the right track" for some time -- since most of them cannot seem to understand the problem.

Several up-and-coming AI workers (plus Ray Kurzweil) have predicted that human-level AI will be achieved within the next 10 to 20 years. Where is the progress which they can point to, in order to justify these claims? Within their own imaginations, apparently. According to some of the thinkers and AI pioneers quoted at the link above, the problem of AI is not being approached correctly. One of the most piercing criticisms of the field seems to be that it is focusing on tactics rather than strategy.
...clearly, the AI problem is nowhere near being solved. Why? For the most part, the answer is simple: no one is really trying to solve it. This may come as a surprise to people outside the field. What have all those AI researchers been doing all these years? The reality is that they have largely given up on the grand ambitions of AI and are instead working on increasingly specialized subproblems: not just machine learning or natural-language understanding, say, but issues within those areas, like classifying objects or parsing sentences. _TechnologyReview
This is clearly true. What is tragic is that a large proportion of the current crop of top-level AI researchers do not seem to understand the difference between strategy and tactics, in the context of AI. Strategy in AI calls for much deeper level thinking than tactics, which may in fact be beyond the capacity of most researchers -- and even beyond the range of philosophers such as Dan Dennett, who has relatively low expectations for human-level AI in the foreseeable future.

As brilliant as earlier AI researchers such as John McCarthy, Marvin Minsky, and Seymour Papert may be (and may have once been) in their heyday, the extent of the problem of human-level AI was too poorly defined for anyone to grasp the challenge.

Modern researchers who make boastful claims for the likelihood of achieving human-level AI in 10-20 years do not have that excuse. Clearly neither Kurzweil nor the other AI hopefuls truly have a grasp of the problem as a whole. And that failure will be their undoing.

What would it take to succeed at human-level AI? A closely-knit, multidisciplinary team of thinkers willing to try outlandish and counter-intuitive approaches to the problem -- well out of the mainstream. To achieve human-type machine intelligence, high-level expertise in one's field is but the barest beginning, hardly a single step in the journey of a thousand miles. One must also have a child-like approach to the world, be both brilliant and incredibly humble in the face of the ineffable, and be able to integrate complex ideas from both within and without one's own field of expertise into a new, working holism.

Of course it sounds like psycho-babble, but when approaching something far too complex for words, such failures of communication are inevitable. Al Fin cognitive engineers recommend that the concepts of "embodied cognition" and "preverbal metaphor" be kept foremost in the minds of any hopeful AI developers.

For everyone else, don't get your hopes up too high. Higher education in the advanced world -- particularly in North America -- is moving into serious difficulty, which means that research facilities and funding are likely to be cut back, perhaps severely. The economies of the advanced nations, and the US in particular, are being badly mismanaged, which means that private sector efforts will also likely be cut back. The problem may require the emergence of an individual with the brilliance of Einstein, the persistence of Edison, and the wide-ranging coherent creativity of a da Vinci.

In other words, people need to become smarter -- at least some people. Then they can learn to evolve very smart machines. And perhaps to interface with networks of these very smart machines. Then, possibly, using this symbiotic intelligence to design even smarter people.

Because let's face it: Humans -- especially PC-cultured humans -- will only take us to the Idiocracy, sooner or later. And the Idiocracy will be not only stupid, but downright brutal.


22 April 2011

Double Plus Overhype Ado About Artificial Synapse?

There's a news story replicating on the web right now about a "Functioning Synapse Created Using Carbon Nanotubes," for instance here and here.

....the circuit has not actually been constructed, so the "apparatus" photo there is kind of silly. It just gives the false impression that a synapse model was actually built physically with analog components.

...[ed: all that we have is] an electrical circuit schematic that in turn depends on certain SPICE models of carbon nanotube FETs (which have apparently been available since 2006). So in other words, this circuit is a particular model of a synapse being simulated with a simple circuit. _Science20

Samuel Kenyon points out in the Science20 article linked above that the "artificial synapse" is only a simulated circuit using the SPICE electronic simulation program. But the overhype is doubly overdone, because even if the researchers had actually built a real, physical circuit that functioned as an "artificial synapse", it would still not put them any closer to actually building an "artificial brain."
PDF Source (via Science20)
This overhyped excitement is reminiscent of IBM scientist Dharmendra Modha's claim that he had built a computer that was the equivalent of "a cat's brain." Initially, most science blogs (except Al Fin) accepted Modha's claim at face value. Then when Henry Markram came out publicly to refute the claim, Modha backed off and clarified, and almost everyone agreed in the end that it had been much ado about nothing.

It is the same thing here, where science and tech blogs initially rush to accept exaggerated claims in press releases. Then, little by little, sceptics step up and insist upon clarifications and qualifications of claims, until the claims are downgraded to the point that eventually no one can remember what the fuss was about.

In the case of "the artificial synapse", it is important to understand why this real world device is not even a synapse, much less a possible ingredient for an artificial brain.

The USC and Stanford researchers have designed a computer model of "an artificial synapse," not an actual artificial synapse. But even if it were a real synapse, would engineers be able to use it to assemble an artificial brain? No. And the reason why one thing does not naturally lead to the next is crucial to an understanding of how real brains work -- real brains being the only working proof of concept of intelligence in the known universe.

Artificial intelligence enthusiasts will rush to say that working brains do not necessarily have to work just like the bio-brains we know now. But then, what is the point of emulating a tiny component of a bio-brain in the first place, if you cannot use it to build a functioning brain, as we understand it? In other words, if your objective is to build a new class of brain, why start with a poor imitation of a low level component of a bio-brain? Why not start with something "better" from the get-go? [By "better", I mean faster, more versatile, etc etc]

Here is the reason: Because artificial intelligence researchers do not have a clue as to how to build an intelligent brain. And so they are practising a subtle form of cargo cult science.

It's okay. We all understand that rents and utilities must be paid, the price of gasoline is high, everything costs money. Academics must publish or perish, and getting research grants to build "artificial synapses" does sound kind of sexy. Anything to keep the lights on, right?

But all the same, it is important to understand that brains are not the plural of "synapse." It is time to stop pretending that one has made progress toward AI, when nothing of the sort has happened.

More: It is important to understand that the bulk of the exaggeration comes from press releases and media coverage. Here is the actual conclusion from the research study referred to:
A carbon nanotube synapse typical of cortical synapses has been designed and simulated using SPICE. While the simulations were successful, the design of a single typical synapse is only a small step along the path to a synthetic cortex. The variations in synapses, including inhibitory synapses, will be the focus of future research. Predicting the interconnection capabilities of nanotube circuits is also important in understanding the future prospects for a synthetic cortex. _PDF (eve.usc.edu)
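For a sense of what "a single typical synapse" amounts to in software, here is a deliberately crude sketch: a generic exponentially decaying conductance driving a leaky integrate-and-fire cell, with assumed parameters. It is not the nanotube SPICE model from the paper, and it is about as far from a synthetic cortex as the paper's own circuit:

import numpy as np

dt, T = 1e-4, 0.2                            # 0.1 ms steps, 200 ms of simulated time
tau_syn = 5e-3                               # synaptic conductance decay time (s)
E_syn, E_rest, V_th = 0.0, -70e-3, -54e-3    # reversal, rest and threshold potentials (V)
g_peak = 20e-9                               # conductance jump per presynaptic spike (S), assumed
C_m, g_leak = 200e-12, 10e-9                 # membrane capacitance (F) and leak conductance (S)

spike_times = np.arange(0.01, T, 0.02)       # presynaptic spikes every 20 ms
g, V = 0.0, E_rest
for step in range(int(T / dt)):
    t = step * dt
    if np.any(np.abs(spike_times - t) < dt / 2):
        g += g_peak                          # presynaptic spike arrives
    g -= dt * g / tau_syn                    # conductance decays back toward zero
    I = g * (E_syn - V) + g_leak * (E_rest - V)
    V += dt * I / C_m
    if V >= V_th:                            # postsynaptic cell fires and resets
        print(f"postsynaptic spike at {t * 1e3:.1f} ms")
        V = E_rest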


Unfortunately, as computer modelers try to model the events in the brain more realistically at cellular and molecular levels, computing power and computing time demands explode out of control very rapidly. Moreover, the researchers above do not seem to understand the key facts of brain function upon which conscious intelligence is balanced: time-dependent high level cross-brain synchronisation (via evolved white matter pathways) of evolved multiple modular (grey matter) brain centers from brain stem to neocortex, dancing alongside sensory input, jostled by memory, under the changing lights of emotion, and swept up in hormonal tides and chaotic flows of molecules...

More complex than one imagines? More complex than one can possibly imagine. The job is simply too hard for intelligent design. Only evolution will do. We need to get better at intelligently designing evolution. ;-)


03 March 2011

Playthings of the Gods

Technology Review

Leon Chua -- father of Amy Chua -- conceived the memristor in a paper back in 1971. The memristor is a resistor with a memory of an earlier state. It behaves differently, depending upon its history. Because inter-neuronal synapses typically also behave differently, depending upon their histories, the memristor is often seen as a building-block for creating more brain-like computers. Researchers are already simulating what a memristor-based computing system might look like:
Memristors are resistors that "remember" the state they were in, which changes according to the current passing through them. They are expected to revolutionise the design and capabilities of electronic circuits and may even make possible brain-like architectures in silicon, since neurons behave like memristors.

Today, we see one of the first revolutionary circuits thanks to Yuriy Pershin at the University of South Carolina and Massimiliano Di Ventra at the University of California, San Diego, two pioneers in this field. Their design is a memristor processor that solves mazes and it is remarkably simple.

...Pershin and Di Ventra begin by creating a kind of a universal maze in the form of a grid of memristors, in other words an array in which each node is connected to another by a memristor and a switch. This can be made to represent any regular maze by switching off certain connections within the array.

Solving this maze is then simple. Simply connect a voltage across the start and finish of the maze and wait. "The current flows only along those memristors that connect the entrance and exit points," say Pershin and Di Ventra. This changes the state of those memristors allowing them to be easily identified. The chain of these memristors is then the solution.

That's potentially much quicker than other maze solving strategies which effectively work in series. "The maze is solved in a massively parallel way, since all memristors in the network participate simultaneously in the calculation," they say. _TechnologyReview_via_NextBigFuture
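The principle is simple enough to reproduce with ordinary numerics: treat each open corridor as a unit conductance, apply a voltage across entrance and exit, solve Kirchhoff's equations, and read off the edges that carry current. The sketch below is a plain resistor-network version of the idea, not a memristor simulation -- in the real device, the current would also change the memristors' states, making the path easy to read out afterwards:

import numpy as np

# Maze as a small grid: 1 = open cell, 0 = wall.
maze = np.array([
    [1, 1, 0, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [1, 0, 0, 1],
])
rows, cols = maze.shape
idx = {(r, c): i for i, (r, c) in enumerate(
    (r, c) for r in range(rows) for c in range(cols) if maze[r, c])}
n = len(idx)

# Conductance (Laplacian) matrix: every open neighbour pair is one unit "resistor".
G = np.zeros((n, n))
edges = []
for (r, c), i in idx.items():
    for dr, dc in ((0, 1), (1, 0)):
        if (r + dr, c + dc) in idx:
            j = idx[(r + dr, c + dc)]
            G[i, i] += 1; G[j, j] += 1
            G[i, j] -= 1; G[j, i] -= 1
            edges.append(((r, c), (r + dr, c + dc)))

# Apply 1 V across entrance (top-left) and exit (bottom-right), solve for node voltages.
fixed = {idx[(0, 0)]: 1.0, idx[(rows - 1, cols - 1)]: 0.0}
free = [i for i in range(n) if i not in fixed]
v = np.zeros(n)
v[list(fixed)] = list(fixed.values())
v[free] = np.linalg.solve(G[np.ix_(free, free)],
                          -G[np.ix_(free, list(fixed))] @ np.array(list(fixed.values())))

# Edges carrying current form the solution path; dead ends carry none.
path = [e for e in edges if abs(v[idx[e[0]]] - v[idx[e[1]]]) > 1e-9]
print(path)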

One of the problems with asking a physicist, engineer, or computer scientist to devise a brain-like computer is that persons trained strictly within these disciplines are not likely to know which elements of brain functioning should be "simplified" or "abstracted", and which elements should be closely copied.

The pursuit of artificial intelligence is rife with failed promises and predictions, over the past 60+ years. If we are not to go at least another 60 years without meaningful success, we will need researchers who are cross-trained in multiple disciplines relating to the problem.

The research described in the Technology Review article above was based upon the simulation of an array of memristors -- not on an actual memristor circuit. But even with real memristors, the circuit is simplistic in the extreme. The idea that one could assemble large numbers of simplified "synapses" into something that might behave like a biological brain -- in any meaningful way -- appears silly to anyone with even a basic understanding of how the brain works. And yet such silliness represents one of many parallel hopes for a so-far failed endeavour: artificial intelligence.

The synapse is not the basic unit of human intelligence or consciousness. The basic unit of human consciousness is something far less substantial and more ephemeral. It exists at multiple logical levels above the synaptic level. It is dependent upon the simultaneous function of trillions of synapses of distinctly multiple types, involving efferent, afferent, and re-entrant activity at multiple logical levels.

What the researchers describe in the Technology Review article is the simulation of a toy. Not the toy itself -- a simulation of the toy. The human brain is not a toy. Unless, of course, you are a god.


16 February 2011

Human vs. Computer: How About a Real Challenge?

Update 18Feb2011: AI researcher Ben Goertzel presents some thoughts on Watson's ascendancy at H+Magazine online. H/T Brian Wang

Much has been made of the recently televised Jeopardy victory by IBM's Watson, and of earlier victories and impressive performances by specialised chess computers such as IBM's Deep Blue. Even in poker, computers are increasingly seen as threats to human dominance, thanks to clever human programmers. In the game world, Go is most often cited as a game where computers have not come close to expert level.

What do these massive, power-gobbling, ultra-pampered, spoon-fed, one-trick-pony game-playing supercomputers tell us about the human vs. computer rivalry? Realistically, the rivalry is still human vs. human, with one team of humans utilising ultra-fast electronic devices to store, "analyse", and retrieve massive quantities of data in order to gang up on a single human opponent.

Watson is certainly faster to the button than its human opponents, but we already knew that electrons are faster than nerves. Once the "natural language processing" trick was mastered, Watson could readily lock out its opponents from responding -- even when they both clearly knew the answer.

But Watson could not drive itself home after the game, could not flush away its excretions (heat) by itself, could not feed itself, etc. In the end, Watson is a very expensive gimmick which served as a showcase for various specialised programming problems.

Perhaps if Watson could master all the games mentioned above, at once, and defeat experts in all of the games, it would be impressive as a game-player. But not really. Look at all the money, mass, and energy tied up in the junkpile called Watson. How would IBM make it more capable of playing multiple games? By throwing more mass, money, and energy into the already-huge junkpile. Not very clever, really, compared with the three pound human brain and all the things it can do -- including designing, building, repairing, and programming "smart" computers.

It all points to the fact that the state of artificial intelligence is pretty pathetic, all in all. Despite over 60 years of promises to create human-level intelligence "within 10 years", AI still stinks badly, and promises more of the same into the foreseeable future.

The Jeopardy challenge -- like all similar challenges -- was a huge and expensive publicity hullabaloo. It is quite likely to damage the Jeopardy brand in the long run. It certainly puts forth an entirely false idea about the modern capability of computers, vis-a-vis humans, to reason and make decisions.

What would be a real challenge for Watson? How about a spontaneous, unplanned race over an extensive, lengthy, novel, 3-D obstacle course with ladders, walls, tunnels, slides, sand, and foot-deep water traps -- against a 5 year old human child?

Let's face it: Modern life requires humans to overtly or covertly (via proxies) partner with computers to achieve optimum performance in large areas of our lives. But what will it take to get computers to the point where they are consciously setting the agenda for humans, rather than the other way around?

It will take a "substrate of thought" entirely different from the high-speed digital architectures currently used to such great -- if ultra-specialised -- effect. Worse, modern AI researchers for the most part have no idea what form such a substrate would take. Certainly they do not understand the substrate of the only existing proof of concept of conscious intelligence -- the human brain.

Too much like robots themselves, too many AI researchers unwittingly plod along artificial pathways leading to nowhere but diminutive local optima. Watson is only one illustration of the kludgy phenomenon.

What will it take, and how long will it take to discover it? There are limits to pure reason and speculation. Experimentation is necessary. Hands must be dirtied and hypotheses must be generated and tested. For the luggiest of lugheads out there, we need much better challenges than chess, Jeopardy -- or even Go -- to spur the effort required.


24 November 2010

Memristor Brains? No, But Likely a Step in the Right Direction

IEEE

Brian Wang presents a fascinating glimpse at the next stage of attempted machine intelligence -- driven by DARPA grants. The approach will likely involve the use of the Chua memristor -- or similar nano-scaled electronic devices. DARPA has specified its requirements for its new family of scalable and adaptive electronic thinking systems, and it appears that the memristor family of devices may be the best approach for government contractors wishing to collect their fees.
Researchers have suspected for decades that real artificial intelligence can't be done on traditional hardware, with its rigid adherence to Boolean logic and vast separation between memory and processing. But that knowledge was of little use until about two years ago, when HP built a new class of electronic device called a memristor. Before the memristor, it would have been impossible to create something with the form factor of a brain, the low power requirements, and the instantaneous internal communications. Turns out that those three things are key to making anything that resembles the brain and thus can be trained and coaxed to behave like a brain. In this case, form is function, or more accurately, function is hopeless without form.

Basically, memristors are small enough, cheap enough, and efficient enough to fill the bill. Perhaps most important, they have key characteristics that resemble those of synapses. That's why they will be a crucial enabler of an artificial intelligence worthy of the term.

The entity bankrolling the research that will yield this new artificial intelligence is the U.S. Defense Advanced Research Projects Agency (DARPA). When work on the brain-inspired microprocessor is complete, MoNETA's first starring role will likely be in the U.S. military, standing in for irreplaceable humans in scout vehicles searching for roadside bombs or navigating hostile terrain. But we don't expect it to spend much time confined to a niche. Within five years, powerful, brainlike systems will run on cheap and widely available hardware. _IEEE
A step in the right direction? Yes. The memristor family of devices will allow nanoscale fabrication of devices which function very much like an inter-neuronal synapse. Creating massively parallel circuits with such devices will allow designers to produce some fascinating -- and possibly quite functional -- computing devices.
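
To make the synapse analogy concrete, here is a minimal toy of the idea -- not the DARPA/MoNETA design, and not any real device's datasheet. It uses an HP-style linear-drift model with invented parameters: an internal state variable drifts with the charge passed through the device, so its resistance depends on its history, the way a synaptic weight depends on prior activity.

```python
# Toy memristive "synapse" (illustrative parameters, not a real device model).
class MemristorSynapse:
    def __init__(self, r_on=100.0, r_off=16000.0, w=0.5, k=2e8):
        self.r_on, self.r_off = r_on, r_off   # bounding resistances (ohms)
        self.w = w                            # internal state variable, 0..1
        self.k = k                            # drift rate (illustrative units)

    def resistance(self):
        # Resistance interpolates between r_on (fully "potentiated") and r_off.
        return self.r_on * self.w + self.r_off * (1.0 - self.w)

    def apply_pulse(self, voltage, dt=1e-6):
        """Pass a voltage pulse; positive pulses lower the resistance
        (potentiation), negative pulses raise it (depression)."""
        i = voltage / self.resistance()
        self.w = min(1.0, max(0.0, self.w + self.k * i * dt))
        return i

syn = MemristorSynapse()
print(f"before: {syn.resistance():.0f} ohms")
for _ in range(20):
    syn.apply_pulse(+1.0)     # repeated positive pulses: "potentiation"
print(f"after potentiation: {syn.resistance():.0f} ohms")
for _ in range(20):
    syn.apply_pulse(-1.0)     # repeated negative pulses: "depression"
print(f"after depression: {syn.resistance():.0f} ohms")
```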

But will these devices work anything like the human (or animal) brain? Not anytime soon. Because the designers seem focused on one small, rudimentary aspect of the human brain -- the neuronal synapse -- it will likely take a long time, and many failures, before they achieve the "bigger picture" view of how human brains actually work.

But the development of electronic devices which imitate the synapse more accurately will place the pursuit of the machine brain on an entirely different level, above and away from the diminutive local optima which previous AI researchers have been struggling to achieve.

What will it take for memristor family devices to approach human brain level of function? First, it will require the knowledge that the brain has many distinct types of neurons, which form many distinct types of synapses. Next, it will require the awareness that synapses are just the meager beginning of the spark of intelligence. It is actually a vast ensemble of synaptic actions occurring in precise ways at precise times, and affecting precise modular systems of processors, which makes animal-style consciousness and intelligence possible.

Then it will require the insight that intelligence is "embodied" to start the research down a long, difficult, but final road toward the creation of a rudimentary working machine intelligence.

If you are thinking that there are other approaches to intelligence than the animal or human approach, Al Fin cognitive scientists respond, "of course." But where are these alternative approaches? Where are their proofs of concept, their working prototypes? No closer today than in the late 1940s and 1950s, when absolutely brilliant computer scientists first believed they were within easy reach.

Human-level machine intelligence would create a radical revolution in human existence at many levels, in many ways. But such a development does not appear to be very close. Certainly, humans are not ready for it. But a lot of things happen which humans are not prepared to experience. Better start getting ready now.

More: Brain Inspired Computing by Versace (via Brian Wang)

Moneta Neuromorphics Laboratory (via Brian Wang)


27 August 2010

The Limits of Intelligence; The Farce of Artificial Intelligence

The only working model of human-level intelligence, as far as we know, is the human brain. We have no evidence of any higher form of intelligence anywhere in the universe. Yet scientists from widely varied areas of cognitive studies continue to make unlikely claims that they will achieve reverse-engineering of the human brain within 10 or 20 years. The problem with humans attempting to use machines to emulate intelligence is that humans do not understand intelligence very well at all.

Recent progress in "memristor synapses" has given reverse-engineers of the brain hope that they may finally be developing a hardware substrate better capable of emulating brain function. But even if that is true, how close do these developments place us to the goal of reverse-engineering a functioning human brain? Bluntly put, not close at all.

Scientists are slowly gaining an appreciation for how human memories are encoded -- within and by the hippocampus. For example, new memory formation requires the hippocampus to be able to produce new nerve cells of various types from stem cells. Some neuroscientists apparently feel that this understanding will help them to discover new "drug targets" for treating memory dysfunction, such as dementia. We should hope so, because dementia and brain atrophy of one form or another await virtually all of us -- if we live long enough.

But successful treatment of dementia does not help us to understand how our intelligence works -- except insofar as it provides tools for further research into the intricate mechanisms of human learning, memory, and creative imagination.

The encoding and decoding of human memories (more) has virtually nothing in common with what is generally thought of as "computation." Consequently the substrate of ordinary computation -- such as digital computers -- should not be seen as likely substrates for reverse engineering a human brain.

Human intelligence evolved over millions of years by natural selection, in the course of solving a variety of problems of survival. Human brains are not well evolved to solve the most pressing problems currently facing human societies. The average IQ for human populations is just below 90 points, and on a downward, dysgenic trajectory. Most humans are simply not intelligent enough to solve complex problems -- except those which the human brain evolved to solve. Most of the "big" problems of today do not fall within that category.

Even most humans with IQs in the 130 to 180+ range are generally not well suited to understand the basis of their own intelligence on any logical level -- much less on most or all such levels. If the potential to understand our own intelligence rests within the developing embryo and infant child, its critical window of development inevitably passes without the proper training. And so it goes, almost certainly, for a significant number of potential human abilities -- lost out of ignorance. But I digress.

Artificial intelligence research suffers from the lack of individuals with a special combination of trained aptitudes. Brilliant researchers abound in the disparate disciplines of computer science, neuroscience, cognitive psychology, linguistics, anthropology, philosophy, electrical engineering, and a wide array of creative, inventive, and speculative arts and sciences. But workers with the right combinations of skills and attitudes are extremely rare. The potential accomplishments of the uni-disciplinary approach to higher education evaporate very quickly when it comes to solving the extremely hard problems with which we are faced.

Solving the problem will require a different way of thinking about the problem. But that is a virtual impossibility for most people -- no matter how "intelligent."

Contemplate what may be involved in the efficient teaching and learning of "lateral thinking." The most rewarding known examples of lateral thinking occurred by accident. But de Bono claims to be able to teach the skill. It is virtually certain that such teaching is more effective if initiated during childhood -- and more effective in some children than in others.

Modern human knowledge is "full of holes", like a Sierpinski gasket. No matter how conscientiously we set about to fill in the holes, we only create more holes. Humans need to learn to relish this creation of holes, because the more holes we create, the more we have filled in. But the development of such a relishing of the fractal world of knowledge must likely begin in childhood.
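
To make the gasket analogy concrete -- a small aside using the standard properties of the Sierpinski triangle, not anything from the original argument -- after $n$ refinement steps:

$$
\text{holes} \;=\; \sum_{k=1}^{n} 3^{\,k-1} \;=\; \frac{3^{n}-1}{2},
\qquad
\text{filled area} \;=\; \left(\tfrac{3}{4}\right)^{\!n} A_{0},
\qquad
\dim_{H} \;=\; \frac{\log 3}{\log 2} \;\approx\; 1.585 .
$$

The count of holes grows without bound even as the figure is refined ever more finely: each round of work multiplies the holes rather than eliminating them.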

Which brings us back to the creation and upbringing of children, their training and the societal milieu in which they are to be raised. We are botching the job rather badly at this time.

More on these topics later.


22 August 2010

Beyond Kurzweil and Myers: A Useful Brain Emulation Viewpoint

George Dvorsky provides a measured and reasonable approach to the question of machines emulating the human brain in this well-written article on "making brains". While quite short and lightly documented, Dvorsky's piece provides a useful outline of the problem, and a fairly sound description of a good approach for attacking it.
While I believe that reverse engineering the human brain is the right approach, I admit that it's not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don't exist yet. And importantly, success won't come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.

But we have to start somewhere, and we have to start with a plan...The idea of reverse engineering the human brain makes sense to me. Unlike the rules-based approach, WBE works off a tried-and-true working model; we're not having to re-invent the wheel. Natural selection, through excruciatingly tedious trial-and-error, was able to create the human brain—and all without a preconceived design. There's no reason to believe that we can't figure out how this was done; if the brain could come about through autonomous processes, then it can most certainly come about through the diligent work of intelligent researchers.

...A number of critics point out that we'll never emulate a human brain on account of the chaos and complexity inherent in such a system. On this point I'll disagree. As Bostrom and Sandberg have pointed out, we will not need to understand the whole system in order to emulate it. What's required is a functional understanding of all necessary low-level information about the brain and knowledge of the local update rules that change brain states from moment to moment. What is meant by low-level at this point is an open question, but it likely won't involve a molecule-by-molecule understanding of cognition. _SentientDevelopments
Dvorsky goes on to describe the type of multi-disciplinary approach he has in mind, and bravely makes a prediction as to how long the effort will likely take: 50 to 75 years. This is a much longer timespan than Kurzweil and most AI researchers are giving, but I suspect it is closer to a realistic mark.

There are a couple of small criticisms I have to make. Dvorsky expects a workable brain emulation to be built within a "digital substrate":
.... if you believe that there's something inherently physical about intelligence that can't be translated into the digital realm, you've got your work cut out for you to explain what that is exactly—keeping in mind that any informational process is computational, including those brought about by chemical reactions. Moreover, intelligence, which is what we're after here, is something that's intrinsically non-physical to begin with.
Here, it seems that Dvorsky has it backwards. It is the persons who believe that intelligence can be made to work in a physical substrate different from the brain who bear the burden of proof to show that intelligence can be "transferred" to the "digital realm." We have only one proof of concept of intelligence up until now: a bloody ball of fat resting on a stalk rising between the shoulders of Homo sapiens.

In another place Dvorsky asserts:
... the brain contains masterful arrays of redundancy; it's not as complicated as we currently think.
In truth, the brain is far more complicated than we can currently imagine. The question should be: is the relevant functionality within the brain/mind which generates consciousness and intelligence perhaps "not as complicated as we currently think"? Al Fin cognitive theorists believe that such a thing is possible, as long as we take care not to stumble amongst the numerous overlapping logical levels which present themselves when attempting to deal with this problem.

Dvorsky is quite right that the brain emulation problem is going to require extensive multi-disciplinary effort. We will need multi-disciplinary teams, as well as team members who themselves have multi-disciplinary training.

The great online debate between Ray Kurzweil and PZ Myers continues unabated, but it has very little to do with the eventual creation of a machine intelligence modeled after the brain.

If I had to choose one or the other to lead an effort to create an artificial brain, I would choose Kurzweil, hands down. Myers is an academic on the "intellectual" side -- an intellectual being someone who is rarely challenged by reality when he makes a mistake. Kurzweil's inventions and products have to work. That puts Kurzweil firmly in the reality-based camp, regardless of how many in the media and academia call him a kook.


20 August 2010

Neither Ray Kurzweil nor PZ Myers Understand the Brain

Irrepressible bio-blogger PZ Myers has attacked futurist inventor and author Ray Kurzweil on his blog, accusing Mr. Kurzweil of failing to understand the human brain. But it seems that Mr. Myers was unwittingly attacking second-hand accounts of a talk given by Kurzweil, rather than responding to Mr. Kurzweil's actual claims. Kurzweil takes Myers to the woodshed for that mistake.

Other prominent tech- and mind-bloggers such as Brian Wang and George Dvorsky have reacted to this tiff, appropriately pointing readers to Mr. Kurzweil's actual words on the topic.

Lost in all the ballyhoo is the obvious fact that in reality, neither Kurzweil nor Myers understand very much about the brain. But is that clear fact of mutual brain ignorance relevant to the underlying issue -- Kurzweil's claim that science will be able to "reverse-engineer" the human brain within 20 years? In other words, Ray Kurzweil expects humans to build a brain-functional machine in the next 2 decades based largely upon concepts learned from studying how brains/minds think.

Clearly Kurzweil is not claiming that he will be able to understand human brains down to the most intricate detail, nor is he claiming that his new machine brain will emulate the brain down to its cell signaling proteins, receptors, gene expression, and organelles. Myers seems to become a bit bogged down in the details of his own objections to his misconceptions of what Kurzweil is claiming, and loses the thread of his argument -- which can be summed up by Myers' claim that Kurzweil is a "kook."

But Kurzweil's amazing body of thought and invention testifies to the fact that Kurzweil is probably no more a kook than any other genius inventor/visionary. Calling someone a "kook" is apparently considered clever in the intellectual circles in which Mr. Myers and his blog's commenters travel, but in the thinking world such accusations provide too little information to be of much use.

Clearly if Mr. Kurzweil understood the brain, he could simply sit down and design an artificial brain based upon the principles which he already understands. The fact that Kurzweil places the development of such a human-level thinking machine 2 decades in the future, suggests that Kurzweil himself is not attempting to disguise his lack of comprehensive understanding of the brain.

I should point out that it is neuroscientist Henry Markram who is attempting to reverse-engineer a human brain to incredibly exquisite levels of biological detail -- in an attempt to study the function and potential pathologies of the brain. Kurzweil is not taking that path of reverse-engineering, but is rather attempting to extract principles of higher level mental functioning from the study of the brain. These "higher level mental functions" may appear to be quite low-level to a lay-person, but to a neurobiologist they will seem quite high-level indeed.

Life scientists do not understand life, really. Take this interesting Spiegel Online interview with Craig Venter on the genome. You would think that if anyone would understand the genome, it would be Craig Venter. But no, he admits that he does not -- not nearly to the extent that he intends to, at least. That is why he goes to work every day, because he understands just enough to want to understand more.

So while PZ Myers apparently fell off the wavelength upon which Ray Kurzweil was transmitting, that is no reason why the rest of us cannot follow Kurzweil's progress in his quest -- as food for thought.

Full Disclosure: Al Fin has in the past criticised Ray Kurzweil's approach to artificial intelligence as being insufficiently nuanced -- based upon Kurzweil's own writings. But a man with Mr. Kurzweil's track record of accomplishments is not one who should be written off. Such a person has been wrong innumerable times in his past, and has come back to correct his mistakes and move far beyond them. Every person of high achievement must go through such a process of being wrong and learning from it. It is one of the disgraces of modern education, culture, and child-raising that "being wrong" or "failing" at something, is considered to be an object of shame or disgrace. Far from being disgraceful, it is a necessary part of living and learning.

It is the dogmatist who is unwilling or unable to learn from his mistakes who should be avoided. The person who is unwilling to put in the necessary hard work to correct his own faulty assumptions and innate prejudices, is the person who will achieve little in the end. Except, perhaps, for calling everyone who disagrees with him a kook.


27 April 2010

Brains Like Ours?

Terrence Sejnowski is a Princeton trained physicist who found his way into neurobiology via a Harvard postdoc.  He is currently the head of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies in La Jolla.  Sejnowski suggests that humans may begin to create "brains like ours" sooner than most people think.
Last November, IBM researcher Dharmendra Modha announced at a supercomputing conference that his team had written a program that simulated a cat brain. This news took many by surprise, since he had leapfrogged over the mouse brain and beaten other groups to this milestone. For this work, Modha won the prestigious ACM Gordon Bell prize, which is awarded to recognize outstanding achievement in high-performance computing applications.

However, his audacious claim was challenged by Henry Markram, a neuroscientist at the Ecole Polytechnique Fédérale de Lausanne and the leader of the Blue Brain project, who announced in 2009: "It is not impossible to build a human brain and we can do it in 10 years." In an open letter to IBM Chief Technical Officer Bernard Meyerson, Markram accused Modha of “mass deception” and called his paper a “hoax” and a “scam.”

...Unfortunately, the large-scale simulations from both groups at present resemble sleep rhythms or epilepsy far more closely than they resemble cat behavior, since neither has sensory inputs or motor outputs. They are also missing essential subcortical structures, such as the cerebellum that organizes movements, the amygdala that creates emotional states and the spinal cord that runs the musculature. Nonetheless, from Modha’s model we are learning how to program large-scale parallel architectures to perform simulations that scale up to the large numbers of neurons and synapses in real brains. From Markram’s models, we are learning how to integrate many levels of detail into these models. In his paper, Modha predicts that the largest supercomputer will be able to simulate the basic elements of a human brain in real time by 2019, so apparently he and Markram agree on this date; however, at best these simulations will resemble a baby brain, or perhaps a psychotic one.... _SciAm
And from there, Sejnowski unfortunately veers off to briefly discuss "intelligent communications systems", then quickly ends his article. In other words, Sejnowski doesn't actually tell us anything about when we might expect to create "brains like ours."

Ray Kurzweil predicts human level computing by 2029, but Mitch Kapor is betting that Ray is wrong. Henry Markram's prediction for an artificial human brain by 2019 goes far beyond what the Blue Brain website is willing to predict, or what some of his more sober colleagues at Lausanne are willing to claim. Ben Goertzel and Peter Voss each believe they are on the trail of artificial general intelligence (AGI), and see no reason why they cannot achieve their goal.

Noah Goodman -- a researcher at MIT's Cognitive Science Group -- is quite forthcoming and honest in this interview at Brian Wang's NextBigFuture. Goodman says that "we could achieve human-level AI within 30 or 40 years", but he also admits that it could take longer.

A startling new approach to massively parallel computing comes from Michigan Technological University working with a research team in Japan.
In their work, instead of wiring single molecules/CA cells one-by-one, the researchers directly build a molecular switch assembly where ∼300 molecules continuously exchange information among themselves to generate the solution. This molecular assembly functions similarly to the graph paper of von Neumann, where excess electrons move like colored dots on the surface, driven by the variation of free energy that leads to emergent computing...

...By separating a monolayer from the metal ground with an additional monolayer, the NIMS/MTU team developed a generalized approach to make the assembly sensitive to the encoded problem. The assembly adapts itself automatically for a new problem and redefines the CA rules in a unique way to generate the corresponding solution.

"You could say that we have realized organic monolayers with an IQ" says Bandyopadhyay. "Our monolayer has intelligence."

Furthermore, he points out that this molecular processor heals itself if there is any defect. It achieves this remarkable self-healing property from the self-organizing ability of the molecular monolayer.

"No existing man-made computer has this property, but our brain does: if a neuron dies, another neuron takes over its function" he says.

With such remarkable processors that can replicate natural phenomena at the atomic scale researchers will be able to solve problems that are beyond the power of current computers. Especially ill-defined problems, like the prediction of natural calamities, prediction of diseases, and Artificial Intelligence, will benefit from the huge instantaneous parallelism of these molecular templates.

According to Bandyopadhyay, robots will become much more intelligent and creative than today if his team's molecular computing paradigm is adopted. _Nanowerk
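
The excerpt does not spell out the NIMS/MTU rule set, so as a stand-in here is a generic cellular automaton -- Conway's Game of Life on a grid of roughly 300 cells -- purely to illustrate the style of computation being described, in which every cell updates simultaneously from the states of its neighbours. The grid size, seed, and rule are illustrative only, not the molecular assembly's actual behaviour.

```python
# Generic cellular automaton: all ~300 cells update in parallel from neighbours.
import numpy as np

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(15, 20))   # 15 x 20 = 300 "cells", random start

def step(g):
    # Count the 8 neighbours of every cell at once (toroidal wrap-around).
    n = sum(np.roll(np.roll(g, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Every cell applies the same local rule simultaneously (Game of Life).
    return ((n == 3) | ((g == 1) & (n == 2))).astype(int)

for _ in range(50):
    grid = step(grid)

print(f"live cells after 50 parallel updates: {grid.sum()} of {grid.size}")
```

The point of the toy is only this: the "answer" emerges from many simple units exchanging information locally and updating all at once, which is the property the molecular monolayer work is trying to exploit in hardware.
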
Most intelligent observers of the AI field who have been able to take a step back and view the phenomenon from the perspective of many decades of history are forced to conclude that a new physical substrate -- other than von Neumann architecture supercomputers -- will be necessary before anything close to a human-level AGI can be built.

Whether using memristors, qubits, molecular monolayers, fuzzy-logic-enabled neural nets with genetic algorithmic ability, or physical substrates and architectures not yet envisioned or announced, AGI researchers of the near to intermediate future will eventually make rapid strides toward useful machine intelligence -- once the right architectural substrate is discovered.

In the meantime, cognitive scientists are learning a great deal about how the brain works, and how artificial mechanisms may better emulate brain function. I suspect that both Modha and Markram (along with several other prognosticators including Kurzweil) may have allowed wishful thinking to get the better of them, when making timeline predictions.

In the dramatic history of genetic science, only after the breakthrough of Watson and Crick could molecular biology explode into the present and future. The ongoing history of artificial intelligence is still lacking its "Watson and Crick." Progress can be made, but not the explosive progress that is necessary to approach human level intelligence.
