We Are All Cyborgs Now: Getting to Genuine Convergence
Convergent technology refers to the combination of multiple functions into a single device. Consider a computer pad which might be used for web browsing, as a video camera, as a game player, a communications device, a media player, a book reader, a data archiver, a photo album, and so on. With enough storage, a person could download much of his life onto an inexpensive handheld device for easy reference -- useful for those with memory problems, and for everyone else as well.
But I am referring to a higher order of convergence and interactivity, which approaches the level of cyborg augmented reality. Let's use some recent research from Stanford as a springboard into the concept:
Ian Gotlib's group at Stanford University, California, studies girls aged 10 to 14 years whose mothers suffer from depression. Such girls are thought to be at higher-than-normal risk of developing the condition themselves, in part because they may inherit their mothers' tendency to "amplify" unpleasant information. Although none of the girls has yet experienced a depressive episode, Gotlib has found that their brains already overreact to negative emotional stimuli – a pattern they share with their mothers and other depressed people.

An interesting setup, using fMRI neurofeedback. But the researchers went further, and tested another group of girls with a much simpler setup to see how the two approaches would compare.
Gotlib is studying whether these young subjects can use interactive software and brain-imaging hardware to "rewire" their brains by unlearning this negative bias. In a pilot experiment, eight girls used a neural feedback display to learn how to control activity in a network of interrelated brain regions that have been linked to depression – these include the dorsal anterior cingulate cortex, anterior insula and dorsolateral prefrontal cortex.
The level of activity in this network was measured using a functional MRI scan and displayed to the girls in the form of a thermometer on a computer screen. The girls were shown sad or negative pictures that might ordinarily raise their "temperature", and tried to lower that "temperature" by adopting more sanguine mental states. They were then advised to try to recreate that mindset in their daily lives.
A control group unknowingly watched someone else's scan output instead of their own, so they didn't actually learn how to control their brain activity. _NewScientist
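To make the feedback loop concrete, here is a minimal Python sketch of the "thermometer" display described above. It is only an illustration under stated assumptions: `read_network_activity()` stands in for the real scanner pipeline (here it just returns noise so the loop can run), and the baseline, scaling, and timing values are invented. None of this is Gotlib's actual software.

```python
import random
import time

# Minimal sketch of the neurofeedback "thermometer" loop described above.
# read_network_activity() is a stand-in for the real fMRI pipeline; here
# it returns noise so the loop can run end to end.

BASELINE = 0.0          # network activity calibrated at rest (invented)
SESSION_SECONDS = 10    # shortened session length for the demo

def read_network_activity() -> float:
    """Placeholder for the scanner: returns a noisy activity estimate."""
    return random.gauss(BASELINE, 1.0)

def draw_thermometer(temperature: float) -> None:
    """Crude console rendering of the on-screen thermometer."""
    bar = "#" * int(temperature / 5)
    print(f"[{bar:<20}] {temperature:5.1f}")

def run_feedback_session() -> None:
    start = time.time()
    while time.time() - start < SESSION_SECONDS:
        level = read_network_activity()
        # Map raw activity onto a 0-100 "temperature"; the subject's task
        # is to drive this number down by changing her mental state.
        temperature = max(0.0, min(100.0, 50.0 + 10.0 * (level - BASELINE)))
        draw_thermometer(temperature)
        time.sleep(2.0)  # fMRI feedback updates on the order of seconds

if __name__ == "__main__":
    run_feedback_session()
```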
Another set of girls in the pilot experiment received their training through a simple computer game instead. In this game, a pair of faces appeared on a screen every few seconds: one neutral and one sad, or one neutral and one happy. Then a dot replaced one of the faces, and the "game" was to click on the dot. For the eight girls in the control group, the face replaced by the dot was selected at random, but for the eight girls in the experimental group, the dot always replaced the more positive face in the pair. Over a week of playing this game daily, these girls were in effect being trained to avoid looking at the sad faces.

A simple computer game is much less expensive than a huge fMRI machine, and it would be easier to incorporate into one's pad computer as well. Of course, EEG neurofeedback would serve this purpose as well as fMRI, and EEG could be incorporated into a pad computer. Can you see the convergence beginning to form?
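The dot-probe game, by contrast, is simple enough to sketch in a few lines. The sketch below models only the trial logic as described in the quote; face rendering, timing, and click handling are omitted, and the names and trial count are invented for illustration.

```python
import random

# Sketch of the dot-probe trial schedule described above: two faces appear,
# then a dot replaces one of them. In the training condition the dot always
# replaces the more positive face; in the control condition its placement
# is random.

FACE_PAIRS = [("neutral", "sad"), ("neutral", "happy")]

def next_trial(training: bool) -> tuple[tuple[str, str], str]:
    """Return the face pair for this trial and the face the dot replaces."""
    pair = random.choice(FACE_PAIRS)
    if training:
        # Always probe the more positive face, so attention is repeatedly
        # drawn away from the negative expression.
        target = "happy" if "happy" in pair else "neutral"
    else:
        target = random.choice(pair)  # control: random placement
    return pair, target

# Quick check of the bias: over many trials, the training condition never
# places the dot on a sad face.
trials = [next_trial(training=True) for _ in range(1000)]
assert all(target != "sad" for _, target in trials)
print("training condition never probed a sad face in", len(trials), "trials")
```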
Gotlib himself originally found this concept, called attentional-bias training, so simplistic that he bet Colin MacLeod, a psychologist at the University of Western Australia in Perth who pioneered the technique, that it would not alter psychological symptoms. Gotlib lost his bet.
In his pilot study, both kinds of training significantly reduced stress-related responses – for example, increases in heart rate, blood pressure and cortisol levels – to negative stimuli. These stress responses are a key marker of depression, and they diminished one week after training. The girls in the experimental groups also developed fewer defensive responses to negative faces, such as startled blinking. Control groups showed no such improvement.
...Gotlib is adding more subjects to the training programme and plans to compare their long-term mental health with a parallel cohort of 200 girls, half of whom have depressed mothers, who aren't participating in the study.
He presented his results at the annual meeting of the Society for Research in Psychopathology in Boston in September. _NS
This type of research could easily lead to much broader applications: devices that detect when we begin to fall into a dysfunctional mental feedback loop, and that provide timely stimuli to lead us back toward our predetermined goals. Why might we need such devices? Why not just use willpower and heightened consciousness instead?
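As a purely speculative illustration, here is a sketch of what such a monitor might look like: it watches a running window of some negative-affect signal (EEG band power, heart-rate variability, whatever a wearable might provide) and fires a corrective cue when the signal stays elevated. The window size, threshold, and intervention are all invented for the example.

```python
import random
from collections import deque

# Speculative sketch of a mood monitor: track a running window of a binary
# "negative affect" reading and cue an intervention when negativity
# dominates the window. All parameters are invented for illustration.

WINDOW = 30         # samples in the running window
THRESHOLD = 0.7     # fraction of negative samples that triggers a cue

class MoodMonitor:
    def __init__(self) -> None:
        self.samples: deque[bool] = deque(maxlen=WINDOW)

    def observe(self, negative: bool) -> None:
        self.samples.append(negative)
        full = len(self.samples) == WINDOW
        if full and sum(self.samples) / WINDOW >= THRESHOLD:
            self.intervene()
            self.samples.clear()  # reset after the cue fires

    def intervene(self) -> None:
        # Stand-in for the "timely stimulus": a prompt, a round of the
        # training game, a reminder of the user's stated goals.
        print("sustained negative pattern detected -- cueing intervention")

if __name__ == "__main__":
    monitor = MoodMonitor()
    for _ in range(200):                      # a mostly-negative stream
        monitor.observe(negative=random.random() < 0.8)
```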
The many functions where a human brain is superior to a computer depend upon the way that the brain is wired, and how the different parts of the brain communicate. This is virtually all below the level of consciousness, making us largely subconscious machines -- or zombies -- in many of our most important aspects. But ironically enough, by becoming more cyborg-like, we may be able to become less zombie-like.
Brain-computer interfaces (BCIs) are typically thought of in terms of helping persons either to compensate for neurological deficits such as stroke or paraplegia, or to rehabilitate from neurological damage. BCIs are also beginning to be utilised in the gaming world, to provide more intuitive game playing. We can also expect much more use of BCIs in the educational environment, for enhanced learning.
It is difficult to explain the explosively revolutionary impact of this type of technology, when used as neurofeedback for learning, mood enhancement, creative invention, mental focus, relaxation, and social interaction. You may think that intervention with such hyper-convergent technology would begin sometime after the birth of a child, but you would probably be wrong.
Once humans discover the revolutionary impact of hyper-convergent computing and BCIs on child raising outcomes, for large segments of the population there will be no going back. Why? Because it is almost inevitable that new, previously unknown critical developmental windows will be discovered, for high level skills that are currently developed only by accident. These windows will be discovered because the new BCIs will not simply be passive monitoring and feedback devices, but will also be "mental probes" using various modalities. You never know what is there until you go in and look.
This is very dangerous territory, where angels fear to tread. But then, as we approach cyborg-hood, we may find ourselves further away from angelhood. Heightened awareness, knowledge, and competence tend to lead us to test ourselves to greater extremes. Some of the things we try may go catastrophically wrong. We have to accept that in advance and take appropriate precautions as we proceed.
One of the greatest dangers is the danger of losing something essential, something perceptive and wise which keeps us from making fatal mistakes, and from falling into traps and wasting time on dead end enterprises. We could easily grow so dependent upon our machine alter egos that we lose much of our natural strength and competence.
That is why the great majority of humans will remain as a control group -- at least at first. It will be difficult to keep persons from adopting a technology which may give them an advantage in life outcomes, and which is likely to become both widely available and inexpensive over time.
In addition, periodic mandatory periods of going "offline" would force us to reclaim our "naked human abilities." Failing to take such precautions could leave us extremely vulnerable to an unexpected failure of our technology.
A recent posting here looked at a program for developing a platform for "rebooting civilisation" in case of catastrophic failure. A simple BCI system would make it much easier to build and maintain those essential machines of basic civilisation. BCIs and wearable computers / data archives could be stored in secure caches, safe from natural and man-made disaster. A primitive version of that idea was presented in the Niven-Pournelle SF novel "Lucifer's Hammer," where the astrophysicist character stored a precious collection of reference books in a safe and secure cache -- the books were later used to help reboot civilisation. You can imagine how much more effective full sensory neurofeedback BCI archiving would be.
It may seem a long distance from the Stanford research above to the lifelong cyborg existence described. But it is mostly a matter of engineering and experimentation. When Steve Jobs, Steve Wozniak, Bill Gates, Ted Hoff, Robert Noyce, and so many others laid the foundation for cheap, ubiquitous computing, the djinn was already out of the bottle.
Labels: augmented reality, cyborgs, developmental windows, neurofeedback