The General Intelligence Factor
Originally published in Scientific American
No subject in psychology has provoked more intense public controversy than the study of human intelligence. From its beginning, research on how and why people differ in overall mental ability has fallen prey to political and social agendas that obscure or distort even the most well-established scientific findings. Journalists, too, often present a view of intelligence research that is exactly the opposite of what most intelligence experts believe. For these and other reasons, public understanding of intelligence falls far short of public concern about it. The IQ experts discussing their work in the public arena can feel as though they have fallen down the rabbit hole into Alice's Wonderland.
The debate over intelligence and intelligence testing focuses on the question of whether it is useful or meaningful to evaluate people according to a single major dimension of cognitive competence. Is there indeed a general mental ability we commonly call "intelligence," and is it important in the practical affairs of life? The answer, based on decades of intelligence research, is an unequivocal yes. No matter their form or content, tests of mental skills invariably point to the existence of a global factor that permeates all aspects of cognition. And this factor seems to have considerable influence on a person's practical quality of life. Intelligence as measured by IQ tests is the single most effective predictor known of individual performance at school and on the job. It also predicts many other aspects of well-being, including a person's chances of divorcing, dropping out of high school, being unemployed or having illegitimate children [see illustration].
By now the vast majority of intelligence researchers take these findings for granted. Yet in the press and in public debate, the facts are typically dismissed, downplayed or ignored. This misrepresentation reflects a clash between a deeply felt ideal and a stubborn reality. The ideal, implicit in many popular critiques of intelligence research, is that all people are born equally able and that social inequality results only from the exercise of unjust privilege. The reality is that Mother Nature is no egalitarian. People are in fact unequal in intellectual potential--and they are born that way, just as they are born with different potentials for height, physical attractiveness, artistic flair, athletic prowess and other traits. Although subsequent experience shapes this potential, no amount of social engineering can make individuals with widely divergent mental aptitudes into intellectual equals.
Of course, there are many kinds of talent, many kinds of mental ability and many other aspects of personality and character that influence a person's chances of happiness and success. The functional importance of general mental ability in everyday life, however, means that without onerous restrictions on individual liberty, differences in mental competence are likely to result in social inequality. This gulf between equal opportunity and equal outcomes is perhaps what pains Americans most about the subject of intelligence. The public intuitively knows what is at stake: when asked to rank personal qualities in order of desirability, people put intelligence second only to good health. But with a more realistic approach to the intellectual differences between people, society could better accommodate these differences and minimize the inequalities they create.
Early in the century-old study of intelligence, researchers discovered that all tests of mental ability ranked individuals in about the same way. Although mental tests are often designed to measure specific domains of cognition--verbal fluency, say, or mathematical skill, spatial visualization or memory--people who do well on one kind of test tend to do well on the others, and people who do poorly generally do so across the board. This overlap, or intercorrelation, suggests that all such tests measure some global element of intellectual ability as well as specific cognitive skills. In recent decades, psychologists have devoted much effort to isolating that general factor, which is abbreviated g, from the other aspects of cognitive ability gauged in mental tests.
The statistical extraction of g is performed by a technique called factor analysis. Introduced at the turn of the century by British psychologist Charles Spearman, factor analysis determines the minimum number of underlying dimensions necessary to explain a pattern of correlations among measurements. A general factor suffusing all tests is not, as is sometimes argued, a necessary outcome of factor analysis. No general factor has been found in the analysis of personality tests, for example; instead the method usually yields at least five dimensions (neuroticism, extraversion, conscientiousness, agreeableness and openness to ideas), each relating to different subsets of tests. But, as Spearman observed, a general factor does emerge from analysis of mental ability tests, and leading psychologists, such as Arthur R. Jensen of the University of California at Berkeley and John B. Carroll of the University of North Carolina at Chapel Hill, have confirmed his findings in the decades since. Partly because of this research, most intelligence experts now use g as the working definition of intelligence.
The general factor explains most differences among individuals in performance on diverse mental tests. This is true regardless of what specific ability a test is meant to assess, regardless of the test's manifest content (whether words, numbers or figures) and regardless of the way the test is administered (in written or oral form, to an individual or to a group). Tests of specific mental abilities do measure those abilities, but they all reflect g to varying degrees as well. Hence, the g factor can be extracted from scores on any diverse battery of tests.
Conversely, because every mental test is "contaminated" by the effects of specific mental skills, no single test measures only g. Even the scores from IQ tests--which usually combine about a dozen subtests of specific cognitive skills--contain some "impurities" that reflect those narrower skills. For most purposes, these impurities make no practical difference, and g and IQ can be used interchangeably. But if they need to, intelligence researchers can statistically separate the g component of IQ. The ability to isolate g has revolutionized research on general intelligence, because it has allowed investigators to show that the predictive value of mental tests derives almost entirely from this global factor rather than from the more specific aptitudes measured by intelligence tests.
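To make the factor-analytic logic concrete, here is a minimal Python sketch of pulling a general factor out of a small battery of tests. The test names and the correlation matrix are invented for illustration only, and a first principal component is used as a rough stand-in for g; actual studies rely on more careful factor models (principal-axis and hierarchical analyses) than this.

```python
# Illustrative only: extract a "general factor" from a battery of mental tests.
# The correlation matrix below is invented; it simply shows the "positive
# manifold" (all tests positively intercorrelated) that Spearman observed.
import numpy as np

tests = ["vocabulary", "arithmetic", "spatial", "memory"]

# Hypothetical intercorrelations among four diverse tests.
R = np.array([
    [1.00, 0.55, 0.45, 0.40],
    [0.55, 1.00, 0.50, 0.35],
    [0.45, 0.50, 1.00, 0.30],
    [0.40, 0.35, 0.30, 1.00],
])

# Eigendecomposition of the correlation matrix; the leading eigenvector,
# scaled by the square root of its eigenvalue, gives each test's loading
# on the first (general) factor.
eigvals, eigvecs = np.linalg.eigh(R)
g_loading = np.abs(eigvecs[:, -1] * np.sqrt(eigvals[-1]))  # eigenvector sign is arbitrary

for name, loading in zip(tests, g_loading):
    print(f"{name:12s} g-loading ~ {loading:.2f}")

# Share of total test variance captured by the general factor.
print(f"variance explained by g ~ {eigvals[-1] / eigvals.sum():.0%}")
```

Every test loads substantially on the single factor even though each was "designed" to tap a different skill, which is the pattern the article describes: specific tests measure their specific abilities, but all of them reflect g to some degree.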
In addition to quantifying individual differences, tests of mental abilities have also offered insight into the meaning of intelligence in everyday life. Some tests and test items are known to correlate better with g than others do. In these items the "active ingredient" that demands the exercise of g seems to be complexity. More complex tasks require more mental manipulation, and this manipulation of information--discerning similarities and inconsistencies, drawing inferences, grasping new concepts and so on--constitutes intelligence in action. Indeed, intelligence can best be described as the ability to deal with cognitive complexity.
This description coincides well with lay perceptions of intelligence. The g factor is especially important in just the kind of behaviors that people usually associate with "smarts": reasoning, problem solving, abstract thinking, quick learning. And whereas g itself describes mental aptitude rather than accumulated knowledge, a person's store of knowledge tends to correspond with his or her g level, probably because that accumulation represents a previous adeptness in learning and in understanding new information. The g factor is also the one attribute that best distinguishes among persons considered gifted, average or retarded.
Several decades of factor-analytic research on mental tests have confirmed a hierarchical model of mental abilities. The evidence, summarized most effectively in Carroll's 1993 book, Human Cognitive Abilities, puts g at the apex in this model, with more specific aptitudes arrayed at successively lower levels: the so-called group factors, such as verbal ability, mathematical reasoning, spatial visualization and memory, are just below g, and below these are skills that are more dependent on knowledge or experience, such as the principles and practices of a particular job or profession.
Some researchers use the term "multiple intelligences" to label these sets of narrow capabilities and achievements. Psychologist Howard Gardner of Harvard University, for example, has postulated that eight relatively autonomous "intelligences" are exhibited in different domains of achievement. He does not dispute the existence of g but treats it as a specific factor relevant chiefly to academic achievement and to situations that resemble those of school. Gardner does not believe that tests can fruitfully measure his proposed intelligences; without tests, no one can at present determine whether the intelligences are indeed independent of g (or each other). Furthermore, it is not clear to what extent Gardner's intelligences tap personality traits or motor skills rather than mental aptitudes.
Other forms of intelligence have been proposed; among them, emotional intelligence and practical intelligence are perhaps the best known. They are probably amalgams either of intellect and personality or of intellect and informal experience in specific job or life settings, respectively. Practical intelligence like "street smarts," for example, seems to consist of the localized knowledge and know-how developed with untutored experience in particular everyday settings and activities--the so-called school of hard knocks. In contrast, general intelligence is not a form of achievement, whether local or renowned. Instead the g factor regulates the rate of learning: it greatly affects the rate of return in knowledge to instruction and experience but cannot substitute for either.
Some critics of intelligence research maintain that the notion of general intelligence is illusory: that no such global mental capacity exists and that apparent "intelligence" is really just a by-product of one's opportunities to learn skills and information valued in a particular cultural context. True, the concept of intelligence and the way in which individuals are ranked according to this criterion could be social artifacts. But the fact that g is not specific to any particular domain of knowledge or mental skill suggests that g is independent of cultural content, including beliefs about what intelligence is. And tests of different social groups reveal the same continuum of general intelligence. This observation suggests either that cultures do not construct g or that they construct the same g. Both conclusions undercut the social artifact theory of intelligence.
Moreover, research on the physiology and genetics of g has uncovered biological correlates of this psychological phenomenon. In the past decade, studies by teams of researchers in North America and Europe have linked several attributes of the brain to general intelligence. After taking into account gender and physical stature, brain size as determined by magnetic resonance imaging is moderately correlated with IQ (about 0.4 on a scale of 0 to 1). So is the speed of nerve conduction. The brains of bright people also use less energy during problem solving than do those of their less able peers. And various qualities of brain waves correlate strongly (about 0.5 to 0.7) with IQ: the brain waves of individuals with higher IQs, for example, respond more promptly and consistently to simple sensory stimuli such as audible clicks. These observations have led some investigators to posit that differences in g result from differences in the speed and efficiency of neural processing. If this theory is true, environmental conditions could influence g by modifying brain physiology in some manner.
Studies of so-called elementary cognitive tasks (ECTs), conducted by Jensen and others, are bridging the gap between the psychological and the physiological aspects of g. These mental tasks have no obvious intellectual content and are so simple that adults and most children can do them accurately in less than a second. In the most basic reaction-time tests, for example, the subject must react when a light goes on by lifting her index finger off a home button and immediately depressing a response button. Two measurements are taken: the number of milliseconds between the illumination of the light and the subject's release of the home button, which is called decision time, and the number of milliseconds between the subject's release of the home button and pressing of the response button, which is called movement time.
In this task, movement time seems independent of intelligence, but the decision times of higher-IQ subjects are slightly faster than those of people with lower IQs. As the tasks are made more complex, correlations between average decision times and IQ increase. These results further support the notion that intelligence equips individuals to deal with complexity and that its influence is greater in complex tasks than in simple ones.
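A bare-bones sketch of how such a reaction-time trial is scored appears below. The timestamps are invented for illustration; in a real experiment they would come from the testing apparatus, recorded in milliseconds from trial onset.

```python
# Illustrative scoring of decision time (DT) and movement time (MT) in a
# simple reaction-time task. Timestamps are invented, not real data.
from statistics import mean

# Each trial: (light_onset_ms, home_button_release_ms, response_press_ms)
trials = [
    (0, 310, 460),
    (0, 295, 455),
    (0, 330, 480),
]

decision_times = [release - onset for onset, release, _ in trials]   # apprehension/decision component
movement_times = [press - release for _, release, press in trials]   # motor component, ~unrelated to IQ

print("mean decision time (ms):", mean(decision_times))
print("mean movement time (ms):", mean(movement_times))
```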
The ECT-IQ correlations are comparable for all IQ levels, ages, genders and racial-ethnic groups tested. Moreover, studies by Philip A. Vernon of the University of Western Ontario and others have shown that the ECT-IQ overlap results almost entirely from the common g factor in both measures. Reaction times do not reflect differences in motivation or strategy or the tendency of some individuals to rush through tests and daily tasks--that penchant is a personality trait. They actually seem to measure the speed with which the brain apprehends, integrates and evaluates information. Research on ECTs and brain physiology has not yet identified the biological determinants of this processing speed. These studies do suggest, however, that g is as reliable and global a phenomenon at the neural level as it is at the level of the complex information processing required by IQ tests and everyday life.
The existence of biological correlates of intelligence does not necessarily mean that intelligence is dictated by genes. Decades of genetics research have shown, however, that people are born with different hereditary potentials for intelligence and that these genetic endowments are responsible for much of the variation in mental ability among individuals. Last spring an international team of scientists headed by Robert Plomin of the Institute of Psychiatry in London announced the discovery of the first gene linked to intelligence. Of course, genes have their effects only in interaction with environments, partly by enhancing an individual's exposure or sensitivity to formative experiences. Differences in general intelligence, whether measured as IQ or, more accurately, as g, are both genetic and environmental in origin--just as are all other psychological traits and attitudes studied so far, including personality, vocational interests and societal attitudes. This is old news among the experts. They have, however, been startled by more recent discoveries.
One is that the heritability of IQ rises with age--that is to say, the extent to which genetics accounts for differences in IQ among individuals increases as people get older. Studies comparing identical and fraternal twins, published in the past decade by a group led by Thomas J. Bouchard, Jr., of the University of Minnesota and other scholars, show that about 40 percent of IQ differences among preschoolers stems from genetic differences but that heritability rises to 60 percent by adolescence and to 80 percent by late adulthood. With age, differences among individuals in their developed intelligence come to mirror more closely their genetic differences. It appears that the effects of environment on intelligence fade rather than grow with time. In hindsight, perhaps this should have come as no surprise. Young children have the circumstances of their lives imposed on them by parents, schools and other agents of society, but as people get older they become more independent and tend to seek out the life niches that are most congenial to their genetic proclivities.
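A rough sense of how twin data yield such percentages comes from the classical Falconer approximation, sketched below in Python. The twin correlations used here are invented, hypothetical values, not figures from the Minnesota studies, and modern behavioral-genetic analyses use more elaborate structural models.

```python
# Back-of-the-envelope Falconer estimates from twin correlations.
# r_mz and r_dz below are invented for illustration, NOT data from any
# actual study; real analyses fit structural-equation models.
def falconer(r_mz: float, r_dz: float) -> dict:
    """Classical approximations from identical (MZ) and fraternal (DZ) twin
    correlations: heritability h2, shared environment c2, and the remainder
    e2 (nonshared environment plus measurement error)."""
    h2 = 2 * (r_mz - r_dz)   # MZ twins share all segregating genes, DZ twins about half
    c2 = r_mz - h2           # equivalently 2*r_dz - r_mz
    e2 = 1 - r_mz
    return {"h2": h2, "c2": c2, "e2": e2}

# Hypothetical adult twin correlations consistent with high adult heritability.
print(falconer(r_mz=0.85, r_dz=0.45))   # -> h2 = 0.80, c2 = 0.05, e2 = 0.15
```

The same arithmetic also illustrates the next point: when the fraternal-twin correlation is not much more than half the identical-twin correlation, the estimated shared-environment term c2 shrinks toward zero.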
A second big surprise for intelligence experts was the discovery that environments shared by siblings have little to do with IQ. Many people still mistakenly believe that social, psychological and economic differences among families create lasting and marked differences in IQ. Behavioral geneticists refer to such environmental effects as "shared" because they are common to siblings who grow up together. Research has shown that although shared environments do have a modest influence on IQ in childhood, their effects dissipate by adolescence. The IQs of adopted children, for example, lose all resemblance to those of their adoptive family members and become more like the IQs of the biological parents they have never known. Such findings suggest that siblings either do not share influential aspects of the rearing environment or do not experience them in the same way. Much behavioral genetics research currently focuses on the still mysterious processes by which environments make members of a household less alike.
Although the evidence of genetic and physiological correlates of g argues powerfully for the existence of global intelligence, it has not quelled the critics of intelligence testing. These skeptics argue that even if such a global entity exists, it has no intrinsic functional value and becomes important only to the extent that people treat it as such: for example, by using IQ scores to sort, label and assign students and employees. Such concerns over the proper use of mental tests have prompted a great deal of research in recent decades. This research shows that although IQ tests can indeed be misused, they measure a capability that does in fact affect many kinds of performance and many life outcomes, independent of the tests' interpretations or applications. Moreover, the research shows that intelligence tests measure the capability equally well for all native-born English-speaking groups in the U.S.
If we consider that intelligence manifests itself in everyday life as the ability to deal with complexity, then it is easy to see why it has great functional or practical importance. Children, for example, are regularly exposed to complex tasks once they begin school. Schooling requires above all that students learn, solve problems and think abstractly. That IQ is quite a good predictor of differences in educational achievement is therefore not surprising. When scores on both IQ and standardized achievement tests in different subjects are averaged over several years, the two averages correlate as highly as different IQ tests from the same individual do. High-ability students also master material at many times the rate of their low-ability peers. Many investigations have helped quantify this discrepancy. For example, a 1969 study done for the U.S. Army by the Human Resources Research Office found that enlistees in the bottom fifth of the ability distribution required two to six times as many teaching trials and prompts as did their higher-ability peers to attain minimal proficiency in rifle assembly, monitoring signals, combat plotting and other basic military tasks. Similarly, in school settings the ratio of learning rates between "fast" and "slow" students is typically five to one.
The scholarly content of many IQ tests and their strong correlations with educational success can give the impression that g is only a narrow academic ability. But general mental ability also predicts job performance, and in more complex jobs it does so better than any other single personal trait, including education and experience. The army's Project A, a seven-year study conducted in the 1980s to improve the recruitment and training process, found that general mental ability correlated strongly with both technical proficiency and soldiering in the nine specialties studied, among them infantry, military police and medical specialist. Research in the civilian sector has revealed the same pattern. Furthermore, although the addition of personality traits such as conscientiousness can help hone the prediction of job performance, the inclusion of specific mental aptitudes such as verbal fluency or mathematical skill rarely does. The predictive value of mental tests in the work arena stems almost entirely from their measurement of g, and that value rises with the complexity and prestige level of the job.
Half a century of military and civilian research has converged to draw a portrait of occupational opportunity along the IQ continuum. Individuals in the top 5 percent of the adult IQ distribution (above IQ 125) can essentially train themselves, and few occupations are beyond their reach mentally. Persons of average IQ (between 90 and 110) are not competitive for most professional and executive-level work but are easily trained for the bulk of jobs in the American economy. In contrast, adults in the bottom 5 percent of the IQ distribution (below 75) are very difficult to train and are not competitive for any occupation on the basis of ability. Serious problems in training low-IQ military recruits during World War II led Congress to ban enlistment from the lowest 10 percent (below 80) of the population, and no civilian occupation in modern economies routinely recruits its workers from that range. Current military enlistment standards exclude any individual whose IQ is below about 85.
The importance of g in job performance, as in schooling, is related to complexity. Occupations differ considerably in the complexity of their demands, and as that complexity rises, higher g levels become a bigger asset and lower g levels a bigger handicap. Similarly, everyday tasks and environments also differ significantly in their cognitive complexity. The degree to which a person's g level will come to bear on daily life depends on how much novelty and ambiguity that person's everyday tasks and surroundings present and how much continual learning, judgment and decision making they require. As gamblers, employers and bankers know, even marginal differences in rates of return will yield big gains--or losses--over time. Hence, even small differences in g among people can exert large, cumulative influences across social and economic life.
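The compounding analogy is easy to make concrete. In the purely illustrative sketch below, the two "rates of return" are invented numbers standing in for any small, persistent advantage in learning or decision making; they are not quantities measured in intelligence research.

```python
# Purely illustrative arithmetic: a small, constant edge in a "rate of
# return" produces a large cumulative gap over many periods.
years = 30
low, high = 1.03, 1.05
print(f"after {years} periods: {low**years:.1f}x vs {high**years:.1f}x")
# -> roughly 2.4x vs 4.3x, a gap that widens every period
```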
In my own work, I have tried to synthesize the many lines of research that document the influence of IQ on life outcomes. As the illustration shows, the odds of various kinds of achievement and social pathology change systematically across the IQ continuum, from borderline mentally retarded (below 70) to intellectually gifted (above 130). Even in comparisons of those of somewhat below average (between 76 and 90) and somewhat above average (between 111 and 125) IQs, the odds for outcomes having social consequence are stacked against the less able. Young men somewhat below average in general mental ability, for example, are more likely to be unemployed than men somewhat above average. The lower-IQ woman is four times more likely to bear illegitimate children than the higher-IQ woman; among mothers, she is eight times more likely to become a chronic welfare recipient. People somewhat below average are 88 times more likely to drop out of high school, seven times more likely to be jailed and five times more likely as adults to live in poverty than people of somewhat above-average IQ. Below-average individuals are 50 percent more likely to be divorced than those in the above-average category.
These odds diverge even more sharply for people with bigger gaps in IQ, and the mechanisms by which IQ creates this divergence are not yet clearly understood. But no other single trait or circumstance yet studied is so deeply implicated in the nexus of bad social outcomes--poverty, welfare, illegitimacy and educational failure--that entraps many low-IQ individuals and families. Even the effects of family background pale in comparison with the influence of IQ. As shown most recently by Charles Murray of the American Enterprise Institute in Washington, D.C., the divergence in many outcomes associated with IQ level is almost as wide among siblings from the same household as it is for strangers of comparable IQ levels. And siblings differ a lot in IQ--on average, by 12 points, compared with 17 for random strangers.
An IQ of 75 is perhaps the most important threshold in modern life. At that level, a person's chances of mastering the elementary school curriculum are only 50-50, and he or she will have a hard time functioning independently without considerable social support. Individuals and families who are only somewhat below average in IQ face risks of social pathology that, while lower, are still significant enough to jeopardize their well-being. High-IQ individuals may lack the resolve, character or good fortune to capitalize on their intellectual capabilities, but socioeconomic success in the postindustrial information age is theirs to lose.
The foregoing findings on g's effects have been drawn from studies conducted under a limited range of circumstances--namely, the social, economic and political conditions prevailing now and in recent decades in developed countries that allow considerable personal freedom. It is not clear whether these findings apply to populations around the world, to the extremely advantaged and disadvantaged in the developing world or, for that matter, to people living under restrictive political regimes. No one knows what research under different circumstances, in different eras or with different populations might reveal.
But we do know that, wherever freedom and technology advance, life is an uphill battle for people who are below average in proficiency at learning, solving problems and mastering complexity. We also know that the trajectories of mental development are not easily deflected. Individual IQ levels tend to remain unchanged from adolescence onward, and despite strenuous efforts over the past half a century, attempts to raise g permanently through adoption or educational means have failed. If there is a reliable, ethical way to raise or equalize levels of g, no one has found it.
Some investigators have suggested that biological interventions, such as dietary supplements of vitamins, may be more effective than educational ones in raising g levels. This approach is based in part on the assumption that improved nutrition has caused the puzzling rise in average levels of both IQ and height in the developed world during this century. Scientists are still hotly debating whether the gains in IQ actually reflect a rise in g or are caused instead by changes in less critical, specific mental skills. Whatever the truth may be, the differences in mental ability among individuals remain, and the conflict between equal opportunity and equal outcome persists. Only by accepting these hard truths about intelligence will society find humane solutions to the problems posed by the variations in general mental ability.
The Author
LINDA S. GOTTFREDSON is professor of educational studies at the University of Delaware, where she has been since 1986, and co-directs the Delaware-Johns Hopkins Project for the Study of Intelligence and Society. She trained as a sociologist, and her earliest work focused on career development. "I wasn't interested in intelligence per se," Gottfredson says. "But it suffused everything I was studying in my attempts to understand who was getting ahead." This "discovery of the obvious," as she puts it, became the focus of her research. In the mid-1980s, while at Johns Hopkins University, she published several influential articles describing how intelligence shapes vocational choice and self-perception. Gottfredson also organized the 1994 treatise "Mainstream Science on Intelligence," an editorial with more than 50 signatories that first appeared in the Wall Street Journal in response to the controversy surrounding publication of The Bell Curve. Gottfredson is the mother of identical twins--a "mere coincidence," she says, "that's always made me think more about the nature and nurture of intelligence." The girls, now 16, follow Gottfredson's Peace Corps experience of the 1970s by joining her each summer for volunteer construction work in the villages of Nicaragua.
This "full excerpt" is provided as background reading, for anyone who wants to better understand the scientific basis for intelligence research. It is hosted at the U. of Toronto Psychology Department, along with a wide range of links and reprints dealing with many aspects of human intelligence -- both real and fanciful. Thanks to Professor E. Reingold at U of T.
The Al Fin blog is focused more on the prospects for augmenting human intelligence, than in assessing it. But there is a mountain of ignorance and obfuscation that has been put in the way of most interested observers -- by academics and intellectuals who are threatened by what is being learned in the intelligence field.
If we are to ever be in a position to improve the IQ of most persons for the sake of creating a better world, we must confront the reality of human intelligence with open minds and open eyes. Anything less is to doom the future inhabitants of Earth to lifelong stupidity and idiocracy.
Labels: IQ
3 Comments:
Before my comment about the article, an observation that the title link to this post appears to be missing.
The nature versus nurture aspect of IQ has long been debated. My personal bias, based on my own observations, is that nature is far more important. While educational opportunities [including travel and social interactions] play a significant role in specific skills, intelligence manifests itself in the ability to learn, regardless of other learned skills.
One can see this easily in young children. Some children are quick mimics who master skills and social interactions with ease. Other children appear lost in a fog and learn only through constant repetition. Those differences carry over to language, music, art, science, and other critical social intelligences.
I agree that intelligence is just one natural variable... but it is a natural variable.
When people talk about emotional intelligence, are they not really talking about executive function? It seems to me that emotional intelligence as something separate and distinct from executive function is wishy-washy and, therefore, spurious.
Thanks for your comments Bruce and Kurt.
This topic is so fascinating because neither genes nor environment can operate alone. Instead they function as a dual whirlwind. The politically correct social sciences, in particular, grossly neglect the genes. Funding agencies of government and private foundations take their cues from the PC Thought Police, so vital research on genes and cognition goes unfunded.
Emotional intelligence is poorly defined in terms of brain activity. Executive function centers on the prefrontal cortex and decision making / judgment. There is considerable overlap in terms of the behaviours that each is supposed to influence.
"During times of universal deceit, telling the truth becomes a revolutionary act" -- George Orwell