Morons in Africa
In 1994, with Richard Herrnstein, Charles Murray brought out The Bell Curve, subtitled "Intelligence and Class Structure in American Life." It sold well and soon came out in paperback with Simon & Schuster. Indeed, the acquisitions editor at S&S who bought The Bell Curve next bought my bestseller, Lies My Teacher Told Me. I know this because she bragged to me about having done so. She wanted me to infer that she was a big-league acquisitions editor, worthy of my book, since she had acquired Murray's book. I had already bought The Bell Curve in hardbound and was teaching a course that considered it at length; her acquisition did not excite me as much as she'd hoped.
The Bell Curve makes an astonishing claim about the 700,000,000 people in Africa (its approximate population when the book came out -- now nearing 900,000,000). Herrnstein and Murray (hereafter "Murray") cite studies, particularly by Richard Lynn, showing "the median black African IQ to be 75, approximately 1.7 standard deviations below the U.S. overall population average, about 10 points lower than the current figure for American blacks." (Bell Curve, 289)
Probably most readers know that "normal" or "average" IQ is the range between 90 and 110. Herrnstein is right that 75 is about 1.7 standard deviations below the U.S. average. Some researchers consider 75-85 "mentally retarded." The Wechsler Adult Intelligence Scale (WAIS), the most widely used IQ test in the U.S., classifies 75 as "borderline." Such people have about a 50/50 chance of reaching high school, according to research cited by J.C. Loehlin, et al., Race Differences in Intelligence (San Francisco: W.H. Freeman, 1975). During and after World War II, the army rejected draftees who scored 75 or less. Today Social Security considers an IQ of 75, in conjunction with other factors, to support a finding that a claimant is disabled.
So Herrnstein and Murray claimed that the average African is borderline disabled in intelligence -- just below Forrest Gump, whose IQ was 76 (hence he could serve in the Army). According to The Bell Curve, African Americans have a higher average IQ than Africans because they benefit genetically from white admixture.
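The arithmetic behind these figures is easy to check. Assuming the conventional IQ scale (mean 100, standard deviation 15 -- standard test parameters, not figures quoted from the book itself), a score of 75 sits 25/15, about 1.7 standard deviations, below the U.S. mean; and if a population's median really were 75, then by definition half of that population would score below Forrest Gump's 76. A quick sketch:

```python
from statistics import NormalDist

# Conventional IQ scale: mean 100, standard deviation 15
MEAN, SD = 100, 15

# How far below the U.S. mean is a score of 75, in standard deviations?
z = (MEAN - 75) / SD
print(f"75 is {z:.2f} SD below the U.S. mean")  # 1.67 -- the book's "about 1.7"

# If a population's scores were normally distributed around a median of 75
# (same SD of 15), what fraction would score at or below Gump's 76?
claimed_africa = NormalDist(mu=75, sigma=15)
print(f"fraction at or below 76: {claimed_africa.cdf(76):.2f}")  # 0.53
```

The second figure is the point of the Forrest Gump comparison: a claimed median of 75 entails that more than half the population scores at or below 76.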
When The Bell Curve came out, I had never been to Africa. Since then, I have (twice). In November of 2003, I heard Charles Murray give a book talk about a new book, Human Accomplishment. (It tries to explain the clustering of "geniuses," or major contributors to culture, in time and space.) I asked this question about his earlier work:
I have read and taught from The Bell Curve. In it, as you know, you say that African Americans have IQs 15 points lower than white Americans, while black Africans have IQs 25 points lower than whites, averaging 75, which is borderline between dull normal and what used to be termed moron. Since then, I traveled to Africa. Now, borderline morons are noticeable, you know? And I just didn't find that the average person in Guinea, the country where I was, was a borderline moron. So my question is: have you been to Africa? And if so, did you find the average African to be 25 points below what we consider a "normal" IQ? And if not, how do you reconcile that with your claim in The Bell Curve?
Murray replied that further studies of IQ in Africa since The Bell Curve have further confirmed his conclusion that they average 25 or even 30 points lower. But they are not lower in what he called "social IQ," which he defined as the ability to function in society.
I felt like a giant hole had opened up before my feet, right in the bookstore. Neither "social IQ" nor "social intelligence" appears in The Bell Curve's index nor, to my memory, in its text. On the contrary, in The Bell Curve Murray professes strong belief that IQ tests measure g, general cognitive ability, "whatever it is that people mean when they use the word [sic] intelligent or smart in ordinary language." (p. 22) Relying on Charles Spearman, who invented important statistical measures around 1900, Murray posited that g is "a general capacity for inferring and applying relationships drawn from experience." Since other questioners waited behind me for the microphone, I did not reply. However, "social IQ" represented a dramatic retreat from his reliance on g.
In Africa, I saw people flying airplanes, driving cars, running stores, teaching school -- performing, in short, the range of occupations people do in the United States. These people were Africans. People with borderline intelligence could not perform many of these tasks. Furthermore, all of them required "a general capacity for inferring and applying relationships drawn from experience." Arthur Jensen, who died last month, likewise emphasized g, which he likewise termed "general intelligence."
Conversely, if Murray is right to claim that Africans have normal "social intelligence" but are borderline in g or general intelligence, it follows logically that g must not relate to such tasks as flying airplanes, driving cars, running stores, and teaching school.
To put it another way, such a retreat by Murray invalidates his entire theory.
As well, Forrest Gumps are noticeable. I did not notice any in Guinea, Ghana, Burkina Faso, or Mali. Yet Murray holds that half of all Africans have lower intelligence than Forrest Gump!
The Bell Curve goes on to state, "IQ scores are stable." (p. 23) Famous research done as long ago as the 1960s shows otherwise. In Pygmalion in the Classroom, Robert Rosenthal and Lenore Jacobson showed that students in first grade in San Francisco gained an average of 27 points in IQ in one year. They had generated such gains simply by "leaking" to first-grade teachers the names of students in their classes who had supposedly excelled at a "Harvard Test of Inflected Acquisition," said to predict which youngsters were about to "spurt"! Murray would claim this could not be: "Changing cognitive ability through environmental interventions has proved to be extraordinarily difficult." (p. 314) Rosenthal and Jacobson had provoked enormous gains by a short and seemingly minor intervention, undercutting the claim of difficulty. Gains of 27 points in one year similarly undercut the claim of stability.
More recently, a French study summarized by David Kirp in 2006 looked at poor French kids who were adopted between ages 4 and 6. Their IQs had been tested in the orphanage and found to be in, shall we say, Herrnstein's "African range": they averaged 77, "nearly retarded." To the researchers, this reflected the abuse and neglect they had suffered as infants, then being "shunted from one foster home or institution to the next" as toddlers. I don't know what Murray would say. Nine years later, after being adopted by farmers and laborers, they averaged 88.5 -- real improvement. If adopted by middle-class families, they averaged 92. And if adopted by upper-class families, the children averaged 98, a 21-point gain. Kirp concludes, "that is a huge difference... and it can only be explained by pointing to variations in family circumstances."
Again, Murray would claim this could not be, since "cognitive ability," which he says IQ tests measure, "is substantially heritable, apparently no less than 40 percent and no more than 80 percent." Moreover, "whatever variation is left over for the environment to explain ..., relatively little can be traced to the shared environments created by families." (pp. 23, 108) Since the orphans hardly received new genes when they got new families, gains of 21 points undercut Murray's claims of high genetic heritability and low influence by families. Since the Rosenthal and Jacobson first-graders hardly received gene transplants during the school year, their 27-point gains also undercut the claim of high genetic heritability.
Last month, an article by David Dobbs in the New York Times, "If Smart Is the Norm, Stupidity Gets More Interesting," went at this problem from a different position. Dobbs points out that so far, trying to find the genes responsible for intelligence has proven futile.
Researchers have tried hard to find [the key to intelligence] in our genes. With the rise of inexpensive genome sequencing, they've analyzed the genomes of thousands of people, looking for gene variants that clearly affect intelligence, and have found a grand total of two. One determines the risk of Alzheimer's and affects IQ only late in life; the other seems to build a bigger brain, but on average it raises IQ by all of 1.29 points.
Dobbs goes on to note, "A report last year concluded that several hundred gene variants taken together seemed to account for 40 to 50 percent of the difference in intelligence among the 3,500 subjects in the study. But the authors couldn't tell which of these genes created any significant effect. And when they tried to use the genes to predict differences in intelligence, they could account for only 1 percent of the differences in IQ." He quotes Robert Plomin, a professor of behavioral genetics: "If it's this hard to find an effect of just 1 percent, what you're really showing is that the cup is 99 percent empty."
Dobbs's piece sparked this essay. I cannot here do justice to the complexities of the IQ literature, the expectancy effect, or the reasons why Africans might not score well on the WAIS. (I do touch on these matters in Chapter 2 of Teaching What Really Happened. Readers might also examine The Validity of Testing in Education and Employment, a 1993 report of the U.S. Civil Rights Commission.) However, the good news, I believe, is that environment makes a huge difference.
The bad news is, of course, that American children grow up in families that differ much more than the French families in the study Kirp summarized above. That's partly because the U.S. has more social stratification than any other industrialized nation. In their pathbreaking study, Meaningful Differences in the Everyday Experience of Young American Children, Betty Hart and Todd Risley show that "by the time they are four years old, children growing up in poor families have typically heard a total of 32,000,000 fewer spoken words than those whose parents are professionals." Answers to this gap include early Head Start programs, free preschools, and various kinds of advice and assistance to poor parents.
Otherwise, as it stands now, IQ differs dramatically by social class, which in turn allows the rich to cite these IQ differences as justification for our wide gaps in income and schooling, which then create further differences in IQ. Authors like Charles Murray play a crucial part in maintaining this circular process, which seems perfectly logical at any juncture until examined more carefully.
Copyright James Loewen