The following section (pages 35-42) of The Element presents a fascinating, if disconcerting, look at the origins and uses of intelligence testing:
Another thing I do when I speak to groups is to ask people to rate their intelligence on a 1-to-10 scale, with 10 being the top. Typically, one or two people will rate themselves a 10. When these people raise their hands, I suggest that they go home; they have more important things to do than listen to me.
Beyond this, I’ll get a sprinkling of 9s and a heavier concentration of 8s. Invariably, though, the bulk of any audience puts itself at 7 or 6. The responses decline from there, though I admit I never actually complete the survey. I stop at 2, preferring to save anyone who would actually claim an intelligence level of 1 the embarrassment of acknowledging it in public. Why do I always get the bell-shaped curve? I believe it is because we’ve come to take for granted certain ideas about intelligence.
What’s interesting is that most people do put their hands up and rate themselves on this question. They don’t seem to see any problem with the question itself and are happy to put themselves somewhere on the scale. Only a few have challenged the form of the question and asked what I mean by intelligence. I think that’s what everyone should do. I’m convinced that taking the definition of intelligence for granted is one of the main reasons why so many people underestimate their true intellectual abilities and fail to find their Element.
This commonsense view goes something like this: We are all born with a fixed amount of intelligence. It’s a trait, like blue or green eyes, or long or short limbs. Intelligence shows itself in certain types of activity, especially in math and our use of words. It’s possible to measure how much intelligence we have through pencil-and-paper tests, and to express this as a numerical grade. That’s it.
Put as bluntly as this, I trust this definition of intelligence sounds as questionable as it is. But essentially this definition runs through much of Western culture, and a good bit of Eastern culture as well. It is at the heart of our education systems and underpins a good deal of the multibillion-dollar testing industries that feed off public education throughout the world. It’s at the heart of the idea of academic ability, dominates college entrance examinations, underpins the hierarchy of subjects in education, and stands as the foundation for the whole idea of IQ.
This way of thinking about intelligence has a long history in Western culture and dates back at least to the days of the great Greek philosophers, Aristotle and Plato. Its most recent flowering was in the great period of intellectual advances of the seventeenth and eighteenth centuries that we know as the Enlightenment. Philosophers and scholars aimed to establish a firm basis for human knowledge and to end the superstitions and mythologies about human existence that they believed had clouded the minds of previous generations.
One of the pillars of this new movement was a firm belief in the importance of logic and critical reasoning. Philosophers argued that we should not accept as knowledge anything that could not be proved through logical reasoning, especially in words and mathematical proofs. The problem was where to begin this process without taking anything for granted that might be logically questionable. The famous conclusion of the philosopher René Descartes was that the only thing that he could take for granted was his own existence; otherwise, he couldn’t have these thoughts in the first place. His thesis was, “I think, therefore I am.”
The other pillar of the Enlightenment was a growing belief in the importance of evidence in support of scientific ideas – evidence that one could observe through the human senses – rather than superstition or hearsay. These two pillars of reason and evidence became the foundations of an intellectual revolution that transformed the outlook and achievements of the Western world. It led to the growth of the scientific method and an avalanche of insights, analysis, and classification of ideas, objects, and phenomena that have extended the reach of human knowledge to the depths of the earth and to the far ends of the known universe. It led, too, to the spectacular advances in practical technology that gave rise to the Industrial Revolution and to the supreme domination of these forms of thought in scholarship, in politics, in commerce, and in education.
The influence of logic and evidence extended beyond the ‘hard’ sciences. They also shaped the formative theories in the human sciences, including psychology, sociology, anthropology, and medicine. As public education grew in the nineteenth and twentieth centuries, it too was based on these newly dominant ideas about knowledge and intelligence. As mass education grew to meet the growing demands of the Industrial Revolution, there was also a need for quick and easy forms of selection and assessment. The new science of psychology was on hand with new theories about how intelligence could be tested and measured. For the most part, intelligence was defined in terms of verbal and mathematical reasoning. These same processes were also used to quantify the results. The most significant idea in the middle of all this was IQ.
So it is that we came to think of real intelligence in terms of logical analysis: believing that rationalist forms of thinking were superior to feeling and emotion, and that the ideas that really count can be conveyed in words or through mathematical expressions. In addition, we believed that we could quantify intelligence and rely on IQ tests and standardized tests like the SAT to identify who among us is truly intelligent and deserving of exalted treatment.
Ironically, Alfred Binet, one of the creators of the IQ test, intended the test to serve precisely the opposite function. In fact, he originally designed it (on commission from the French government) exclusively to identify children with special needs so they could get appropriate forms of schooling. He never intended it to identify degrees of intelligence or ‘mental worth.’ In fact, Binet noted that the scale he created ‘does not permit the measure of intelligence, because intellectual qualities are not superposable, and therefore cannot be measured as linear surfaces are measured.’
Nor did he ever intend it to suggest that a person could not become more intelligent over time. ‘Some recent thinkers,’ he said, ‘[have affirmed] that an individual’s intelligence is a fixed quantity, a quantity that cannot be increased. We must protest and react against this brutal pessimism; we must try to demonstrate that it is founded on nothing.’
Still, some educators and psychologists took – and continue to take – IQ numbers to absurd lengths. In 1916, Lewis Terman of Stanford University published a revision of Binet’s IQ test. Known as the Stanford-Binet test, now in its fifth version, it is the basis of the modern IQ test. It is interesting to note, though, that Terman had a sadly extreme view of human capacity. These are his words, from the textbook The Measurement of Intelligence: ‘Among laboring men and servant girls there are thousands like them [feebleminded]. They are the world’s “hewers of wood and drawers of water.” And yet, as far as intelligence is concerned, the tests have told the truth . . . No amount of school instruction will ever make them intelligent voters or capable citizens in the true sense of the word.’
Terman was an active player in one of the darker stages of education and public policy, one that there is a good chance you are unaware of because most historians choose to leave it unmentioned, the way they might a crazy aunt or an unfortunate drinking incident in college. The eugenics movement sought to weed out entire sectors of the population by arguing that such traits as criminality and pauperism were hereditary, and that it was possible to identify these traits through intelligence testing. Perhaps most appalling among the movement’s claims was the notion that entire ethnic groups, including southern Europeans, Jews, Africans, and Latinos, fell into such categories. ‘The fact that one meets this type with such frequency among Indians, Mexicans, and Negroes suggests quite forcibly that the whole question of racial differences in mental traits will have to be taken up anew and by experimental methods,’ Terman wrote.
‘Children of this group should be segregated in special classes and be given instruction which is concrete and practical. They cannot master abstractions, but they can often be made efficient workers, able to look out for themselves. There is no possibility at present of convincing society that they should not be allowed to reproduce, although from a eugenic point of view they constitute a grave problem because of their unusually prolific breeding.’
The movement succeeded in lobbying for the passage of involuntary sterilization laws in thirty American states. This meant that the state could sterilize people who fell below a particular IQ threshold without their having any say in the matter. That each state eventually repealed these laws is a testament to common sense and compassion. That the laws existed in the first place is a frightening indication of how dangerously limited any standardized test is as a measure of intelligence and of a person’s capacity to contribute to society.
IQ tests can even be a matter of life and death. A criminal who commits a capital offense is not subject to the death penalty if his IQ is below seventy. However, average IQ scores rise steadily over the course of a generation (by as much as twenty-five points), so the tests are renormed every fifteen to twenty years to keep the mean score at one hundred. Against freshly reset norms, measured scores drop; as the norms age, the same performance earns a steadily higher score. Therefore, someone who commits a capital offense may be more likely to be put to death at the end of a norming cycle than at the beginning. That’s giving a single test an awful lot of responsibility.
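To make that arithmetic concrete, here is a minimal sketch in Python. It assumes the commonly cited Flynn-effect drift of roughly 0.3 IQ points per year; the figures, names, and the defendant are all hypothetical, not drawn from any actual test manual or case.

```python
# Illustrative sketch only: how the age of a test's norms can move a
# borderline score across the legal threshold. Assumes a Flynn-effect
# drift of roughly 0.3 IQ points per year; all figures are hypothetical.

FLYNN_DRIFT_PER_YEAR = 0.3  # assumed average rise in population scores
LEGAL_THRESHOLD = 70        # an IQ below this bars the death penalty

def measured_iq(underlying_iq: float, years_since_renorming: float) -> float:
    """Score the same underlying ability against norms of a given age.

    As the population improves relative to old norms, identical
    performance earns a progressively higher measured score.
    """
    return underlying_iq + FLYNN_DRIFT_PER_YEAR * years_since_renorming

# The same borderline defendant, tested at different points in a cycle:
for years in (0, 10, 20):
    score = measured_iq(67, years)
    print(f"norms {years:2d} years old -> measured IQ {score:.1f}, "
          f"eligible for execution: {score >= LEGAL_THRESHOLD}")
```

On these assumptions, the same person scores 67 against fresh norms and 73 against twenty-year-old norms; nothing about the person has changed, only the age of the yardstick.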
People can also improve their scores through study and practice. I recently read about an inmate who had spent ten years in prison on a life sentence (he wasn’t the trigger man, but he’d been involved in a robbery in which someone died). During his incarceration, he took a series of courses. When he was retested, his IQ had risen more than ten points – suddenly making him eligible for execution.
Of course, most of us won’t ever be in a situation where we’re sterilized or given a lethal injection because of our IQ scores. But looking at these extremes allows us to ask some important questions, namely, What are these numbers? and, What do they truly say about our intelligence? The answer is that the numbers largely indicate a person’s ability to perform on a test of certain sorts of mathematical and verbal reasoning. In other words, they measure some types of intelligence, not the whole of intelligence. And, as noted above, the baseline keeps shifting to accommodate improvements in the population as a whole over time.
Our fascination with IQ is a corollary to our fascination with – and great dependence on – standardized testing in our schools. Teachers spend large chunks of every school year preparing their students for statewide tests that will determine everything from the child’s placement in classes the following year to the amount of funding the school will receive. These tests of course do nothing to take the child’s (or the school’s) special skills and needs into consideration, yet they have a tremendous say in the child’s scholastic fate.
The standardized test that currently has the most impact on a child’s academic future in America is the SAT. Interestingly, Carl Brigham, the inventor of the SAT, was also a eugenicist. He conceived the test for the military and, to his credit, disowned it five years later, rejecting eugenics at the same time. However, by this point, Harvard and other Ivy League schools had begun to use it as a measure of applicant acceptability. For nearly seven decades, most American colleges have used it (or the similar ACT) as an essential part of their screening processes, though some colleges are beginning to rely upon it less.
The SAT is in many ways the indicator for what is wrong with standardized tests: it measures only a certain kind of intelligence; it does so in an entirely impersonal way; it attempts to make common assumptions about the college potential of a hugely varied group of teenagers in one-size-fits-all fashion; and it drives high school juniors and seniors to spend hundreds of hours preparing for it at the expense of school study or the pursuit of other passions. John Katzman, founder of the Princeton Review, offers this stinging criticism: ‘What makes the SAT bad is that it has nothing to do with what kids learn in high school. As a result, it creates a sort of shadow curriculum that furthers the goals of neither educators nor students . . . The SAT has been sold as snake oil; it measured intelligence, verified high school GPA, and predicted college grades. In fact, it’s never done the first two at all, nor a particularly good job at the third.’
Yet students who don’t test well or who aren’t particularly strong at the kind of reasoning the SAT assesses can find themselves making compromises on their collegiate futures – all because we’ve come to accept that intelligence comes with a number. This notion is pervasive, and it extends well beyond academia. Remember the bell-shaped curve we discussed earlier? It presents itself every time I ask people how intelligent they think they are because we’ve come to define intelligence far too narrowly. We think we know the answer to the question, ‘How intelligent are you?’ The real answer, though, is that the question itself is the wrong one to ask.