The New Science of Who We Are, How We Got Here, and Where We're Going
by Michael Muthukrishna
The following essay is adapted from excerpts of the author’s most recent book, published by the MIT Press.
Our society is obsessed with intelligence. What it is, what it predicts, how to measure it, how to improve it, who’s smarter, whether it’s caused by nature or nurture, and so on. This obsession is in part due to the implicit or explicit assumption that behind the laws of energy and innovation, and perhaps even the law of cooperation, is intelligence. That it is thanks to human intelligence that we controlled fire and fossil fuels, innovated amazing new technologies, and learned that it was better to work together than to fight and cheat. The truth is more complicated.
[Josh Riemer/Unsplash]
Intelligence is essential to who we are and how we got here. But our intelligence didn’t lead us to control energy, innovate fantastical technologies, and learn to cooperate. Rather, controlling energy, innovating fantastical technologies, and learning to cooperate led us to become more intelligent. Intelligence, in other words, is not the cause of the laws of life; like our cultural package, it is another result of these laws. It is more a product of our quickly evolving software than our slowly evolving hardware.
But intelligence is a tricky thing to define, let alone measure or study. Attempts to study intelligence have a long history.
Francis Galton was Charles Darwin’s cousin – both were grandchildren of the eminent Erasmus Darwin. Erasmus was an important historical figure in the English Midlands Enlightenment of the second half of the eighteenth century – a physician, philosopher, biologist, and poet who had died seven years before the birth of Charles (in 1809) and twenty years before the birth of Francis (in 1822). Nevertheless, his fame loomed large over their childhoods.
Francis’s almost religious obsession with the inheritance of intelligence and hereditary genius began after he read his cousin’s book, On the Origin of Species (1859). He was inspired and began to think through the implications for humans. He was perhaps also inspired by Charles and Erasmus – two apparent geniuses in the same family. Francis began to rigorously study what he called eugenics: the scientific study of eugenes – Greek for ‘well-born, of good stock, of noble race’.
Charles Darwin wasn’t much of a mathematician, writing in his autobiography: ‘I have deeply regretted that I did not proceed far enough at least to understand something of the great leading principles of mathematics, for men thus endowed seem to have an extra sense.’ He might have been thinking of the young Francis, who took a mathematical approach to studying almost everything, including evolutionary questions. Francis worked on the principle of ‘whenever you can, count’. One of his goals was disentangling the contributions of what he called ‘nature versus nurture’.
In 1865 Francis laid out his eugenic agenda in an article entitled ‘Hereditary Talent and Character’, writing that ‘The power of man over animal life, in producing whatever varieties of form he pleases, is enormously great . . . It is my desire to show . . . that mental qualities are equally under control’. He looked at biographies of ‘eminent people’ and started counting, and found that although ‘the children of men of genius are frequently of mediocre intellect’, ‘talent is transmitted by inheritance in a very remarkable degree’. But documenting this apparent inheritance of intelligence wasn’t sufficient; he wanted to apply it.
Surely, Francis thought, if we could breed animals for certain traits, could we not also increase the intelligence of a society? Or as he put it, ‘If a twentieth part of the cost and pains were spent in measures for the improvement of the human race that is spent on the improvement of the breed of horses and cattle, what a galaxy of genius might we not create!’ And so began Galton’s quest, which culminated in 1869 with Hereditary Genius, where he proposed policies such as arranging marriages between the ‘wealthy’ and the ‘distinguished’.
Galton promoted his vision for eugenics with a religious fervor, writing in his autobiography, ‘I take Eugenics very seriously, feeling that its principles ought to become one of the dominant motives in a civilized nation, much as if they were one of its religious tenets.’ Galton’s vision would become reality.
In the early twentieth century eugenics was a prominent scientific subject, supported in various forms by many of the leading scientists, politicians, and thought leaders of the time. This wasn’t some fringe idea; it was endorsed and implemented in different forms and with different levels of coercion. Eugenics didn’t cause the immediate revulsion many feel today. In fact, it was perceived as part of a progressive agenda – a way to make the world a better place.
‘Positive eugenics’ involved promoting ‘good stock’ (such as Francis’s aforementioned proposed arranged marriages). ‘Negative eugenics’ involved preventing the spread of ‘defective stock’ (such as restricting immigration and forced sterilization of those with ‘lower intelligence’).
The Nazis’ obsession with eugenics and their horrifying subsequent policies designed to achieve an Aryan ideal led to a decline of overt support for the idea. But eugenics lives on in various forms: population control, sex-selective abortions, prenatal gene testing, and most recently gene-based embryo selection.
Although Galton was able to convince his peers that ‘talent and character’ were transmitted in families and that eugenics was a progressive pursuit, he was ultimately unable to directly measure the thing he was trying to improve: intelligence. This must have been frustrating for a man who loved to count, but he lived long enough to see the first widely adopted intelligence test, published in 1905.
Measuring how clever you are
In the early twentieth century the French Ministry of Education wanted to measure how students compared to their same-age peers so that those who were arriéré – behind or backward – could be given special education. They tasked two psychologists – Alfred Binet and Théodore Simon – with developing a test. The Binet–Simon test, published in 1905 as ‘Méthodes nouvelles pour le diagnostic du niveau intellectuel des anormaux’ and later translated as ‘New Methods for the Diagnosis of the Intellectual Level of Subnormals’, was presented at a conference in Rome in 1905 under the title ‘Méthodes nouvelles pour diagnostiquer l’idiotie, l’imbécillité et la débilité mentale’ – ‘New Methods for Diagnosing the Idiot, the Imbecile, and the Moron’. It was the first IQ test.
The three terms that emerged in the literature were not meant to be insulting but rather scientific classifications grounded in Greek or Latin roots:
- ‘Moron’ came from the Greek moros, meaning ‘foolish’, and designated a child who might fail to pay attention or fail to answer some harder questions.
- ‘Imbecile’ came from the Latin imbecillus, meaning ‘weak’ or ‘feeble’, and designated a child who gave absurd responses, perhaps not correctly identifying an object.
- ‘Idiot’, the bottom rung, came from the Latin idiota, meaning ‘ignorant person’, and designated a child who didn’t know common objects, for example confusing a piece of chocolate with a piece of wood and trying to eat both or neither.
These words are offensive today, and as such are an example of what psychologist Steven Pinker has called the ‘euphemism treadmill’, whereby euphemisms (indirect, milder words for unpleasant referents) take on the stigma of what they refer to and become dysphemisms (derogatory words). The once benign term ‘retard’, which simply means ‘delayed’, became taboo for the same reason: ‘retard’ is related to the inoffensive word ‘tardy’, both from the Latin tardus, meaning ‘slow’.
The Binet–Simon test was later revised and translated into English by Stanford psychologist Lewis Terman as the Stanford–Binet Intelligence Scales. Terman, a prominent eugenicist in the pre-Nazi era when eugenics was still considered respectable, also adopted William Stern’s idea of standardizing the scores as an Intelligence Quotient, introducing the concept of IQ.
You may remember from high school that a quotient is the result of a division. Stern’s original quotient divided a child’s mental age, as measured by the test, by their chronological age and multiplied by 100. On modern tests, a score of 100 means average performance compared to same-age peers, and every fifteen points higher or lower represents one standard deviation from that average. So, for example, an IQ below 70 (two standard deviations below the mean) would put someone in roughly the bottom 2.5% of the population. An IQ of 145 (three standard deviations above the mean) would put someone in roughly the top 0.1% of the population.
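For readers who like to see the arithmetic, here is a minimal sketch (not from the essay) of how modern deviation-IQ scores map to population percentiles, assuming scores follow a normal distribution with mean 100 and standard deviation 15:

```python
# A minimal sketch (illustrative only): converting deviation-IQ scores
# to population percentiles, assuming a normal distribution with
# mean 100 and standard deviation 15.
from scipy.stats import norm

MEAN, SD = 100, 15

def iq_percentile(iq: float) -> float:
    """Fraction of the population expected to score below `iq`."""
    return norm.cdf(iq, loc=MEAN, scale=SD)

for iq in (70, 100, 130, 145):
    below = iq_percentile(iq)
    print(f"IQ {iq}: {below:.1%} score below, {1 - below:.2%} score above")

# IQ 70  -> about 2.3% below (often rounded to the 'bottom 2.5%')
# IQ 145 -> about 0.13% above (the 'top 0.1%' in round numbers)
```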
What did these tests look like?
The IQ tests measured a grab bag of concepts that researchers felt all children should know, from labeling objects that were commonplace in the early twentieth century to judging which of two crudely drawn faces was prettier. Remember, they were initially trying to identify ‘subnormal’ performance, as Binet and Simon put it. As the test spread beyond France, the culture-bound nature of these questions became apparent. Different societies have different common objects and speak different languages. Some didn’t teach math, reading, or writing at all. So attempts were made to remove the cultural element.
Raven’s Progressive Matrices, developed by John Raven in 1936, is still often considered the most culture-free IQ test. The test has no words or numbers, only patterns to be solved. But Raven’s test, too, relies on artificial, two-dimensional shapes and patterns – squares and triangles, for example – that are rare in nature and that children in our society spend a lot of time learning to identify. It wasn’t long before IQ tests were used on adults.
The American Psychological Association began testing soldiers during the First World War. Southern and eastern European migrants scored lower than northern Europeans, and these scores were taken at face value. This led to immigration restrictions in the spirit of negative eugenics. Were these IQ tests a fair measure of intelligence?
IQ is not the same as intelligence. IQ tests are to intelligence what inches and centimeters are to length. An object has a true length, but it can be quantified in different ways and on different scales. Unlike length, however, intelligence has no clear, unambiguous, accepted definition. Indeed, the case for a more general intelligence – referred to as g – rather than a collection of specific talents is statistical. What do I mean by this?
Say you’re trying to measure overall fitness; let’s call this f. You may care about overall fitness because of how it translates to athletic and sports performance, just as we often care about general intelligence because of how it translates to academic and work performance. But each sport is different, just as each school subject and each job is different. Nonetheless, there may be an overall fitness underlying these different specific athletic abilities. How would you know if this were true?
You might start by giving people a variety of fitness tests to measure different fundamental aspects of fitness: tests of endurance, strength, speed, flexibility, body composition, and so on. You might try the beep shuttle run test, one-rep max, 100-metre sprint time, heart rate after some set exercise, how many push-ups they can do, how far past their toes they can reach, and so on. If there were an overall fitness, you would expect these scores to correlate – people who score high on flexibility might also be faster. Since you’re correlating multiple scores, you perform what’s called a factor analysis, which looks for an underlying, or latent, factor that captures the overall correlation between all the scores as best it can – in this case, your hypothesized overall fitness f. The factor analysis also tells you how well each of your tests measures that latent f factor, if it exists. In turn, you can see how well f predicts performance in different sports.
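To make this concrete, here is a minimal sketch of a one-factor analysis. The data are simulated – the people, tests, and loadings are made up for illustration – but the procedure is the same kind described above:

```python
# A minimal sketch (simulated data, for illustration only): extracting a
# single latent 'overall fitness' factor f from a battery of test scores.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 500 people whose six test scores share one latent factor f,
# plus test-specific noise (this is the model factor analysis assumes).
n_people, n_tests = 500, 6
f = rng.normal(size=(n_people, 1))                   # latent fitness
true_loadings = rng.uniform(0.5, 0.9, size=(1, n_tests))
scores = f @ true_loadings + 0.5 * rng.normal(size=(n_people, n_tests))

# Fit a one-factor model: components_ are the estimated loadings,
# i.e. how strongly each test taps the latent factor.
fa = FactorAnalysis(n_components=1).fit(scores)
print("estimated loadings:", fa.components_.round(2))

# transform() gives each person's estimated factor score -- their 'f'.
# (The sign of a latent factor is arbitrary, so compare absolute values.)
f_hat = fa.transform(scores)
print("correlation with true f:",
      abs(np.corrcoef(f_hat[:, 0], f[:, 0])[0, 1]).round(2))
```

The estimated loadings tell you how strongly each test taps the latent factor – exactly the role that loadings on g play for IQ subtests.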
This is exactly what was done to discover the hypothesized general intelligence g. Various tests of skills perceived to be related to fundamental intelligence – problem-solving, general knowledge, verbal and language ability, quantitative skills, visual–spatial processing, and working memory – correlate with one another and share an underlying factor. Different tests and subtests can then be evaluated by how well they correlate with this underlying factor, g. And it is possible to measure how reliably these IQ tests capture g, and how heritable it is. As it turns out, g is both reliably measured and reasonably heritable.
General intelligence, g, is almost taken for granted today among intelligence researchers. Some research focuses on explaining why these tests share an underlying factor. Could it be some undiscovered feature of our brains that differs between people (something like neural speed or efficiency)? Or perhaps there’s a ‘positive manifold’, such that being good at one thing creates synergies that improve performance in other domains? For example, strong reading and other verbal skills may help you learn better and therefore improve your problem-solving and quantitative skills. Over time, these different skills will correlate and reinforce one another.
The idea of general intelligence rests not on theory or causal experiments but on the correlation between different tests deemed to measure cognitive ability. And debate continues over the degree to which this highly valued trait is nature versus nurture, as Galton put it. But, armed with our ‘theory of everyone’, we can cut through Galton’s debate, answer the eugenic question, and offer a more compelling and comprehensive theoretical and empirical understanding of intelligence.
Let’s start with ten facts about IQ…
© 2023 Salgado Muthukrishna Consulting Ltd.