And then a little lower on the page is a multiple-choice test that offers five four-digit numbers as potential correct answers for each word.
All you have to do is find the right number from the key above and then check that box (1C, 2A, 3C, etc.). It’s a snap, if a somewhat mind-numbing one.
Segal located two large pools of data that included scores from thousands of young people on both the coding-speed test and a standard cognitive-skills test. One pool was the National Longitudinal Survey of Youth, or NLSY, a huge survey that began tracking a cohort of more than 12,000 young people in 1979. The other was a group of military recruits who took the coding exam as part of a range of tests they had to pass in order to be accepted into the U.S. Armed Forces. The high-school and college students who were part of the NLSY had no real incentive to exert themselves on the tests—the scores were for research purposes only and didn’t have any bearing on their academic records. For the recruits, though, the tests mattered very much; bad scores could keep them out of the military.
When Segal compared the scores of the two groups on each test, she found that on average, the high-school and college kids did better than the recruits on the cognitive tests. But on the coding-speed test, it was the recruits who did better. Now, that might have been because the kind of young person who chose to enlist in the armed forces was naturally gifted at matching numbers with words, but that didn’t seem too likely. What the coding-speed test really measured, Segal realized, was something more fundamental than clerical skill: the test takers’ inclination and ability to force themselves to care about the world’s most boring test. The recruits, who had more at stake, put more effort into the coding test than the NLSY kids did, and on such a simple test, that extra level of exertion was enough for them to beat out their more-educated peers.
Now, remember that the NLSY wasn’t just a one-shot test; it tracked young people’s progress afterward for many years. So next Segal went back to the NLSY data, looked at each student’s cognitive-skills score and coding-speed score in 1979, and then compared those two scores with the student’s earnings two decades later, when the student was about 40. Predictably, the kids who did better on the cognitive-skills tests were making more money. But so were the kids who did better on the super-simple coding test. In fact, when Segal looked only at NLSY participants who didn’t graduate from college, their coding-test scores were every bit as reliable a predictor of their adult wages as their cognitive-test scores. The high scorers on the coding test were earning thousands of dollars a year more than the low scorers.
And why? Does the modern American labor market really put such a high value on being able to compare mindless lists of words and numbers? Of course not. And in fact, Segal didn’t believe that the students who did better on the coding test actually had better coding skills than the other students. They did better for a simple reason: They tried harder. And what the labor market does value is the kind of internal motivation required to try hard on a test even when there is no external reward for doing well. Without anyone realizing it, the coding test was measuring a critical noncognitive skill that mattered a lot in the grown-up world.
Segal’s findings give us a new way of thinking about the so-called low-IQ kids who took part in the M&M experiment in south Florida. Remember, they scored poorly on the first IQ test and then did much better on the second test, the one with the M&M incentive. So the question was: What was the real IQ of an average “low-IQ” student? Was it 79 or 97? Well, you could certainly make the case that his or her true IQ must be 97. You’re supposed to try hard on IQ tests, and when the low-IQ kids had the M&M’s to motivate them, they tried hard. It’s not as if the M&M’s magically gave them the intelligence to figure out the answers; they must have already possessed it. So in fact, they weren’t low-IQ at all. Their IQs were about average.
But what Segal’s research suggests is that it was actually their first score, the 79, that was more relevant to their future prospects. That was their equivalent of the coding-test score, the low-stakes, low-reward test that predicts how well someone is going to do in life. They may not have been low in IQ, but they were low in whatever quality it is that makes a person try hard on an IQ test without any obvious incentive. And what Segal’s research shows is that that is a very valuable quality to possess.