Fancy Concussion Tests Won’t Protect Our Student Athletes

Jan. 20 2012 12:32 PM

Check Your Head

Does testing athletes for concussion with fancy software do any good?


A more common issue is post-concussion syndrome—in which the symptoms of a concussion linger for weeks or even months following a head injury. (Most concussions resolve within a few days.) In theory, returning to action too soon after a head injury can increase an athlete's chances of developing a persistent post-concussion syndrome, although right now, that's still just a hypothesis. But even here, tests like ImPACT aren't superior to the trained judgment of an athletic trainer or doctor, who can check for those lingering symptoms and make sure the athlete stays on the sidelines until they’re gone.

Much of ImPACT’s allure lies in its ability to turn symptoms into a score (something SCAT2 can do for free), and to detect more subtle cognitive impairments—slight memory loss, for example—that might elude detection by a trainer or physician. But it’s not enough for a test to produce numbers. You have to know what to do with the data the test churns out, and that’s where things get confusing.

The test produces scores for five different areas: motor processing speed, reaction time, visual memory, impulse control, and verbal memory. But it's hard to know exactly what these scores mean. Does a drop in the score for reaction time mean that an athlete's brain is impaired, or just that she hasn't yet had her coffee? What if an athlete improved on one measure but regressed on another?


The adolescent brain is still developing, and while differences in scores from one test to the next could represent post-concussion syndrome, they might also reflect the myriad other factors that can affect a young person’s cognitive abilities from day to day—everything from sleep to attitude to learning. Or maybe some overeager athletes are intentionally flubbing their baseline tests, so they can beat their scores and stay on the field even with minor symptoms. It may sound far-fetched that kids would try to game the test, but that's one of Lovell's main selling points: He says his computer program can catch athletes who try to hide their symptoms—the ones who pretend their headaches have gone away so they can get into Friday night's game. As for the kids who try to botch their baseline, Lovell says he's come up with a “validity scale” to catch them. “It basically looks for deviant performances. It’s our secret sauce.”

All that aside, for the numbers to be trustworthy, the ImPACT test would need to produce the same scores for a given kid each time he or she takes it in an unimpaired state. That doesn't always happen. In one independent study, 118 healthy student volunteers took a baseline ImPACT test and then returned to retake it twice more, 45 and 50 days later. In the follow-ups, more than one-third of the concussion-free participants showed up as false positives: Their scores dropped enough from one test to the next that they would have been flagged as impaired, even though they had no concussion. Lovell points to the fact that this study was published in the second-tier Journal of Athletic Training, rather than a more respected neurology journal. But while it's true that other studies have found slightly better correlations from test to test, critics say there's still so much variability between the baseline and follow-ups that it's virtually impossible to use them to calculate an overall probability of impairment. Two more studies, published last month, examined the usefulness of ImPACT and concluded that it has very little practical value. The reliability of the test is "unacceptably low," one warned, adding that "the empirical evidence does not support the use of ImPACT testing for determining the time of postconcussion return to play."

So why have hundreds of high schools, colleges, and professional sports teams adopted ImPACT testing? In a word: money. Every time a player gets seriously hurt, it creates an outcry that something must be done. “The response is, let’s throw some money at something, and ImPACT is there to say, we’ll take your money,” says Robert Sallis, the sports physician. ImPACT sells some more tests, and those in charge can show that they're doing something and protect themselves from potential lawsuits. It’s not just ImPACT getting a cut. Neuropsychologists and other experts who pony up a yearly $1,500 subscription fee are christened “certified ImPACT consultants” and promised “access to ImPACT’s sports concussion business practice tool,” “extra public relations assistance with your local media,” and “excellent referral opportunities.” Neuropsychologists can charge $500 or more to interpret the test, turning a mild concussion into a $500 or $1,000 bill. And while the schools that sign up for ImPACT through the “public service” program get free testing for the first year, they’re on the hook after that.

This is more a clever marketing ploy than sound medical practice. Coaches don't need a computerized test to keep concussed athletes off the field until they're symptom-free; they need a sporting culture that takes concussions seriously and makes it OK to sit out a game because you're hurt. If we really care about the dangers of concussions, we should be trying to prevent them in the first place, and that's something ImPACT testing never addresses. Which may explain some of its appeal: It gives the illusion of doing something about concussions without the bother of changing the game.

Christie Aschwanden is an award-winning writer and contributing editor for Runner's World. She blogs about science at Last Word on Nothing.
