Medical Examiner

Brain Scans for Sale

As brain imaging spreads to nonmedical uses, will commerce overtake ethics?

The brain-imaging technology developed over the past three decades—first positron emission tomography, or PET, and more recently the faster, simpler functional magnetic resonance imaging known as fMRI—has given neuroscience a tool of unprecedented power. By tracing blood flow associated with neuronal activity, scanning methods enable researchers to see how different regions of the brain activate as a person thinks or acts. A subject, lying in a scanner, completes mental tasks or responds to various stimuli—solving a simple word puzzle, say, or a more complex task like characterizing facial expressions. As the subject works, the scanner tracks changes in blood flow to create images showing distinctive patterns of neuronal activation. The result is a visual representation of the “neural correlates” of various mental states.

At first this technology served primarily to refine a basic map of the brain’s main functional areas—showing, for instance, that certain regions, mostly in the left hemisphere, process and generate language or that the amygdala, an almond-sized area near the brain’s center, acts as a sort of hub connecting sensory perception, emotion, and memory. Researchers also discovered patterns characteristic of difficult-to-diagnose afflictions ranging from autism to schizophrenia. But perhaps the most intriguing progress, most of which has come in the past five years, has been researchers’ increasing ability to identify patterns distinctive to many of our more complex mental processes. Scan studies have tracked the maturation of decision-making regions during adolescence; clarified how we store, retrieve, and lose memories; and identified the neural correlates of fear, distraction, and affection, as well as of various character traits, including extroversion, empathy, and persistence. They’ve even seen patterns of alarm when volunteers viewed faces of people of another race—a sort of neural correlate of racism. Researchers find new correlations every month.

Neuroscientists stress that cognitive neuroscience is still young, its tools too rough and its knowledge too patchy to reliably predict behavior or assess personality. Even fMRI, the finest-grained tool, cannot capture events at the minute scale and lightning speed of the neuron. And while a certain activation pattern may be common to most murderers, for example, too many diseases and characteristics remain unexplored to know that the same pattern couldn’t also show up in a Grand Theft Auto fanatic.

Despite these caveats, some entrepreneurs and researchers are carrying brain imaging into new, nonmedical territory that could be ethically treacherous. Some of these uses, such as lie detection, are already upon us; others, such as the use of brain scans to screen job applicants, seem almost certain to be explored or developed. Close behind the neuroentrepreneurs are neuroethicists at places like the University of Pennsylvania’s Center for Cognitive Neuroscience and the Stanford Center for Biomedical Ethics, who are trying to identify and resolve the ethical concerns raised by these applications: Are scanning technologies really appropriate for nonmedical uses? If personal information is collected by a nonmedical commercial interest, how can we ensure its confidentiality?

Perhaps the best-known and possibly least threatening nonmedical use of scanning is the emerging “neuromarketing” industry. At least one well-funded firm, Brighthouse Neurostrategies Group, is trying to learn how to better market everything from licorice to liquor by scanning volunteers as they view ads or other media, watching how different advertising approaches activate different brain areas. This strikes many as offensive: Do we need yet more insidious ways to stir consumer lust? Yet neuromarketing, while perhaps in poor taste, seems harmless next to other possibilities.

More problematic is the use of brain testing for high-tech lie detection. Neuroscientist Larry Farwell’s Brain Fingerprinting Laboratories is the most prominent such outfit. Farwell contracts with public and private investigators to conduct a brain-wave analysis called multifaceted electroencephalographic response analysis, or MERA, that he claims can tell whether a suspect is familiar with evidence—a crime scene, a face, a piece of furniture or clothing—that would be known only to the perpetrator of a particular crime. The suspect views a series of images on a computer screen while wearing a cap fitted with EEG sensors; the sensors pick up a distinctive burst of neuronal activity when the suspect sees something familiar. Most neuroscientists consider the method itself sound. It’s the application that gets messy—who uses it, whether proper controls are established, whether the images shown could truly be known only to whoever committed the crime. In high-profile cases like those Farwell has worked on, such as the successful effort to free wrongly convicted murderer Terry Harrington, such issues get close scrutiny. But if brain fingerprinting becomes common, shoddy or dishonest technique could produce false convictions.

The most complex and fraught prospect neuroethicists are weighing is the use of brain imaging to screen job and school applicants. So far this remains more a hypothetical notion than a budding industry; no company or school has announced plans to scan applicants. Yet many ethicists feel the temptation will prove overwhelming. How to resist a screen that can gauge precisely the sorts of traits—persistence, extroversion, the ability to focus or multitask—that make good employees or students?

The legality of such use is unclear. The relevant federal laws, the Americans with Disabilities Act and the Health Insurance Portability and Accountability Act (which governs the privacy of medical information), allow pre-employment medical tests only if they assess abilities relevant to a particular job. An employer couldn’t legally scan for depression or incipient Alzheimer’s. Yet an employer might legally use a brain scan to test for traits relevant to a particular job—risk tolerance for a stock-trading position, for instance, or extroversion for a sales job. An additional attraction of brain scanning is that a tester can evaluate these and other traits while an applicant performs nonthreatening, apparently unrelated tasks—like matching labels to pictures. An unscrupulous employer could fashion such tests to covertly probe subjects that would be off-limits in an interview, such as susceptibility to depression or cultural, sexual, and political preferences.

Finally, widespread brain testing poses the risk that the results could be filed away in databases and marketed to prospective employers, lenders, health and life insurers, or security officials, much as credit information is now. Present law would forbid this if the scans were considered medical information. But if they were ruled nonmedical—or if consent were obtained, as it often is now for releasing certain medical information to insurers or employers—some sharing might be allowed.

How likely are these things, really? Your answer will depend largely on your faith in how well the legal system will protect privacy and how well any emerging neuroinformation industry will heed ethical guidelines. Nonmedical brain imaging currently falls under no regulatory agency’s purview. And the response of both industry and government will depend partly on public awareness and pressure. To the extent it pays attention, the public today seems to view neuroscience as a curiosity. But should a new brain-testing industry start to seem heedless or brash—lacking that adultlike prefrontal control, as it were—we may want to start setting limits.