Neuroscientists: Mercenaries in the Courtroom
Using brain imaging to select jurors and more could have disastrous results.
This article arises from Future Tense, a partnership of Slate, the New America Foundation, and Arizona State University that explores emerging technologies and their implications for public policy and for society. On Monday, Oct. 22, Future Tense will host “My Brain Made Me Do It,” an event in Washington, D.C., on how the legal system will adapt to changes in neuroscience. For more information and to RSVP, or to watch the live stream, visit the New America Foundation’s website.
Neuroscientists aren’t usually thought of as advocates for special interests. They’re a generally objective bunch, dedicated to their discipline and concerned above all with making solid contributions to understanding how our brains work. Advances in understanding and treating Alzheimer’s, Parkinson’s, multiple sclerosis, and a host of other diseases and conditions are largely attributable to the commitment of neuroscientists focused on solving some of the most difficult problems in medicine.
But, over the past decade, as neuroscience—and brain imaging in particular—has become a star science attraction, the role of the impartial neuroscientist has been redefined. When the forces of marketing realized that neuroscience could assist in predicting consumer behavior, neuroscientists became a hot commodity as “consultants” to some of the biggest brands on the planet. Soon “neuromarketing” was born, and firms armed with fMRI machines started becoming mainstays at consumer focus groups for Fortune 500 companies.
A similar story is playing out in the legal arena—but the stakes are much higher. When neuroscientists are recruited to weigh in on critical issues like lie detection and the alleged mental state of a defendant, people’s lives, and not just their wallets, are directly affected. But much of this technology is too new to be reliable. Furthermore, neuroscience experts aren’t just being used on the stand—they are also being paid to help select, even sway, juries, and that poses an entirely new ethical dilemma.
Neuroscience can potentially be used in court to facilitate lie detection, explore the mindset of a criminal before and after a crime, and assess the veracity of eyewitness testimony. The 2010 case U.S. v. Semrau was the first time a federal court ruled on whether fMRI-based lie detection could be considered by a jury during a criminal trial. While the court determined that the detection method wasn't suitable for the case (a ruling recently upheld by the 6th U.S. Circuit Court of Appeals), the attempt signaled something important: fMRI evidence will continue to be introduced in more cases for the same purpose. It's possible that fMRI will follow a path similar to that of DNA testing, which was met with criticism in its early days (even the O.J. Simpson trial in 1995 was affected by scrutiny of DNA's validity) but eventually became a scientifically accepted courtroom technique.
A critical difference between DNA testing and fMRI is that while fMRI is a very promising technology, it is still just that: promising. DNA's validity rests on a mountain of evidence with which fMRI, as of yet, can't compare. For example, different parts of the brain light up depending on whether a person is telling the truth or lying, but neuroscientists themselves can't agree on why that happens, which makes the prospect of using the technology in court all the more alarming. Furthermore, fMRI can produce brain scans with significantly different neural activity patterns under the exact same testing conditions, making study replication difficult. If replication poses a problem for neuroscientists even under well-controlled laboratory conditions, is the technology truly ready for rendering verdicts of truth or innocence in court?
Now, neuroscientists are being used to help guide jury selection, and even to influence jurors in the courtroom. This is a substantially different role from providing expert testimony, in which the expert's input becomes part of an amalgam of influences. Consultants in "neurolaw" are brought in to educate attorneys on how jurors make decisions, the cognitive biases they're subject to, and how their opinions are formed. Seminars with titles like "The Art and Science of Jury Selection: What Neuroscience Can Tell Us About How Jurors Think and Decide" are increasingly common in the legal world. But this advice could go even more high-tech.
A 2008 study titled “The Neural Correlates of Third-Party Punishment,” published in the journal Neuron, approached the juror-influence question from a different angle. Researchers scanned participants with fMRI while they determined the appropriate punishment for crimes running the gamut from mild to severe, with varying levels of defendant responsibility. The researchers found that activity in the amygdala and areas of the medial cortex predicted the magnitude of punishment, while brain activity in the right side of the prefrontal cortex correlated with determining whether or not a criminal was directly or indirectly responsible for the crime.
That means screening jurors with fMRI—when that method eventually becomes practical—could instruct attorneys on both sides of a case whether a given juror would be more or less likely to dole out punishment for a crime and whether she’d judge a defendant as “responsible” for his or her actions. This introduces an element of “preknowledge” about jurors’ thinking that would make it easier for attorneys to tailor a jury to suit their needs. The question is, do we want to give prosecutors and defenders that level of access to jurors’ thought processes, and would doing so unfairly influence the outcome of a trial?
Basically, using fMRI in the courtroom is ethically troubling for at least two reasons: 1) Neuroscientists are not in universal agreement about the correct interpretation of brain imaging results and likely won't reach agreement for some time, so consultants selling those interpretations are potentially fleecing their clients; and 2) in the future, carried to its limit, the use of fMRI in court might negate the entire idea of the jury trial, or at least our current jury system, since it would provide the means to manipulate outcomes by carefully tailoring juries. At the very least, in a case where both sides have equally effective neuroscientists, the jury could be purposely deadlocked.
We are obviously a distance away from installing fMRI machines in jury selection rooms, but down the road, as the technology becomes more reliable, cost efficient, and portable, it’s not far-fetched. It’s crucial that we start asking the hard questions now about the future role of neuroscience in the courtroom, and not allow hype about the promise of brain imaging to overshadow the limitations of the technology. If hype, rather than facts, assumes control, neuroscience and its tools may be pushed into a pivotal role in court well before its time, and well before we’ve taken the time to consider the consequences.
David DiSalvo writes at the intersection of science, technology and culture for Forbes, Scientific American Mind, Psychology Today, and other publications. He is also author of the book What Makes Your Brain Happy and Why You Should Do the Opposite.