Aug. 10 2006 3:46 PM

Jury-Rigging

Can a computer pick a better jury than a high-priced consultant?

Illustration by Robert Neubecker.

Picking the jury that ended up acquitting Andrea Yates on retrial started off easily. Given that his delusional client had drowned her five children in a bathtub, attorney Wendell Odom used some of his 13 peremptory challenges to strike all potential jurors who seemed skeptical that mental illness could ever excuse criminal conduct. Next came tougher calls: Odom liked a well-educated teacher—until she revealed her brother had been murdered. He struck her, too.

With only a few challenges left, Odom reached the jurors he calls "the medium ones." Not the sort to read Mother Jones or Soldier of Fortune, these folks work mid-level jobs at mid-sized companies and are married with two kids. "You don't know what the hell to say about that," he says. "There was one where we would have had to just flip a coin." Better than a quarter was the software, called JuryQuest, running on a laptop computer in the courtroom: it advised Odom to use his last strike on a woman so unassuming he can't now remember the slightest detail about her.


The software knew nothing about the final panel of jurors that Odom didn't, but it liked his chances. On a scale of one to 100, it rated seven of them as significantly biased in favor of the defendant before the trial even began. That wasn't a bad guess: When the Harris County, Texas, jury began its deliberations, eight of its members were on Yates' side. Two days later, the majority persuaded their colleagues that Yates could not have known right from wrong.

That a computer program might become a vital tool in predicting juror bias is perhaps less surprising than the fact that it isn't already: The necessary mathematical formulas and attitude-scaling techniques were developed in the first half of the 20th century. To rate a jury pool, the program needs only seven pieces of information: age, sex, race, education, occupation, marital status, and prior jury service. Once those are entered, the program uses factor analysis to match the categories against a 4 million-item database built from survey questionnaires designed to identify authoritarian (prosecution-friendly) versus egalitarian (defense-sympathetic) bias.
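Since JuryQuest itself is proprietary, the mechanics are easiest to see in a toy sketch. The Python below shows the general shape, not the real thing: the seven inputs, fixed loadings a real system would derive offline from survey data, and a squash onto the 1-to-100 scale. Every weight, category, and even the direction of the scale here is an invented placeholder.

```python
from dataclasses import dataclass
from math import tanh

@dataclass
class Juror:
    age: int
    sex: str                 # "M" or "F"
    race: str
    education: str           # e.g., "high school", "college", "graduate"
    occupation: str
    marital_status: str      # e.g., "single", "married"
    prior_jury_service: bool

# Invented placeholder loadings; a real system would derive these offline,
# via factor analysis, from millions of survey responses.
EDUCATION_LOADING = {"high school": 0.20, "college": -0.10, "graduate": -0.30}
OCCUPATION_LOADING = {"law enforcement": 0.60, "teacher": -0.20, "manager": 0.10}

def bias_score(j: Juror) -> float:
    """Map the seven demographic inputs to a 1-100 score. Here, higher
    means more egalitarian (defense-leaning) and lower more authoritarian
    (prosecution-leaning); the direction is an assumption."""
    raw = 0.0
    raw += EDUCATION_LOADING.get(j.education, 0.0)
    raw += OCCUPATION_LOADING.get(j.occupation.lower(), 0.0)
    raw += 0.10 if j.marital_status == "single" else -0.05
    raw += -0.15 if j.prior_jury_service else 0.0
    raw += (35 - j.age) * 0.005   # placeholder: youth skews egalitarian
    # Squash the raw factor sum onto the article's 1-to-100 scale.
    return round(50.5 + 49.5 * tanh(raw), 1)

# One of Odom's "medium ones": mid-level job at a mid-sized company, married.
print(bias_score(Juror(42, "F", "white", "college", "manager", "married", False)))
```

The point of the sketch is the shape of the pipeline, not the numbers: the leverage is entirely in the survey-derived loadings, which is where the 4 million-item database comes in.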

JuryQuest's chief drawback may well be its effect on an attorney's pride. While nominally intended only as a means of dismissing those individuals too prejudiced to serve, jury selection is, in practice, a testing ground for a trial lawyer's talent. Deft questioning will lead some jurors into betraying overt sympathies, but with jury pools sometimes in excess of 100, intuition and ego get their due. When it comes to deciding whether to strike minorities or guys with beards from the jury pool, attorneys just don't want to believe a computer could do it better.

There's precedent for such doubt, though not in intuition's favor. In fields from day trading to dating, computerized factor analysis and other statistical techniques have steadily eroded the supremacy of the savant. Even baseball has taken a hit: In 2003's Moneyball, Michael Lewis documented how Oakland A's general manager Billy Beane used a statistician with a laptop to sideline scouts who had spent a century polishing reputations for guru-grade wisdom. Stolen bases and RBIs—some of the game's most cherished stats—were overrated, as were appearances: Statistically sound players had been snubbed for no greater flaw than a pear-shaped physique.

JuryQuest's own early record suggests similar promise. After what co-founder Norm Revis says was "less than millions" of dollars in research, JuryQuest released its software in 2005. Of the roughly 100 criminal trials in which the software has been used, defense attorneys have won around 50 percent of them, Revis says; that's far better than the national average. And even in cases where JuryQuest-approved panels convict, they tend to buck the DA's sentencing recommendation in favor of lighter sentences. The software can be licensed for as little as a few hundred dollars a trial, and there's reason to believe it will keep getting better: Empirical results from JuryQuest trials are promptly fed back into its database, gradually supplanting and improving its original data.
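That feedback loop is the part most amenable to a sketch. Assuming a simple append-only log of outcomes (the file name, field names, and recalibration rule below are hypothetical illustrations, not JuryQuest's actual design), each verdict becomes a labeled data point that future calibrations can learn from:

```python
import json
from pathlib import Path

# Hypothetical outcome log kept alongside the original survey data.
DB = Path("trial_outcomes.jsonl")

def record_trial(panel_scores: list[float], defense_won: bool) -> None:
    """Append one trial's panel scores and its outcome for later re-fitting."""
    with DB.open("a") as f:
        f.write(json.dumps({"scores": panel_scores, "won": defense_won}) + "\n")

def refit_cutoff(history: list[dict]) -> float:
    """Toy recalibration: put the 'favorable juror' cutoff midway between the
    average panel score of winning defenses and that of losing ones."""
    wins = [sum(h["scores"]) / len(h["scores"]) for h in history if h["won"]]
    losses = [sum(h["scores"]) / len(h["scores"]) for h in history if not h["won"]]
    if not wins or not losses:
        return 50.0   # no signal yet; fall back to the scale midpoint
    return (sum(wins) / len(wins) + sum(losses) / len(losses)) / 2
```

A production system would presumably re-run the full factor analysis in batch rather than nudge a single cutoff, but the principle is the same: every verdict becomes a labeled observation that dilutes the original survey data.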

Scientific jury selection was first tested in the early 1970s by sociologists enlisted in the defense of the "Harrisburg Seven," anti-war activists on trial for an alleged conspiracy to destroy selective service records and kidnap Henry Kissinger. The defense based its jury selection on locally collected survey data, methodically striking the Harrisburg citizens least likely to sympathize with dissidents. The resulting panel was far from the hanging jury expected from the conservative Pennsylvania city. Jurors convicted the activists of only one minor offense, deadlocking on more serious charges.

In the decades since, scientific jury selection has migrated from its radical roots into the polished, and costly, field of trial consulting. While Court TV and John Grisham celebrate jury consulting for its dark clairvoyance, its leading practitioners toil diligently—working up survey-based demographic statistics, staging mock trials, and empaneling focus groups. But their inquiries usually rest on case-specific assumptions: whether a particular client would be best served by emotive jurors, science-friendly jurors, or jurors sympathetic to women.

And that kind of tailoring doesn't come cheap. Even no-frills advice can run five figures, a threshold that makes expert consulting the province of high-stakes tort litigation and, more recently, celebrity mega-trials. The prosecution in the Scott Peterson case had a top-notch jury consultant. Then again, so did the defense.
