Jurisprudence

Jury-Rigging

Can a computer pick a better jury than a high-priced consultant?

Picking the jury that ended up acquitting Andrea Yates on retrial started off easily. Given that his delusional client had drowned her five children in a bathtub, attorney Wendell Odom used some of his 13 peremptory challenges to strike all potential jurors who seemed skeptical that mental illness could ever excuse criminal conduct. Next came tougher calls: Odom liked a well-educated teacher—until she revealed her brother had been murdered. He struck her, too.

With only a few challenges left, Odom reached the jurors he calls “the medium ones.” Not the sort to read Mother Jones or Soldier of Fortune, these folks work mid-level jobs at mid-sized companies and are married with two kids. “You don’t know what the hell to say about that,” he says. “There was one where we would have had to just flip a coin.” Better than a quarter was the software, called JuryQuest, running on a laptop computer in the courtroom. It advised Odom to use his last strike on a woman so unassuming he can’t now remember the slightest detail about her.

The software knew nothing about the final panel of jurors that Odom didn’t, but it liked his chances. On a scale of one to 100, it rated seven of them as significantly biased in favor of the defendant before the trial even began. That wasn’t a bad guess: When the Harris County, Texas, jury began its deliberations, eight of its members were on Yates’ side. Two days later, the majority persuaded their colleagues that Yates could not have known right from wrong.

That a computer program might become a vital tool in predicting juror bias is perhaps less surprising than the fact that it isn’t already: The necessary mathematical formulas and attitude-scaling techniques were developed in the first half of the 20th century. To rate a jury pool, the program needs only seven pieces of information: age, sex, race, education, occupation, marital status, and prior jury service. Once those details are entered, the program uses factor analysis to match the categories against a 4-million-item database built from survey questionnaires designed to identify authoritarian (prosecution-friendly) versus egalitarian (defense-sympathetic) bias.
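JuryQuest’s actual model is proprietary, but the arithmetic described above, a handful of demographic categories weighted and collapsed onto a single bias scale, is easy to sketch. In the toy version below, every weight is invented for illustration; a real system would estimate its loadings from a survey database using factor analysis rather than hard-coding them.

```python
# Toy sketch of a single-factor bias score. This is NOT JuryQuest's model;
# the seven inputs mirror those named in the article, but the category
# weights ("loadings") are invented. A real system would estimate them
# from survey data via factor analysis.

# Hypothetical loadings on an authoritarian-vs.-egalitarian factor.
# Positive values push toward the prosecution-friendly end of the scale.
LOADINGS = {
    "age":            {"18-34": -0.6, "35-54": 0.0, "55+": 0.7},
    "sex":            {"male": 0.2, "female": -0.2},
    "race":           {"white": 0.3, "black": -0.5, "hispanic": -0.2, "asian": 0.0},
    "education":      {"high school": 0.4, "college": -0.1, "graduate": -0.4},
    "occupation":     {"law enforcement": 0.9, "teacher": -0.3, "engineer": 0.1,
                       "manager": 0.2, "other": 0.0},
    "marital_status": {"married": 0.2, "single": -0.1, "divorced": 0.0},
    "prior_jury":     {"yes": 0.3, "no": 0.0},
}

def bias_score(juror: dict) -> float:
    """Map a juror's seven traits onto a 1-to-100 scale.

    Higher numbers lean authoritarian (prosecution-friendly);
    lower numbers lean egalitarian (defense-sympathetic).
    """
    raw = sum(LOADINGS[trait].get(value, 0.0) for trait, value in juror.items())
    # Squash the raw factor score into the 1-100 range the article describes.
    lo, hi = -2.5, 2.5   # assumed plausible range of raw scores
    clipped = max(lo, min(hi, raw))
    return round(1 + 99 * (clipped - lo) / (hi - lo), 1)

if __name__ == "__main__":
    juror = {
        "age": "35-54", "sex": "female", "race": "white",
        "education": "college", "occupation": "manager",
        "marital_status": "married", "prior_jury": "no",
    }
    print(bias_score(juror))   # one of the "medium ones" lands near mid-scale
```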

JuryQuest’s chief drawback may well be its effect on an attorney’s pride. While nominally intended only as a means of dismissing those individuals too prejudiced to serve, jury selection is, in practice, a testing ground for a trial lawyer’s talent. Deft questioning will lead some jurors into betraying overt sympathies, but with jury pools sometimes in excess of 100, intuition and ego get their due. When it comes to deciding whether to strike minorities or guys with beards from the jury pool, attorneys just don’t want to believe a computer could do it better.

There’s precedent for such doubt, though not in intuition’s favor. In fields from day trading to dating, computerized factor analysis and other statistical techniques have steadily eroded the supremacy of the savant. Even baseball has taken a hit: In 2003’s Moneyball, Michael Lewis documented how Oakland A’s general manager Billy Beane used a statistician with a laptop to sideline scouts who had spent a century polishing reputations for guru-grade wisdom. Stolen bases and RBIs—some of the game’s most cherished stats—were overrated, as were appearances: Statistically sound players had been snubbed for no greater flaw than a pear-shaped physique.

JuryQuest’s own early record suggests similar promise. After what co-founder Norm Revis says was “less than millions” of dollars in research, JuryQuest released its software in 2005. Out of the roughly 100 criminal trials in which the software has been used, defense attorneys have won around 50 percent, Revis says; that’s far greater than the national average. And even in cases where JuryQuest-approved panels convict, they tend to buck the DA’s sentencing recommendation in favor of lighter sentences. The software can be licensed for as little as a few hundred dollars a trial, and there’s reason to believe it’s going to keep getting better: Empirical results from JuryQuest trials are promptly fed back into its database, gradually supplanting and improving its original data.
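Revis doesn’t say how that feedback works, only that it happens. A minimal sketch of the loop, using a hypothetical results file and field names of my own choosing, might look like this:

```python
# Sketch of the trial-results feedback loop described above. The file name
# and fields are invented; the article says only that outcomes are fed back
# into the database, not how.
import csv
import pathlib

DB = pathlib.Path("juror_outcomes.csv")   # hypothetical results store

def record_trial(jurors: list[dict], verdict: str) -> None:
    """Append each empaneled juror's traits plus the trial's outcome."""
    new_file = not DB.exists()
    fields = ["age", "sex", "race", "education", "occupation",
              "marital_status", "prior_jury", "verdict"]
    with DB.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        if new_file:
            writer.writeheader()
        for juror in jurors:
            writer.writerow({**juror, "verdict": verdict})
    # In a real system, the scoring weights would periodically be
    # re-estimated from this growing table, diluting the original survey data.
```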

Scientific jury selection was first tested in the early 1970s by sociologists enlisted in the defense of the “Harrisburg Seven,” anti-war activists on trial for an alleged conspiracy to destroy selective service records and kidnap Henry Kissinger. The defense based its jury selection on locally collected survey data, methodically striking the Harrisburg citizens least likely to sympathize with dissidents. The resulting panel was far from the hanging jury expected from the conservative Pennsylvania city. Jurors convicted the activists of only one minor offense, deadlocking on more serious charges.

In the decades since, scientific jury selection has migrated from its radical roots into the polished, and costly, field of trial consulting. While Court TV and John Grisham celebrate jury consulting for its dark clairvoyance, its leading practitioners toil diligently—working up survey-based demographic statistics, staging mock trials, and empaneling focus groups. But their inquiries are usually built on case-specific assumptions—assumptions about whether a particular client would be best served by emotive jurors, science-friendly jurors, or jurors sympathetic to women.

And that kind of tailoring doesn’t come cheap. Even no-frills advice can run five figures, a threshold that makes expert consulting the province of high-stakes tort litigation and, more recently, celebrity mega-trials. The prosecution in the Scott Peterson case had a top-notch jury consultant. Then again, so did the defense.

What JuryQuest has in common with its human competitors is clients hamstrung by a shortage of meaningful information about the strangers rounded up for jury duty. Where the program differs is in its approach. Instead of using subtle behavioral clues to plumb for concealed opinions, JuryQuest seeks meaning in superficial traits alone. People are either sympathetic to someone accused of a crime or they’re not, JuryQuest posits. That’s it.

The program’s results often approximate those of intuitive reasoning. Asked to judge an Asian engineer in his late 20s, a professional jury consultant might cite recent sociology work documenting generational attitude differences in Asian-Americans before concluding that he isn’t a major threat to the defense. JuryQuest doesn’t tell you that; it just spits out a number right in the middle of its bias spectrum. According to either method, he’s not the guy to blow a challenge on.
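Whichever way the number is reached, the decision rule it implies is the same: spend strikes on the extremes, not the middle. A toy version, layered on the 1-to-100 score sketched earlier, with cutoffs invented purely for illustration:

```python
# Toy decision rule on top of the 1-100 score from the earlier sketch.
# The cutoffs are invented; the article says only that mid-range jurors
# aren't worth a peremptory challenge.
def strike_advice(score: float, defending: bool = True) -> str:
    if defending and score >= 70:        # strongly prosecution-leaning
        return "consider striking"
    if not defending and score <= 30:    # strongly defense-leaning
        return "consider striking"
    return "don't waste a challenge"

print(strike_advice(52.0))   # a mid-spectrum score: "don't waste a challenge"
```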

That juror bias could be crammed onto a single axis strikes some observers as less elegant than simplistic. Victor Gold, a professor at Loyola Law School who specializes in jury selection, says a program like JuryQuest would likely be of only limited value. “It would be as if someone came up with a system where you run a form through a database, and we come up with your ideal spouse,” Gold says. “To have a lot of faith in it is foolish. Juries are far more complex than any computer program can address.”

Gold’s got a point. And in fact, attorneys who use JuryQuest do seem most comfortable using it the same way singles use the suggestions of an Internet dating site. By aligning significant but underappreciated qualities in their subscribers, the leading dating sites analyze potential romantic pairings in order to provide suggestions, confirmations, or the rationale for cold feet. No attorney would let JuryQuest impanel a jury on its own, just as Match.com’s clients wouldn’t blindly accept the site’s suggestions for whom to elope with.

But whether the Web site’s suggestions might result in a closer look and possibly even drinks with an otherwise middling candidate is another matter, and one with some parallels to picking juries. Relying on probability in either love or law might not be ideal, but it’s a pragmatic place to start.