
Lie Detectors, Russian Spies, and an Expert in Kung Fu

How the incoming national security adviser got tangled up in the Trump administration’s weirdest scandal yet.

Can brain waves be used to detect lies? What even is a lie?


The weirdest scandal of the Trump transition—the one involving brain electrodes, Russian spies, Hillary Clinton’s email server, and an expert in kung fu—probably should have been a bigger deal. But I’m sorry to say that Bloomberg reporters David Kocieniewski and Peter Robison’s gift to journalism, published on the morning of Dec. 23, barely registered before it disappeared into the tinsel.

Their delightful scoop, headlined “Trump Aide Partnered With Firm Run by Man With Alleged KGB Ties,” describes a business link between Donald Trump’s incoming national security adviser, Michael Flynn, and a shady biotech startup called Brainwave Science. In February, Brainwave, which sells a “helmet-like headpiece fitted with sensors” as a sort of lie detector for law enforcement and counterterrorism efforts, brought on Flynn as an adviser. At that point, a biotech entrepreneur named Subu Kota was serving on Brainwave’s board of directors. As Kocieniewski and Robison point out, Kota (whose name has since been scrubbed from the company website) happens to have been indicted for trying to sell hundreds of thousands of dollars’ worth of stolen micro-organisms, as well as classified information on missile-defense systems and stealth bombers, to KGB agents during the Cold War. (He signed a plea agreement admitting to the sale of the biotech material in 1996.) Flynn, who has been criticized for his own coziness with Russian officials, allegedly promised to help Kota’s company sell its sensor helmet to U.S. agencies.

This would seem to be the perfect story for the Age of Trump, encapsulating as it does a sad mélange of foreign intelligence, questionable business deals, and suspect science. But a closer look at the Brainwave scandal suggests an even deeper resonance with our present, post-factual predicament. It’s a story, after all, that layers lies on top of lies about lies: whether lies can be detected in a person’s brain waves; whether people have been telling lies about that method of detecting lies; whether other people have been telling lies about the telling of those lies; and, finally, inevitably—insanely—whether it means anything to “lie” at all, since according to the neuroscientist at the center of this mess, each one of us has the mental power to bend reality to our will.

The Bloomberg story focuses on Subu Kota, but “Braingate” really starts with the neuroscientist, Larry Farwell of Seattle. Farwell comes from a family of professors and ship captains: His grandfather Raymond was an expert in naval transportation and commerce who wrote a classic book on how to avoid maritime collisions; his father, George, was a physicist who studied under Enrico Fermi and worked on the Manhattan Project; his sister Jacqueline is a pediatric neurologist. Like George, Jacqueline, and Farwell’s uncle Raymond Jr. (another noted seaman), Farwell got his bachelor’s degree at Harvard University. For 10 years after graduation he invested in real estate and studied transcendental meditation, among other avocational pursuits. (He’s also been a semiprofessional swing-dance performer and a broadsword-wielding black belt in kung fu with a penchant for the flying kick.) Finally, in 1984, Farwell went back to school for a Ph.D. in neuroscience in the lab of the brain-electrode pioneer Emanuel Donchin.

Farwell produced extraordinary work while a student in the Donchin lab. In 1988, four years before completing his graduate degree, he and Donchin devised one of the first brain-computer interfaces for converting thought directly into speech. Their system worked through electroencephalography, or EEG—the measurement of broad oscillations in the brain’s electrical activity by electrodes placed atop the scalp. Donchin had expertise in a particular EEG brain-wave pattern called the “P300,” which corresponds to a brief change in voltage that shows up on neural traces about half a second after people are presented with a meaningful or surprising stimulus. (The name refers to a positive deflection in voltage that can appear as soon as 300 milliseconds after the triggering sound or image.)
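To make the mechanics a little more concrete, here is a rough sketch, in Python, of how a P300 is usually teased out of noisy EEG. It is not Farwell’s or Donchin’s code; the sampling rate, the time windows, and the simulated numbers are all assumptions chosen for illustration. The trick is averaging: record many short stretches of EEG time-locked to the stimulus, let the random background activity cancel itself out, and look for a positive bump a few hundred milliseconds in.

```python
import numpy as np

FS = 250            # assumed sampling rate, in samples per second
EPOCH_SEC = 0.8     # examine 800 ms of EEG after each stimulus

def average_epochs(eeg, stimulus_samples):
    """Average EEG segments time-locked to stimulus onsets.

    eeg: 1-D array of voltages from one scalp electrode.
    stimulus_samples: sample indices at which stimuli were presented.
    """
    length = int(EPOCH_SEC * FS)
    epochs = [eeg[s:s + length] for s in stimulus_samples
              if s + length <= len(eeg)]
    return np.mean(epochs, axis=0)   # random noise averages toward zero

def p300_amplitude(erp):
    """Largest positive deflection 250-500 ms after the stimulus."""
    start, stop = int(0.25 * FS), int(0.5 * FS)
    return float(np.max(erp[start:stop]))

# Toy demonstration with simulated data: inject a fake P300 after each
# "meaningful" stimulus and recover it by averaging across trials.
rng = np.random.default_rng(0)
eeg = rng.normal(0, 5, 60 * FS)                  # one minute of background noise
onsets = np.arange(FS, 55 * FS, 2 * FS)          # a stimulus every two seconds
t = np.arange(int(EPOCH_SEC * FS)) / FS
bump = 8 * np.exp(-0.5 * ((t - 0.35) / 0.05) ** 2)
for s in onsets:
    eeg[s:s + len(bump)] += bump                 # simulated P300 response
erp = average_epochs(eeg, onsets)
print(f"P300-window amplitude: {p300_amplitude(erp):.1f} (arbitrary units)")
```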

Working on his own time, he says, Farwell figured out a way to use the P300 to help people communicate just by concentrating on a grid of letters on a screen. He’d ask his subjects to focus on a single letter as he recorded from their scalps. Each row and column of the grid would be flashed in turn; whenever the target letter was highlighted, the subjects’ brain waves would display the P300 voltage bump. By repeating this process, Farwell found that he could identify a string of letters inside his subjects’ heads, and then eventually a two-word phrase. “We report here that the P300 can serve as a pencil, and that the pencil is actually rather sharp,” Farwell and Donchin wrote when they published the results. Then they added an important caveat: “The mind, however, retains control over the use of the pencil.”
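For the curious, here is a toy version of that spelling step, again in Python and again only a sketch: the grid layout, the scores, and the numbers below are invented, and the real 1988 system relied on far more careful signal processing. The underlying logic, though, is just this: flash each row and column, average the P300 amplitude evoked by each, and take the letter where the strongest row meets the strongest column.

```python
import numpy as np

# A 6-by-6 letter grid; the exact layout here is an assumption, not the original.
GRID = [list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
        list("STUVWX"), list("YZ1234"), list("56789_")]

def spell_one_letter(row_scores, col_scores):
    """Pick the letter at the intersection of the row and column whose
    flashes evoked the largest average P300 amplitude.

    row_scores / col_scores: one list of measured amplitudes per row/column,
    accumulated over repeated flashes.
    """
    best_row = int(np.argmax([np.mean(s) for s in row_scores]))
    best_col = int(np.argmax([np.mean(s) for s in col_scores]))
    return GRID[best_row][best_col]

# Toy usage: the subject silently attends to "P" (row 2, column 3).
rng = np.random.default_rng(1)
rows = [rng.normal(1, 1, 15) for _ in range(6)]
cols = [rng.normal(1, 1, 15) for _ in range(6)]
rows[2] = rng.normal(6, 1, 15)   # the attended row shows a bigger P300
cols[3] = rng.normal(6, 1, 15)   # so does the attended column
print(spell_one_letter(rows, cols))   # should print "P"
```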

Farwell had shown that brain electrodes could be used to read a person’s mind—but only when the message had been spelled out on purpose. Could the P300 signal reveal information that a person meant to hide from view? Farwell had another project in development that aimed to do exactly this. He based his work on a classic form of lie detection called the “Guilty Knowledge Test” (also known as the “Concealed Information Test”), in which suspects are asked a series of multiple-choice questions related to a crime, for example: “Was the getaway car a red Ford, a yellow Toyota, a gray Chevy, or a white Plymouth?” The interrogator checks the suspect’s physiological responses to each potential answer. According to the theory of the test, a guilty person—and only a guilty person—would know the true answer to the question, and he might give himself away by responding to that answer in a subtle or unconscious way. His heart rate might begin to quicken, or his palms would start to sweat. Though such measures are widely used for lie-detector tests conducted by the FBI, the Drug Enforcement Administration, and other government agencies, they are not very reliable.

In 1986, Farwell and Donchin announced that they’d adapted the Guilty Knowledge Test for use with brain electrodes. In their new version of the test, the correct answer to a question—e.g. the make and color of a getaway car—would serve as the meaningful stimulus that induces an automatic P300 response, at least for those hiding intimate knowledge of a crime. For everyone else, the same cue wouldn’t be meaningful at all, so there would be no P300. With funding from the CIA, Farwell and Donchin pursued this idea for several years, publishing their first, somewhat meager results in 1991. Lots more research on their lie-detector test would be necessary, they said, but the approach clearly held some promise. Brain electrodes could one day be used “in the aid of interrogations.”
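Hedged the same way, here is a bare-bones sketch of how such a test might score a subject. It is not Farwell’s protocol (the published literature uses more elaborate bootstrap statistics); it is just a simple Python comparison of the average response to the crime-relevant “probe” against the responses to the irrelevant alternatives, with a threshold and simulated numbers that are assumptions.

```python
import numpy as np

def concealed_information_test(probe_amps, irrelevant_amps, threshold=2.0):
    """Crude decision rule for a P300 concealed-information test.

    probe_amps: P300 amplitudes on trials showing the crime-relevant detail
                (say, the gray Chevy).
    irrelevant_amps: amplitudes on trials showing the plausible alternatives.
    Declares "information present" if the average probe response stands out
    from the irrelevant responses by more than `threshold` standard errors.
    """
    probe = np.asarray(probe_amps, dtype=float)
    irrel = np.asarray(irrelevant_amps, dtype=float)
    standard_error = irrel.std(ddof=1) / np.sqrt(len(probe))
    z = (probe.mean() - irrel.mean()) / standard_error
    return "information present" if z > threshold else "information absent"

# Toy usage: a "guilty" subject's brain responds to the true getaway car,
# an "innocent" subject's brain does not.
rng = np.random.default_rng(2)
irrelevant = rng.normal(3, 2, 90)
guilty_probe = rng.normal(9, 2, 30)      # large P300 to the real detail
innocent_probe = rng.normal(3, 2, 30)    # nothing special
print(concealed_information_test(guilty_probe, irrelevant))    # likely "present"
print(concealed_information_test(innocent_probe, irrelevant))  # likely "absent"
```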

By the time that paper had been published, Farwell was already on the payroll as a full-time research consultant for the CIA. (The agency would provide him with about $1 million in research funding between 1991 and 1993.) Eventually he started up a company, the Human Brain Research Laboratory Inc., based around his guilty knowledge test and the notion that a suspect’s presence at a crime scene would leave an indelible trace, like a set of fingerprints, smeared across the circuits of his cortex. If an investigator could figure out the right questions to ask during a brain-recording session, she could, in effect, dust the suspect’s brain for evidence. Any hidden knowledge of the crime would reveal itself as a telltale P300.

Farwell signed research contracts not just with the CIA but also with the FBI and the U.S. Navy. Soon he would claim to have discovered a different brain wave pattern—a more elaborated version of the P300 that lasted a full second or even longer—which he patented as the “memory and encoding related multifaceted electroencephalographic response,” or MERMER. By analyzing this entire stretch of data, Farwell claimed, he managed to achieve an astonishing 100 percent accuracy in his lie-detector tests: No false positives, no false negatives, and no indeterminate results. In a 2012 paper summarizing his work, Farwell cited evidence from 10 field studies of his method, comprising more than 130 subjects. When it’s used correctly, he said, the system pretty much always works. (Farwell estimates a real-world error rate for brain fingerprinting at something “less than 1 percent,” noting that in science, nothing is ever really 100 percent.)

Human memory may be imperfect and limited, Farwell conceded in that paper, but to the extent that any reliable information might be hidden there, brain fingerprinting will find it. “Witnesses may lie,” he said, but “the brain never lies.” His method could even help stop crimes that haven’t yet been committed: A member of ISIS, posing as an innocent migrant from a war-torn country such as Syria, could be smoked out through careful application of the P300-MERMER test, Farwell wrote in 2015. “Terrorists know who they are,” he said. “They know what terrorist training they have. They know what specific terrorism-related skills, such as firearms and bomb making, they possess. … All of this information is stored in their brains.”

For a while Farwell’s theories were warmly welcomed by the press—Time once placed him among the “Picassos or Einsteins of the 21st century,” and CNN and 60 Minutes both invited him for interviews. But more rigorous appraisals of his work have long found cause for skepticism. A federal study from 2001 reported that officials at the CIA, FBI, Secret Service, and Department of Defense were not interested in using Farwell’s brain fingerprints. According to that study, the CIA in particular had abandoned the method in 1993 after Farwell refused to reveal aspects of the science to an expert panel that had been convened to assess its technical merit. (“That’s simply not true,” Farwell said in an interview earlier this month. “I provided my algorithms, all of the math, and the source code.”)

Meanwhile, other EEG researchers (including Emanuel Donchin, who helped create the test) suggested that the P300 lie-detection method was not as reliable as Farwell was suggesting, and certainly not appropriate for real-world applications. Among other potential problems, the P300 signal seems to have more to do with a subject’s belief than actual fact—even false memories are known to yield a positive result. Then there’s the fact that the test is only as good as the questions that are used to probe for guilty knowledge. How would the investigator know exactly which specific details a guilty person is likely to remember, especially when the test may be given weeks, months, or even years after a crime has been committed? Others argued that the brain-fingerprinting test, like the classic polygraph test, could be beaten by a savvy subject, or undermined by a subject who wasn’t that observant in the first place.

Donchin laid out his concerns in a nasty and personal rebuttal to Farwell’s 2012 paper, written with several co-authors and published in the same academic journal. There he accused his former student of using “grandiloquent language” to distort and misrepresent the record on brain fingerprinting. Farwell’s patented P300-MERMER technique had never been described in a peer-reviewed publication, the rebuttal argued, so there was no way of knowing if it really added any value to the P300. Also, of the 13 studies that Farwell cited in support of his claim of 100 percent accuracy, just three had been written up in peer-reviewed journals, and these comprised just 30 participants in all. In short, said Donchin and his co-authors, Farwell’s review “violates some of the cherished canons of science and … he should feel obligated to retract the article.”

Farwell responded with his own display of scholarly indignation, accusing Donchin and the other authors of having distorted facts in their rebuttal and insisting that data could still be useful even if it wasn’t in a peer-reviewed journal. (Farwell notes that some of the data in question have since been published. The original studies were classified, he says, which led to some delays.) In any case, the latter point is true, no doubt; the published record carries just a small and biased sample of all scientific research, and it’s often very useful to consider work that hasn’t made its way to print.

But Farwell has been making some fishy-sounding claims. First and foremost, that his lie-detector test has near-perfect accuracy—a finding that is out of whack with other research in the field. A recent meta-analysis of P300 lie-detection research finds the test has an accuracy of about 88 percent. That isn’t bad at all, but it’s also not much better than what you’d get from more conventional lie-detector tests that measure people’s sweaty palms and blood pressure. (In fact, a version of the Guilty Knowledge Test based on standard polygraph measures is widely used by police in Japan.) Farwell answers that this meta-analysis surveyed all versions of the P300 test, not just his. When the test is run the way he does it—according to his list of 20 standards for the field—he claims the error rate does indeed drop close to zero.

Farwell has drawn some rather more adventurous conclusions from other work that never made its way through peer review. In the 1990s, as he wound down his research contracts with the CIA and FBI, Farwell took a break to spend a year working in an inpatient mental institution—“just to round out my experience,” he told me, “and get more hands-on experience of people who were really very seriously deranged.” He also said that he used leftover money from the government contracts to support himself as he pursued a private line of research into the nature of reality.

In 1999, Farwell wrote up the results of these experiments in a scientific treatise on quantum theory and the power of the mind, called How Consciousness Commands Matter: The New Scientific Revolution and the Evidence That Anything Is Possible. The book begins with a description of the brain-computer interface that he’d invented with Donchin in the 1980s—the device that enabled people to control a keyboard with their brains, by sending signals through a set of EEG electrodes. But Farwell had begun to wonder whether the electrodes might be extraneous. What if we could control the world, starting at the quantum level, just by thinking hard enough?

So he set out to test what he called the Conscious Unified Field Hypothesis, according to which the human mind can affect reality in tangible, seemingly impossible ways. His father, the nuclear physicist, helped him set up his main experiment: Farwell put a sample of plutonium inside a particle detector, and then he sat beside it. “My task was to command matter through consciousness,” he wrote, “to bring order into the otherwise random process of quantum particle emission, using nothing but the influence of consciousness alone.”

The book describes what happened next: Farwell sat there for a while in total silence, trying to affect the particles with his mind. A set of bar graphs fluctuated on a monitor, showing the time intervals between each release of alpha particles from the plutonium. If he could affect those intervals with his mind—that is to say, if he could exert his will over the timing of radioactive decay—then he’d have proved his theory. Sure enough, the intervals began to shift, he told me. Farwell’s mind had changed the intervals enough that he felt able to conclude—with “99.98 percent confidence,” no less—that “consciousness can and does command matter at the quantum-mechanical level.”

In Farwell’s words, he’d proved that “what has been taken to be the whole of reality in recent millennia is merely a tiny portion of reality.” That means we can all be pioneers in the exploration of higher states of consciousness, he said: “You can create the life you want. … The resources at your command are truly infinite.”

Despite this revelation, and the infinite resources that were now at his command, Farwell never found broad acceptance for his lie-detecting technology. (Nor has he found much support for his theory about the conscious control of matter.) Interest in the P300 method did resurge after 9/11, and Farwell reorganized his company to sell brain fingerprinting as a service. He says he’s made a living off that work, though he won’t discuss specific clients. (“Being in the field I’m in, there are things I can’t talk about,” he said.) Still, the business has not been as successful as he’d hoped—a fact he blames on the conservatism of the scientific establishment. On his personal website, he compares the discovery of brain fingerprinting to the invention of the airplane, claiming that it can take decades for people to grasp the significance of such a major innovation. “Those whose status or finances depend on the old ways of doing things” will always oppose scientific progress, he says, and brain fingerprinting is no exception to this rule. Still, “science always moves forward, and not backward,” he adds, “and the truth always wins in the end.”

It must have seemed providential, then, when Farwell heard from Krishna Ika in 2012. A noted swami in India, who happened to be a mutual friend, had tipped off Ika to Farwell’s work on P300s. Ika got in touch to propose a partnership: He would improve and try to automate the lie-detection technology—by simplifying the user interface, for example, and making the sensor helmet wireless—so that he and Farwell could market brain fingerprinting more effectively to an international clientele. Farwell agreed, and signed on as the “director and chief scientist” for a new company, Brainwave Science. According to Ika, Farwell signed over the patents for his technology in exchange for a 45 percent stake in Brainwave and a $10,000-per-month consulting fee. Ika also freshened up Farwell’s own marketing material with a heavy helping of B-school gobbledygook, noting, for example, that brain fingerprinting could help a client to “maximize intelligence collection disciplines across various security verticals” and “leverage forensic capabilities to unprecedented levels.”

Subu Kota, the espionage-linked businessman, joined Brainwave as a board member in 2013. In August 2014, Ika announced Brainwave’s official worldwide launch, claiming to have sold Farwell’s technology to police in Singapore and to a police department in Florida. In February 2016, Brainwave added Michael Flynn—who had been fired from his post as head of the Defense Intelligence Agency around the time of the company launch—to its advisory board. Two months after that, a friend of Flynn’s named Brian McCauley, who had just retired from the FBI, joined the board as well.

McCauley’s presence on the board would soon provide evidence for the interconnectedness of all things, or at least the interconnectedness of all scandalous shenanigans in Washington. In mid-October, the Washington Post reported on McCauley’s link to Hillary Clinton’s private email server and to documents related to the attack in Benghazi. In 2015, while still at the FBI, McCauley had proposed trading favors with the State Department, whereby the bureau would agree not to classify a Benghazi-related message from Clinton’s server. (He says that he quickly rescinded the offer when he learned the contents of the email.) Both McCauley’s and Flynn’s names have lately disappeared from the Brainwave website. Ika says they had to sever ties because both had taken jobs in the Trump administration.

Farwell, for his part, now asserts that he was duped by Brainwave. Ika lied, he said. He’d told Farwell that Brainwave would sell his brain-fingerprinting technology around the world, but then the company started offering customers something else—“a counterfeit technology that does not meet the peer-reviewed, published Brain Fingerprinting Scientific Standards.” Brainwave’s lie-detector wasn’t just a fraudulent knockoff of his product, Farwell says, but one that Ika “never succeeded in selling … to anyone.” In September, he emailed Flynn, still a Brainwave adviser, to warn the lieutenant general that the company’s fake lie detectors might pose a danger to national security. He first tried to leave the company in 2014, he adds, but wasn’t able to “extricate [himself] completely” until last summer. Despite these efforts to cut ties, the Brainwave website still includes a list of Farwell’s publications as well as his press clips and bold claims of the P300-MERMER’s “nearly infallible degree of accuracy.”

According to Ika, that story has it backward: Farwell is the one who lied. Ika says that most of Farwell’s patents had already expired when their deal was signed—and that Farwell hid this fact from him. In October 2013, Farwell reassigned the (mostly expired) patents from Brainwave Science back to their original owner, a company called American Scientific Innovations, run by one of his high school classmates from Seattle. (The patents have since been offloaded to another company affiliated with Farwell.) Ika claims that Farwell did not have the authority to make this transfer and that he falsely presented himself to the U.S. Patent Office as a “managing member of the company” so as to steal Brainwave’s intellectual property. After discovering the reassignment in July, Ika says, he called the FBI and terminated the consulting contract with Farwell.

Ika also stands behind his claim of having signed brain-fingerprinting contracts with police in Singapore and Florida—though it turns out that the latter deal, at least, began and ended with a free-trial period. No money was exchanged, and the technology was never put to use.

“I’ve told you the truth about Mr. Ika, and I take no pleasure in telling you those things,” Farwell told me in response to these claims and counterclaims of fraud. Ika’s version is rife with misinformation, in his telling. The original deal from 2012 was never signed, he says, so the original transfer of the patents was itself a fraudulent attempt to pilfer his intellectual property. Also: He and his business received 49 percent of Brainwave Science, not 45 percent as Ika claimed; and his consulting contract had been for $11,000 per month, as opposed to $10,000.

By this point what I’d understood to be the truth now seemed to be, as Farwell might say, “a tiny portion of reality.” I had no idea exactly who was lying and to what extent. Did Brainwave really sell its product to police in Singapore? Does Brainwave’s lie detector really work as advertised? Does Farwell’s? Who owns those patents, and why should that matter if they’re all expired anyway? How involved was Michael Flynn? Did Subu Kota sell secrets to the KGB? And while we’re at it, how much did Hillary Clinton know about Benghazi?

It would be nice if we could just slap EEG arrays on everybody’s heads and find the truth in the dips of their P300s. In fact, that’s just what Farwell proposed to me: “We can resolve all this,” he said, “if everyone would agree to a brain-fingerprinting test.” Someday, perhaps. Whatever one thinks of Farwell’s science, recent academic research on the P300 lie-detection method has been somewhat promising. A version of the test developed at Northwestern University, for example, has now delivered greater than 90 percent accuracy in lab experiments. In October, an independent group of researchers in Hungary successfully replicated that work. We may yet find a way to use our brain waves to navigate through squishy facts.

Until that dream is realized, though, we’re all stuck here on this lowly plane of consciousness—where it often feels like anyone can make his own reality, and even find great success, living by the facts inside his head.