Exactly how all of this works is not yet fully clear. But the process appears to make neurons in the stimulated area more malleable, so that new connections form more readily while under the influence of the current. It remains to be seen whether those changes are short-lived or enduring, but at least one study has found positive effects persisting for up to six months. The beauty of it, in theory, is that the electric current doesn’t rewire the brain on its own—it just makes it easier for the brain to rewire itself.
At this point, it all probably sounds far too good to be true. So what’s the catch?
The catch is that we don't know what the catch is. And to Peter Reiner, a neuroscientist at the University of British Columbia, that's a biggie. If tDCS can so quickly change the brain in ways that we can easily measure, he says, there’s a good chance it could also change the brain in ways we can’t easily measure—or that researchers so far haven’t tried to measure. Scientists often assume they can target the effects of tDCS by stimulating only the part of the brain relevant to the task the subject is concentrating on. But most would admit there’s some guesswork involved, since brain topography can vary from one person to the next. And Reiner warns that there’s no guarantee the subject’s mind won’t wander, say, to “something horrific that occurred earlier today.” What if tDCS ends up forging traumatic connections along with useful ones?
Less dramatically, it seems plausible that researchers are overlooking subtle drawbacks of tDCS. One of the first papers to identify downsides to the procedure, titled “The Mental Cost of Cognitive Enhancement,” was published last month in the Journal of Neuroscience. Subjects who had their parietal lobes stimulated during a numerical-processing task performed better than those who received fake stimulation. But a week later, they struggled to apply the newly learned techniques to a different task. “They had trouble accessing what they’d learned,” study co-author Roi Cohen Kadosh of Oxford told Wired. Subjects who had a different region of their brain stimulated during the task showed the opposite effect, performing slowly at first but better at week’s end.
As for the amateur applications, you don't need to be an Oxford professor to deduce that building your own tDCS kit and sticking electrodes onto your head willy-nilly might have some adverse consequences. All you need are some YouTube videos, like the one in which a teen describes his first forays into electrical brain stimulation. “I’ve been experimenting on where, which places on my head would improve memory—more specifically, visual memory,” the young DIY-er says in the video. “So I’ve been thinking, OK, does the left dorsal, uh, prefrontal cortex—which I thought would be around, about, left side, right here (points to head)—and that would be where the cathode goes, and the anode would go right up here. Well I put it on, and after about five minutes, I felt really angry and depressed. So … I guess that wasn’t a good idea.”
An observant YouTube commenter pointed out that the young man had reversed the anode and the cathode, a mistake akin to putting the wrong jumper cables on your car battery. “Flip it around and try again,” the commenter suggested.
The fact that tDCS may pose unknown risks, that its benefits and drawbacks are not yet fully understood, that it can be dangerous in the wrong hands—none of these arguments should keep scientists from carefully exploring its potential. Having spent the better part of two months immersed in the vertiginous world of human enhancement, I’ve become convinced that societal and academic taboos against the use of technology to give healthy people extraordinary powers are, on the whole, counterproductive. College students are already popping Adderall in droves. Body hackers are implanting microchips in their bodies. Entrepreneurs are hawking tDCS kits for $99 online. Some athlete, somewhere, is probably experimenting with gene doping. The riskiness of some of these behaviors makes it tempting to simply outlaw them all and expect everyone to comply. But that’s as unrealistic as it is blinkered.
This isn’t a call to legalize everything and let God or Darwin sort ’em out. It’s a plea to lawmakers, the media, academics, and those who fund academic research to take seriously the growing availability of and demand for human-enhancement technologies. Only by acknowledging and researching their potential benefits as well as their risks can we hope to craft mature policies that promote public safety and welfare. If that means continuing to classify Adderall as a Schedule II controlled substance until we’re even more convinced that it doesn’t pose long-term health risks, so be it. But here’s where we’re going astray: One university professor who studies ADHD drugs told me he has learned that every public-health research paper “has to have a certain (cautionary) tone to it” in order to be accepted for publication. “I know what I have to write, and it has to be, basically, ‘Drugs are bad.’ ”
Maybe he’s wrong. But I’ve talked with enough academics over the past two months who flat-out refused even to discuss the potential use of various medical technologies for human enhancement—or to have their names attached to an article that discusses them—to suspect that there’s some legitimacy to his paranoia. Too many people seem to think that humans are fine the way we are, and that the only proper use of these technologies is to restore “normal” human functions to people with disabilities.
Why is that short-sighted? As Duke philosophy professor and bioethicist Allen Buchanan told Ross Andersen in the Atlantic: