Medical Examiner

What’s the Difference Between Life and Death?

We’re fretting too much about the distinction.

A 1728 oil painting by Cornelis Troost depicting an anatomy lesson using a cadaver.

Amsterdam Museum/Wikimedia Commons.

Two hundred years ago, a Scottish medical student named Robert Christison watched a human vivisection.

It was inadvertent; the subject was meant to be dead. But in the days before people willingly left their bodies to science, surgeons stole them. The aftermath of judicial hangings was a competition between “the relatives and the [surgical] students—the former to carry off the body intact, the latter to dissect it,” Christison wrote in his autobiography. “Thus dissection was apt to be performed with indecent, sometimes with dangerous haste. It was no uncommon occurrence that, when the operator proceeded with his work, the body was sensibly warm, the limbs not yet rigid.” Hangings were sometimes ineffective, and the condemned survived. No wonder then that occasionally, in their rush, surgeons got it wrong and opened up a body to demonstrate its anatomy only to discover it was not yet a corpse.

Even if you’re in less of a rush, simple observation has always been worryingly fallible when it comes to distinguishing life from death. When I was a junior doctor, I recall the hairs on the back of my neck slowly rising as I walked toward a patient’s room. His family had just stopped me at the end of their visit, saying, “I think we’ll come back tomorrow; we’ve been sitting with him for an hour and he’s seemed awfully quiet.” He would forever remain that way. I found I often made the reverse mistake: Walking into the room of an elderly patient, it could take some time to recognize their stillness as that of sleep.

Preceding generations adopted technological aids to help them. Holding a mirror over a face to see if it misted up could be genuinely useful. The stethoscope—invented by a French doctor, René Laennec, who was embarrassed at having to put his ear to his patient’s bosom—meant that respiration and heart sounds could be listened for more accurately. All this helped, but it didn’t fully solve the problem.

The precise division between life and death has always been unclear. In the 18th century, the chemistry of living (organic) and nonliving (inorganic) things was held to be fundamentally different. Into the former, God placed a spark of life—meaning that biochemical processes were absolutely different from the chemical reactions that could be created by mankind or the natural world. That belief was shown false in the 1820s, when a German chemist, Friedrich Wöhler, synthesized an organic compound, urea, from inorganic materials. But even today it lingers on: The vague way in which organic is used as a euphemism for healthy and good is its relic. Throughout the 19th century, the exact spark of life remained an object of great interest, and also of great doubt.

Discussions of the soul tended to lead nowhere, since that word meant so many different things to different people. It was hard to prove when the soul left the body because it was something whose nature and identity no one could agree on. Hence a favorite distinction between the living and the dead rested directly on the word of God. Leviticus 17:11 and 17:14 were clear: Blood was the stuff of life. William Harvey, who discovered how blood circulated, wrote that it was “the first to live and the last to die.” Blood was life. So long as it was liquid, life remained.

Hence Christison’s alarm as he watched the surgeon cut into the warm body. “Fluid blood gushed in abundance from the first incisions through the skin … Instantly I seized [the surgeon’s] wrist in great alarm, and arrested his progress; nor was I easily persuaded to let him go on, when I saw the blood coagulate on the table exactly like living blood.” Peer pressure overcame his qualms, however, and he not only released the surgeon but remained part of the attentive audience. He was convinced that the man was alive, but he became willing to watch all the same.

John Hunter, the greatest surgeon of the 18th century, also believed that those whose blood was liquid were still alive, yet he had no problem slicing their hearts out—or even, in the interests of science, tasting them. (Wishing to explore human sexual function, he acquired the corpse of a man who died in the moment before ejaculation. When held in the mouth, Hunter reported, the dead man’s semen had a slightly spicy taste.) An appetite for knowledge has never been a guarantee of compassion or of respect for the wishes of the dead.

In the years since Hunter, though, these concerns have genuinely advanced. We’re better at saying where life ends and better at honoring the physical remains and the last wishes of our fellows—which is not to say there isn’t still room for improvement. For many decades, we accepted that people died when their heart stopped beating, that is, when it stopped circulating blood. Why did we hold onto that notion, even long after we understood that electrical activity was the fundamental substrate for our lives? Once more, the limitation was partly technical—a heartbeat is relatively easy to detect—and partly not. The idea that blood was the stuff of life lingered on, aided by the dual meaning of “heart” it helped bequeath to our language and our thoughts. Did the body Christison saw being opened still have a beating heart? Was it, in any real way, alive? It certainly was in Christison’s eyes, but whether it would have been in ours is harder to say.

Once we became confident about the primacy of electrical activity in the brain as the sign of life, we were able to be more positive. The need for donated organs pushed changes in our definition of death, especially because an organ-transplant recipient’s prospects for survival are much better when the organ is taken from a donor with a beating heart. In 1968, the wonderfully named Ad Hoc Committee of Harvard Medical School argued that death should no longer be regarded as occurring when the heart stopped, but when electrical activity ceased in the brain. Once that was gone, so was the person.

Janet Radcliffe Richards has a title almost as delightful as that Harvard committee’s: She’s the Oxford Professor of Practical Philosophy. (Her colleagues, presumably, feel their contrasting impracticality to be so obvious as to not be worth mentioning.) Her new book, The Ethics of Transplants: Why Careless Thought Costs Lives, argues that it’s not only historical societies that have gotten the definition of death wrong. She thinks that’s still happening now. She argues that we’re making two main mistakes. The first is failing to admit that technology changes. The body Christison watched being opened might not have had a beating heart, for example, but with swift resuscitation it could possibly have regained one. Hanging often killed by asphyxiation, in a way we could now reverse once the noose was off. People were undoubtedly once declared dead when we would today recognize them as being moribund but alive and salvageable. Progress will continue. Even if those who have themselves frozen in the hope of future advances are wrong and we’re never able to revive them, it’s likely that some of those whose lives now seem irreversibly closed might, to our descendants, seem merely in need of medical assistance.

The second mistake she thinks we’re making is more surprising and more important. We’re simply worrying too much about this sort of thing, fretting over the wrong risks. There may be a tiny chance of someone being, in some way that matters, alive, but we don’t wait for putrefaction in order to be 100 percent certain, particularly when a loved one has expressed a wish to help others by donating organs. In other areas, we find it easy to accept that risks need to be balanced. We’d all agree that life itself is more important than getting quickly to work, yet few would limit all vehicles to a maximum of five miles per hour to reduce road deaths. Rare disasters become tolerable when the alternative is wasting huge swaths of our lives avoiding them.

Radcliffe Richards’s excellent book delights in challenging thoughtless opinions. It is mainly unexamined prejudice, she argues, that makes us so set against the idea of buying and selling organs. As someone who thought that all organ donations needed to be altruistic, I found it bracing to read her demolish every belief I had on the subject. Practical philosophers, like 18th-century surgeons, seem not to shy away from difficult subjects.

Death, Radcliffe Richards declares, should be regarded as occurring when someone seems to have become irreversibly unconscious. We should be as confident as possible, but we need to accept that we can never be entirely certain. (Currently, the definition of brain death varies from state to state and country to country.) Imagine a woman with a head injury who has recorded her wish to be an organ donor. Her family wants to honor that wish. If she is terminally unconscious, why wait even for the death of her brain? Why use machines to prop up her other organs while a predictable decline takes place? Why not, instead, remove her organs the moment it becomes clear that she is unconscious and has no realistic likelihood of improving?

In an astonishing recent article in Discover magazine, writer Dick Teresi suggested that equating one’s death to the death of one’s brain was a moral and philosophical failure. The story argued for a return to a notion of death that ignored the last 5,000 years of acquired knowledge. “Beating-heart cadavers,” Teresi wrote, “were created as a kind of subspecies designed specifically to keep organs fresh for their future owners.” The article, which will cause deaths if it discourages people from organ donation, insinuates that a definition with uncertain boundaries is a bad one. It ends by listing examples of cases where brain death may not have been properly established. The implication is that those who try to tackle ambiguities and difficulties are sly, suspicious, and probably dishonest. It’s a nasty way of looking at the world, which in reality is unavoidably packed with ambiguities and difficulties.

You either struggle to face up to ambiguities as best you can or you stand dishonorably by. If we reject the notion of brain death and ban beating-heart donors, we can avoid mistakes that will, despite all caution, very occasionally happen. But that’s a cheap and shameful way to approach truth, let alone to take responsibility for the care of those who need organs and those who wish to donate them, and it’s not the way doctors are trained to behave. Medicine, like much else in life, is about thoughtfully making the best choices you can, knowing that none are perfect. We could abolish all medical errors tomorrow—and there are thousands of them—by banning medicine outright. Or we can accept that mistakes are unavoidable, and strive to make fewer of them. We can refuse to be paralyzed by the terror of making any mistakes at all, refuse to torture ourselves with the foul delusion that perfection is possible. Two hundred years ago, the doctors who stole bodies defended their actions on the basis that, horrible as it all appeared and horrible as in some ways it genuinely was, learning anatomy was important: It saved lives. Remember the dead, they said, but don’t forget the living. Radcliffe Richards argues we still need to learn that lesson today. Discover, for all the wrong reasons, proves her point.