Technology

How Our Response to Y2K Reveals What We’ll Do About Global Warming and Swine Flu

In 1993, a tech consultant named Peter de Jager wrote an article for Computerworld with the headline “Doomsday 2000.” When the clock struck midnight on 1/1/00, he wrote, many of our computers would lose track of the date, and very bad things would happen as a result.
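(For readers who want the bug made concrete: many programs of that era stored only the last two digits of the year and assumed the century was 19xx, so a stored “00” read as 1900 rather than 2000. The short Python sketch below is purely illustrative and is not drawn from de Jager’s article; the function name is hypothetical.)

```python
# Illustrative sketch of the two-digit-year bug (not from de Jager's article).
# Many legacy systems stored years as two digits and assumed the century was 19xx.

def legacy_year(two_digit_year: int) -> int:
    """How many older programs interpreted a stored two-digit year."""
    return 1900 + two_digit_year  # '99' -> 1999, but '00' -> 1900

# Date arithmetic spanning the rollover goes badly wrong:
years_elapsed = legacy_year(0) - legacy_year(99)  # expected 1, computed -99
print(years_elapsed)
```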

Looking back, de Jager’s article is remarkable for its pessimism. He interviewed several IT experts who said the tech industry was completely ignoring the computer-date bug. Many didn’t think it was a real problem, and those who did felt no pressure to do anything about it—after all, the year 2000 was a long way away. “I have spoken at association meetings and seminars, and when I ask for a show of hands of people addressing the problem, the response is underwhelming,” de Jager wrote. “If I get one in 10 respondents, I’m facing an enlightened group.”

But then something strange happened: Everyone started worrying about Y2K. Over the next few years, people across the tech industry took up the cause. In 1996, Sen. Daniel Patrick Moynihan asked the Congressional Research Service to investigate the issue, and he became alarmed by the findings. In a letter to President Clinton, Moynihan urged a huge federal response to address what he called the “Year 2000 Time Bomb.” Moynihan clearly expected the worst: “You may wish to turn to the military to take command of dealing with the problem,” he wrote to Clinton.

Bill Clinton’s second term isn’t remembered as a model of comity between the executive and legislative branches. On the issue of Y2K, though, the Republican Congress and the Democratic White House were on the same page: Both pushed for a huge federal task force. The White House appointed a Y2K coordinator, John Koskinen, who headed an effort that spanned every cabinet agency and the military. (Koskinen is now a high-ranking official at Freddie Mac.) Following the government’s lead, just about every business in the country took up the cause of heading off the Y2K crisis.

This is a hopeful story, isn’t it? Anyone who’s ever marveled at the government’s inability to address an obvious, impending threat can find solace in the Y2K narrative. In 1993, even tech people ignored Y2K; just a few years later, it had become an issue at the top of the world’s agenda. How did this happen? And does Y2K provide any lessons for dealing with other long-term national and international crises?

On the face of it, Y2K shares several features with other seemingly intractable problems. It was big, expensive to fix, and its worst effects would only be seen in the future—just like global warming or the health care mess. What’s more, from the very beginning, many wondered whether Y2K was a real problem. Though the tech consensus eventually shifted to the affirmative, there were always people on the fringes of the debate who insisted that the whole thing was overhyped (as in global warming or, more recently, H1N1).

How did the people on Y2K’s front lines overcome these hurdles? They focused on the worst-case scenario. “We’re accelerating toward disaster,” de Jager wrote in 1993. In his 1996 letter to Clinton, Moynihan frets about whether the Social Security Administration and the IRS will be able to keep functioning, worries that banks would need to spend billions to address the problem, and suggests that the nation’s economy may spiral out of control if the bug isn’t fixed by 1999. If the proponents of fixing the problem acknowledged the naysayers, it was usually only to swat them down with variations on an adage that was hard to rebut: An ounce of prevention is worth a pound of cure.

Calling for extraordinary measures to prevent potential disaster is a well-known rhetorical tactic in the environmental movement. The precautionary principle holds that even in the face of scientific uncertainty, society should take action to minimize the harm of threats that cross a certain threshold of danger. Usually, it’s scientists and tree huggers who summon the precautionary principle—Y2K was one of the few times we saw government officials and corporate leaders do so.

In a paper exploring how this came to be, Aidan Davison and John Phillimore, environmental researchers in Australia, argue that there were a few reasons Y2K was a particularly easy sell. First, it was a discrete event—computers had to be fixed by Jan. 1, 2000, and after that, the problem would be over. Political systems across the world usually deal with such one-time calamities better than they handle long-running problems like global warming or health care, the researchers say. Moreover, Y2K affected an industry that is used to rapid change. Computers are always being patched and upgraded. Fixing Y2K would only mean speeding up that natural process, not bringing a whole new level of innovation into an entrenched industry, which is the problem we face in addressing climate change.

Davison says the most important difference between Y2K and global warming is the cultural attitude surrounding each case. Y2K never became a moral issue. “It was always framed as a simple design error,” Davison says—nobody fingered it as the consequence of our reliance on digital technology or argued that the way to get out of this mess was to get rid of computers. The debate over climate change, meanwhile, has always been as much a social and political argument as a scientific one. “Climate change brings into view questions about modern society in general,” the Australian researcher says. It’s not just a question of what fuel we should use to power the planet—there are questions about where we should live, what we should eat, how we should travel. “It’s become a general debate over modernity itself,” Davison says.

Because of this fundamental distinction, Davison doubts that the lessons from Y2K will have much resonance when it comes to global warming. Indeed, in a perverse way, the planet’s success in fighting Y2K might actually hamper anti-global-warming efforts. Just look at how people reacted when nothing much went wrong after Jan. 1, 2000. They concluded that the whole thing had been a ruse.

Logically, this makes no sense—the fact that there were few problems on New Year’s Day could just as easily have meant that the effort to fix Y2K had worked. “But that’s the political problem with the precautionary principle,” Davison says. “If you’re successful in avoiding a problem, you then don’t have the evidence that you’ve been successful.” Say we go through flu season and see relatively few deaths from H1N1. Does that mean that swine flu was overhyped or that the massive vaccine program worked? Each side in the debate will be free to draw its own conclusions—and you can be sure they will.

But that’s not the worst of it. Y2K is now a YouTube punch line. (I must confess, this Leonard Nimoy video is pretty funny.) Search the news for Y2K and you’ll see it come up in articles about the end-of-the-world movie 2012—as a knowing warning against listening to cranks. In other words, success has bred apathy: The fact that nothing terrible happened in Y2K is now an argument for not doing much about global warming or other threats. And perhaps that’s the computer bug’s most lasting legacy. Y2K wasn’t the end of the world. But the fact that we fixed it may make it harder to fix anything else in the future.