It looked like a headline from the Onion, but it was from CNN and the story was real: "Missile Misses Target, Officials Call It a Success." The Pentagon's Missile Defense Agency had conducted a test the afternoon of June 18. A Standard Missile-3, fired from a Navy cruiser 160 miles off the Hawaiian island of Kauai, tried—but failed—to intercept a target missile that had been launched a few minutes earlier from the island's test range. And so it seemed another setback had afflicted President Bush's most cherished military program.
However, the Missile Defense Agency's spokesman, Chris Taylor, saw the test differently. "I wouldn't call it a failure," he told CNN, "because the intercept was not the primary objective. It's still considered a success, in that we gained great engineering data. We just don't know why it didn't hit."
Oh, it's hard to be a satirist these days.
The thing is, Taylor's reasoning is common in the Pentagon, and always has been, for tests not just of the missile-defense program but of all weapons programs.
Officials planning a test usually divide it into several discrete phases. If only one of the phases goes off successfully, and if the others at least yield some interesting data, then the test is marked down as a "success" or, if it was an almost (but not quite) total failure, a "partial success." In the June 18 missile defense test, these phases would have included a) launching the test missile; b) detecting and tracking the target-missile in midflight; c) transmitting information about the target back to control panels on the ship; and d) intercepting the target missile.
The system passed a) through c) with flying colors. Three out of four isn't bad. Call it "success." That's what happened, even though the point of missile defense is to intercept missiles. In fact, the specific aim of this test was to assess a new solid-state engine for the interceptor's guidance system. It now appears that the two missiles didn't collide because the engine malfunctioned. In other words, by any serious measure, broad or narrow, the test was an abject failure, regardless of how the Pentagon grades it.
"This happens all the time," one Pentagon official told me with a sigh. "It's incredible."
Just recently, the Air Force tested a new type of air-to-air missile for its F-22 stealth fighter plane. The missile missed its target by a long shot, but its firing mechanism worked, so the test was counted as a "success."
The problem with this practice is that, when it comes time to decide whether to move ahead on a particular weapons program, an assistant secretary or deputy chief of staff, not having time to study the raw test data, will look at the summary report. The sheet will say, "Eight successes, three partial successes, one failure." That will seem pretty good, and the program will graduate to the next stage of development. At some point, the flaws might get ironed out in the field, but at great cost, not only financial but—if the weapon has to be used on the battlefield in the meantime—strategic and human.
Of course, the Pentagon's standard of success in testing is not entirely ridiculous. In the early stages of a weapon's R & D, especially if the program involves advanced technology, there is real value in learning practically anything about its performance. If one part of the test fails but the other parts work fine, it might legitimately be called a success. However, President Bush plans to start deploying the missile-defense program in the fall of 2004. In order to do so, he formally withdrew from the 30-year-old Anti-Ballistic Missile Treaty. He has requested, and Congress has approved, $9.1 billion for the program next year, and he plans to ask for more than $10 billion the following year. Either the tests should be judged by the standards of an advanced program, or the program should be scaled back to what it really is, despite its advocates' fervent efforts: an interesting but still quite primitive research project.