War Stories

Err War

The Army buries its mistakes.

Back in Soviet times, there was a Russian army general who liked to bellow, “Analysis is for lieutenants and women.” This brute-force approach to military matters didn’t serve the Soviet Union well in the long run. Unfortunately, the same attitude seems to be creeping into the U.S. Army today.

Two pieces of evidence shine all too glaringly: 1) an official, unclassified, and highly critical report on the U.S. Army’s inefficient-to-shoddy intelligence practices in Iraq and Afghanistan, written by the Center for Army Lessons Learned in Ft. Leavenworth, Kan.; and 2) the removal of this report from the center’s Web site, after the Washington Post published a story summarizing its contents.

The report and its suppression make clear that, in pre-war training, combat deployment, and after-action assessments, the Army hierarchy in the field and the political hierarchy in Washington devote woefully scant resources to analysis of what they’re doing—and that they hold the analysts themselves in contempt, sometimes lethally so.

Administration officials and their critics agree that better intelligence is needed to deal with the guerrilla war that’s escalating daily in Iraq. So the release, late last month, of the “lessons-learned” report (which you can still read on the globalsecurity.org Web site) certainly dealt a shock. Some key findings:

  • The 69 U.S. tactical human-intelligence (HUMINT) teams in Iraq were expected to produce at least 120 “information intelligence reports” a day, but they’ve been putting out, on average, just 30—“not because of the lack of activity but because of the lack of guidance and focus” from their superiors.
  • Most of these superiors are junior military intelligence officers who “did not appear to be prepared for tactical assignments.” Even captains “lacked advanced analytical capabilities.”
  • HUMINT databases were stored on separate computer systems, many of them loaded with incompatible software, none of them connected in such a way that the data could be shared. As the report dryly puts it, “Connectivity between the terminals was non-existent, and had an adverse effect on HUMINT mission capability.”

Other phrases that pop up repeatedly in the report: “very little to no analytical skills,” “lacked the foundations of collective management,” “junior officers who had no formal training,” “information overflow,” “no internal analysis capability,” “lack of competent interpreters,” “no ability to analyze the information,” and so forth.

The report also finds that HUMINT personnel were (and, one Pentagon official tells me, still are) often ordered to take part in four-man units that kick down doors and raid buildings. The report notes that it’s a bad idea to use spies in this way: “THTs [tactical HUMINT teams] rely on the rapport they generate with the local population and their ability to collect information. Putting them on a door-kicker team ruins that rapport.”

But here comes the killer (literally). The report adds, in wryly understated parentheses, that when HUMINT agents were assigned with a door-kicker team, “they were usually the #2 man, who statistically is the person who gets shot.” (Italics added.)

In other words, intelligence-gathering and intelligence-analysis teams are held in such low esteem that they’re supplied with mismatched computer systems, they’re manned by junior officers (or more senior officers who’ve received little training), they’re assigned to risky raid operations that have nothing to do with their missions, and, as if to place an exclamation point on their dispensability, they’re put in the raid-team’s most dangerous slot.

Why is this happening? Certainly no one in a position of authority explicitly wants to put spies at greater risk or to withhold resources from those who analyze the information that spies gather. A likely explanation is simply that the incentives—the systems of rewards and penalties that govern modern military life—are geared elsewhere.

Careers tend to be advanced on the battlefield or in the chain of big-ticket weapons procurement—not in the shadows of conflict or under the green eyeshade of “support analysis.”

The Army has long been dominated by armor and artillery, and decisions about its budgets, missions, and priorities tend to be made by officers who rose through the ranks during the Cold War as commanders of armored divisions. A shift has begun to take place, as high-tech munitions and surveillance systems come into the arsenal, and as “rogue regimes” and terrorists replace the Soviet Union and China as the leading threats—but, at least as it affects military institutions, this shift is still in its early phase. (One tangible sign of a shift is the recent appointment of Gen. Peter J. Schoomaker, a former Special Forces commander, as the Army’s chief of staff; but even he can push through reforms only so far in the short run.)

And so, as we have seen in Iraq and Afghanistan, the Army has made great advances in its ability to fight and maneuver with speed and precision on the battlefield—but it’s not very good at building the peace, or ensuring security, afterward. In fact, it’s regressed at this art. In World War II, the U.S. Army had an enormous civil-affairs department, which started planning for the postwar occupation of Germany in 1942—three years before the war ended. Today, the U.S. Army has almost no civil-affairs branch—”peacekeeping” is not rewarded, in budgets or promotions—and so, in Gulf War II, there was, stunningly, no planning for postwar occupation at all.

Similarly, the Army has done much to incorporate the new high-tech weapons and sensors into its arsenal, but, as the lessons-learned report reveals, it has done little to incorporate them into its training programs, support budgets, or promotional practices—in short, its system of incentives, of rewards and penalties.

The Army will feel no great pressure to change this system as long as the Pentagon and the White House keep critical reports such as this one out of the public eye. The suppression of this report is but a piece of a broad administration policy that views secrecy as a default mode. After the Post published an excerpt of the Center for Army Lessons Learned report, the Army shut down the center’s entire Web site. (It has since come back on, but the report was deleted.) The Pentagon recently removed from all public sites the membership list of its advisory Defense Science Board. The official reason is to protect the members from possible terrorist attacks. But this is nonsense. Surely the upper echelon of actual Pentagon officials would be more likely targets, but their names are public. (A more likely, if cynical, motive is to keep secret the board’s corporate affiliations; Richard Perle, after all, was forced to step down as chairman of the Defense Policy Board—but not resign from the board—after press reports revealed financial conflicts of interest. *)

This is too bad for many reasons. Even high-ranking administration officials are realizing that their policies—both for the occupation of Iraq and for the war on terrorism—are floundering. Donald Rumsfeld’s leaked memo, earlier this month, was a cri de coeur for fresh ideas. But nothing fresh can flow if they keep circling the wagons, shutting down Web sites, cutting off criticism, and disdaining analysis.

Correction, Nov. 3, 2003: This piece originally described Richard Perle simply as “chairman,” which may have left the impression that he was affiliated with the Defense Science Board, which was mentioned in the same paragraph. In fact, he was chairman of the Defense Policy Board.