The U.S. military is rushing to deploy solar-energy equipment on the battlefields of Afghanistan, according to Elizabeth Rosenthal's story in the Oct. 5 New York Times, and, though the article doesn't say so, this could prove to be a huge boon—even a turning point—for renewable energy here at home.
There are two main motives for the military's push, neither having anything to do with green consciousness. First, transporting fossil fuel to the landlocked front lines is hideously expensive: The Army and Marines pay only $1 a gallon for the fuel itself but up to $400 a gallon for the truck convoys that move it through Pakistan and up the Khyber Pass. Second, security along these roads is tenuous. Last week, militants blew up one such convoy; in the past three months, six Marines escorting the convoys have been killed.
One Marine unit is now setting up portable solar panels, solar tent shields, solar-powered rechargers, and other energy-saving gear, with other units soon to follow. Meanwhile, the Navy last year deployed the first amphibious ship powered by electricity instead of fossil fuels—saving 900,000 gallons of fuel on its maiden voyage from Mississippi to San Diego. The Navy is also experimenting with fuel made from algae. And the Air Force is planning to convert its entire fleet of airplanes to run on a mix of jet fuel and biofuel.
This is all interesting in its own right, but the decisive impact may be on the civilian energy economy.
In the last half-century, many of the United States' great technological breakthroughs have been driven by the demand created by large-scale government projects—which, in this country, has mainly meant military and space projects.
For example, the microchip—the building block of the digital revolution—was introduced by Texas Instruments at the March 1959 radio engineers' trade show. But it took off as a viable commercial product only after President John F. Kennedy pledged to send a manned spacecraft to the moon and after he and his defense secretary, Robert McNamara, funded the Minuteman II intercontinental ballistic missile.
The microchips made these programs possible. Conventional transistors would have been too big, heavy, and hot to fit inside those rockets' nose cones and power their guidance systems; and simply wiring together all the circuitry by hand would have been prohibitively expensive.
But, more to the point, those programs made the microchips possible, too. NASA's rockets and the Air Force's Minuteman missiles created a demand for the chips that otherwise did not exist. The large-scale production yielded economies of scale, which lowered the chips' price, to the point where manufacturers could order them for commercial goods, which boosted production and thus lowered costs still further, and on and on the cycle continued.
In 1961, when Kennedy announced the manned space program and the Minuteman missile, a single microchip cost $32. By 1971, the cost had plunged to $1.25. (By 2000, it had dropped to under a nickel.) Without that initial spur of demand from the government, the chip might have vanished, along with who knows how many other technological wonders that never took off because they cost too much—and the world today would be incalculably different.
The same can be said of data-processing computers. In 1950, only 20 computers existed in the entire United States; most of them were being used by the military, mainly at the nuclear weapons laboratories. Even by 1954, only one private company, General Electric, had ordered a computer—the UNIVAC, or Universal Automatic Computer, a room-sized monstrosity, powered by 8,000 vacuum tubes, built by the long-defunct Eckert-Mauchly Computer Corp.