Defense Media Network

TechWatch: The Dual-Use Dilemma

Throughout the Cold War, the U.S. military and NASA were two of the world’s primary technology drivers. Many of the devices and capabilities taken for granted in civilian life today began as government R&D efforts, typically intended to keep the United States ahead of the Soviet Union – and anyone else.

A keystone of that effort has been the Defense Advanced Research Projects Agency (DARPA), created as the Advanced Research Projects Agency (ARPA) in early 1958, shortly after the USSR opened the space age with the launch of Sputnik in 1957. DARPA was America’s first space agency, although primary responsibility for civilian space programs soon passed to the newly created NASA, while military space programs migrated to the armed services.

Even so, DARPA – which played a central role in the nation’s first successful space launch vehicles and satellites – continued to push the technology envelope behind the scenes. That was part of its central mandate: Make certain the United States never again experiences technological surprise. DARPA quickly determined the best way to do that was to be the source of those surprises itself.

The agency also was given a unique status among government research facilities – in the United States or elsewhere: It was permitted to pursue research that might not see practical application for decades, and it was permitted to fail without repercussion. This status allowed DARPA to push forward on technologies for which the presumed primary users often disavowed any interest or support – at least, in the beginning.

Among those technologies were stealth, unmanned aerial vehicles (UAVs), supersonic and hypersonic flight, GPS and many others now considered key to U.S. military dominance. After the collapse of the Soviet Union – leaving no other immediate threat to U.S. technological leadership – many DARPA technologies went public, including UAVs, high-speed flight, GPS, the Internet, and numerous others.

But the end of the Cold War also coincided with a massive surge in civilian use and further development of technologies that began with DARPA, other military or defense contractor labs, and NASA. Almost overnight, it seemed, the government went from a primary technology driver to a minor technology user – minor in terms of quantity and money, at least.

As the 20th century came to a close, DARPA program managers warned the contractor community not to bother bringing the agency a new technology development proposal unless it also came with a plan for commercial use. The reason was simple: Neither the Department of Defense (DoD) nor any other government agency could continue to pay for the development of proprietary technologies, especially in a world in which even the most advanced of those moved through a new generation every 18 to 36 months.

Given that most major military projects took seven to 10 years to move from concept to field testing, then another five or so to reach initial operational capability, followed by an expected 15 to 30 or more years in service, such rapid technological obsolescence was beyond the government’s funding ability. At commercial rates of change, a system could be several technology generations out of date on the day it entered service.

But the increasing reliance on commercial off-the-shelf (COTS) components, many requiring some level of “militarization” to meet DoD environmental and operational requirements, demanded a massive restructuring of military programs. To accommodate the rapid, constant rate of advance and change, all new designs had to be “open architecture” – both hardware and software – with sufficient flexibility to handle changes in size, weight, power, cooling and connectivity while remaining interoperable with less advanced “legacy” equipment in both U.S. and allied inventories.

It also required government R&D labs to maintain the ability to take technologies available to anyone in the commercial marketplace and apply them in ways that kept the U.S. application at least one generation ahead of any potential adversary’s.

The 21st century dawned with a host of new pressures on the world’s only remaining superpower, which found itself shifting from plans for traditional nation-on-nation engagements to asymmetric wars that combined nation-building with counterinsurgency and urban combat. It meant a new kind of enemy – one who wears no uniform, is committed to no nation and combines low-tech and high-tech in an ever-changing approach to combat.

It also brought ever-greater pressure to bear on the concept of “dual-use” technology.

In the late 1990s, at a conference of military and industry technology leaders, a U.S. Marine colonel sounded a warning that, at the time, evoked some laughter from his audience: “The day is fast approaching when COTS will bite us in the ass.”

As time has since demonstrated, he was right, probably in more ways than even he expected.

In coming articles, TechWatch will examine technologies that have moved from government creation to civilian adaptation and from commercial development to military implementation – including use by individual warfighters to improve their lives on deployment – as well as the difficulties of militarizing civilian components and of reconciling the widely differing operational lifetime requirements of the two marketplaces.

It is a dual-use dilemma affecting every nation, but perhaps none so directly as the United States, especially in a new age in which a single opponent has been replaced by multiple current and potential adversaries. Some of those nations are already challenging the U.S. in new technology development and production as they begin to stake out their own claims as regional “great powers” with possible global superpower ambitions.

At a time in which the most likely scenario is no longer massed tanks in a confrontation along the Fulda Gap but mass service disruptions in cyberspace, technology dominance has become the primary definer of power. The focus of TechWatch will be to keep tabs on the who, what, when, where, how and why of that ongoing battle.

By J.R. Wilson

J.R. Wilson has been a full-time freelance writer, focusing primarily on aerospace, defense and high...