
The Dark Side of Unmanned Systems Autonomy

Where is increased autonomy for military unmanned systems leading?

The Dark Side of Autonomy?

“Open the pod bay doors, HAL.”
“I’m sorry, Dave. I’m afraid I can’t do that.”

– From Stanley Kubrick’s 2001: A Space Odyssey

While few today fear that a 21st-century HAL will turn on its masters, the issues involved in fielding increasingly autonomous unmanned systems (UxS) are complex, challenging, and increasingly contentious. Improvements in other aspects of UxS – propulsion, payload, stealth, speed, endurance, and other attributes – are, and will remain, important. But coming to grips with how much autonomy is enough, and how much may be too much, is arguably the most important issue we need to address with unmanned systems over the next decade.

A large part of this autonomy for UxS resides in their ability to “sense and adapt.” This will enable UxS to achieve much greater speed in decision-making than is currently possible, and allow “blue forces” to act within an adversary’s OODA (observe, orient, decide, and act) loop. Thus, as the environment and/or mission changes in unpredictable ways, the ability to sense and adapt will allow UxS to find the optimal solution for achieving their mission without the need to rely on constant human operator oversight, input, and decision-making. But are we ready for UxS to operate without our decision-making, to operate inside our OODA loops?
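To make the idea concrete, here is a minimal, purely illustrative sketch of what a "sense and adapt" (OODA-style) control loop might look like in software. The sensor model, thresholds, and function names are all hypothetical assumptions for illustration, not drawn from any fielded system.

```python
# Illustrative sketch of a sense-and-adapt (OODA) loop. All names,
# thresholds, and the toy sensor model are hypothetical.
from dataclasses import dataclass
import random


@dataclass
class Observation:
    threat_level: float   # 0.0 (benign) to 1.0 (hostile)
    distance_km: float


def observe() -> Observation:
    """Stand-in for onboard sensing; a real system would fuse radar, EO/IR, etc."""
    return Observation(threat_level=random.random(), distance_km=random.uniform(1, 50))


def orient(obs: Observation) -> str:
    """Interpret the observation against the current mission context."""
    return "contested" if obs.threat_level > 0.6 else "permissive"


def decide(environment: str) -> str:
    """Pick a course of action. Lethal decisions are deliberately excluded here;
    the article argues those must stay with a human operator."""
    return "reroute_and_report" if environment == "contested" else "continue_mission"


def act(course_of_action: str) -> None:
    print(f"executing: {course_of_action}")


if __name__ == "__main__":
    # One pass through the loop; a real vehicle would run this continuously,
    # re-planning as the environment or mission changes.
    act(decide(orient(observe())))
```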


In an article titled "Morals and the Machine," The Economist addressed the issue of autonomy and humans in the loop this way:

As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming – or at least appearing to assume – moral agency. Weapons systems currently have human operators “in the loop,” but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously. As that happens, they will be presented with ethical dilemmas. … More collaboration is required between engineers, ethicists, lawyers and policymakers, all of whom would draw up very different types of rules if they were left to their own devices.

The Legged Squad Support System (LS3) four-legged robot. DARPA photo

Bill Keller put the issue of autonomy for UxS this way in his op-ed, “Smart Drones,” in the New York Times in March 2013:

If you find the use of remotely piloted warrior drones troubling, imagine that the decision to kill a suspected enemy is not made by an operator in a distant control room, but by the machine itself. Imagine that an aerial robot studies the landscape below, recognizes hostile activity, calculates that there is minimal risk of collateral damage, and then, with no human in the loop, pulls the trigger. Welcome to the future of warfare. While Americans are debating the president’s power to order assassination by drone, powerful momentum – scientific, military and commercial – is propelling us toward the day when we cede the same lethal authority to software.

The DoD is taking the issue of human control of UxS seriously and is beginning to issue policy to ensure that humans do remain in the OODA loop. A November 2012 directive signed by Deputy Secretary of Defense Ashton Carter – DoD Directive 3000.09, "Autonomy in Weapon Systems" – issued the following guidance:

Human input and ongoing verification are required for autonomous and semi-autonomous weapon systems to help prevent unintended engagements. These systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force. Humans who authorize the use of, or operate these systems, must do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules and applicable rules of engagement. An autonomous system is defined as a weapon system that, once activated, can select and engage targets without further intervention by a human operator.

These are the kinds of directives and discussions that are – and should be – part of the dialogue between and among policy makers, military leaders, industry, academia, and the science and technology community as the design and operation of tomorrow’s UxS are thoughtfully considered. This is not a trivial pursuit and – in Albert Einstein’s words – will require a new way of “figuring out how to think about the problem.” And importantly, most informed discussion begins with the premise that adversaries who intend to use UxS against our interests will not be inhibited by the kinds of legal, ethical, and moral strictures to which the United States currently adheres.


For these reasons, further discussions and debate on UxS issues are crucial if we envision unmanned systems as warfighting tools – and indeed as warfighters' partners – in the increasingly challenging future security environment. Industry must be part of these discussions. As Lt. Gen. David Deptula suggested, "The challenge before us is to transform today to dominate an operational environment that has yet to evolve, and to counter adversaries who have yet to materialize." UxS will be central to confronting this challenge.


Designing in the Right Degree of Autonomy


As the services look to achieve the right balance of autonomy and human interaction – to reconcile these two often-opposing forces and get them "just right" – while pushing UxS capabilities to the cutting edge, they must turn to industry for innovative solutions.

The capabilities required to find this "just right" balance must leverage many technologies that are still emerging. But few companies have the discretionary research and development funds to keep running down blind alleys in their pursuit of capabilities that the services know they need – but as yet only dimly perceive. Without putting too fine a point on it, the military knows what it wants to achieve, but not what technologies or even capabilities it needs to field UxS with the right balance of autonomy and human interaction. The Defense Science Board report, "The Role of Autonomy in DoD Systems," put it this way:

Instead of viewing autonomy as an intrinsic property of unmanned systems in isolation, the design and operation of unmanned systems needs to be considered in terms of human-systems collaboration. … A key challenge for operators is maintaining the human-machine collaboration needed to execute their mission, which is frequently handicapped by poor design. … A key challenge facing unmanned systems developers is the move from a hardware-oriented, vehicle-centric development and acquisition process to one that emphasizes the primacy of software in creating autonomy.

It is important for industry – and all of industry, not just UxS vehicle manufacturers – to focus on reports like this one, for the issue of "the primacy of software" is one that deserves special consideration. The manned F-35 Lightning II has more than 8 million lines of onboard computer code – and counting – and there is still human supervision by the pilot! How many lines of code will need to be built into an unmanned system to get the balance of autonomy and human interaction "just right"?

While there is no point solution or easy answer to this challenge, there are some trend lines industry can leverage as it invests R&D dollars, so that it can ultimately produce UxS the services will embrace and, indeed, not be able to live without. The focus for industry in the next decade-plus should be to:

  • Make the command, control, communications, and computers (C4) architecture a priority in UxS development
  • Build in a “sense and adapt” capability in all UxS
  • Concurrently develop concepts of operations (CONOPS) and tactics, techniques, and procedures (TTP) for each UxS
  • Leverage queuing theory to enable UxS to balk or renege on a mission (see the sketch following this list)
  • Develop target recognition algorithms that are on a par with those of manned systems
  • Build anticipatory intelligence and decision-support software into UxS
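
On the queuing-theory point above: in queuing terms, a task "balks" when it is declined because the queue is already too long, and "reneges" when it is abandoned after waiting too long. The sketch below is a minimal, hypothetical illustration of how a UxS tasking manager might apply those two rules; the thresholds, task names, and functions are invented for illustration, not taken from any real system.

```python
# Hypothetical balk/renege logic for a UxS tasking queue. Thresholds and
# task names are invented for illustration only.
from collections import deque

MAX_QUEUE_DEPTH = 3        # balk if this many tasks are already waiting
MAX_WAIT_MINUTES = 20.0    # renege if a queued task has waited longer than this

task_queue: deque = deque()   # each entry: (task_name, minutes_waited)


def accept_task(task_name: str) -> bool:
    """Balking decision: refuse a new task if the queue is already saturated."""
    if len(task_queue) >= MAX_QUEUE_DEPTH:
        print(f"balking: queue full, declining {task_name}")
        return False
    task_queue.append((task_name, 0.0))
    return True


def age_queue(minutes_elapsed: float) -> None:
    """Reneging decision: drop any queued task that has waited past its limit."""
    for _ in range(len(task_queue)):
        name, waited = task_queue.popleft()
        waited += minutes_elapsed
        if waited > MAX_WAIT_MINUTES:
            print(f"reneging: {name} waited {waited:.0f} min, abandoning")
        else:
            task_queue.append((name, waited))


if __name__ == "__main__":
    for t in ("resupply_alpha", "isr_orbit", "relay_comms", "bda_pass"):
        accept_task(t)        # the fourth task triggers a balk
    age_queue(25.0)           # every remaining task reneges
```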

The last point regarding decision support software is one where the “unmanned community” has yet to leverage the cutting-edge technology that already populates military command centers. For the relatively small numbers of UxS that will engage an enemy with a weapon, this is crucial. Prior to firing a weapon, the unmanned platform need only provide the operator – and there must be an operator in the loop – with a “pros and cons” decision matrix regarding what that firing might entail. When we build that capability into UxS we will, indeed, have gotten it “just right.”
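
A minimal sketch of what such a decision aid could look like follows, assuming a hypothetical engagement-review interface: the platform only assembles and presents the "pros and cons" matrix, and weapon release waits on an explicit operator decision. Every field and function name here is illustrative, not drawn from any real weapon-control system.

```python
# Hypothetical "pros and cons" decision matrix with an operator in the loop.
from dataclasses import dataclass, field


@dataclass
class EngagementMatrix:
    target_id: str
    pros: list = field(default_factory=list)
    cons: list = field(default_factory=list)

    def render(self) -> str:
        lines = [f"Proposed engagement: {self.target_id}", "PROS:"]
        lines += [f"  + {p}" for p in self.pros]
        lines += ["CONS:"] + [f"  - {c}" for c in self.cons]
        return "\n".join(lines)


def request_engagement(matrix: EngagementMatrix, operator_approves) -> bool:
    """The platform presents the matrix; only a human decision releases a weapon."""
    print(matrix.render())
    return bool(operator_approves(matrix))


if __name__ == "__main__":
    matrix = EngagementMatrix(
        target_id="track-0042",
        pros=["hostile act observed", "positive identification from two sensors"],
        cons=["civilian structures within 200 m", "friendly unit 5 km away"],
    )
    # The "operator" here is a stand-in function; in practice this is a person
    # at a control station who can approve, deny, or ask for more information.
    approved = request_engagement(matrix, operator_approves=lambda m: len(m.cons) == 0)
    print("weapon release authorized" if approved else "holding fire")
```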

This article was first published in Defense: Fall 2013 Edition.



Captain George Galdorisi is a career naval aviator. He began his writing career in 1978...