Trusted Teammates Wanted

The pressure is on. In the summer of 2017, the Chinese government released a strategy detailing its plan to take the lead in the development of artificial intelligence (AI) by 2030. Weeks later, Russian President Vladimir Putin announced his country’s pursuit of AI technologies, proclaiming, “Whoever becomes the leader in this sphere will become the ruler of the world.”

The Department of Defense (DOD), a leader in researching and developing AI technology, countered swiftly to strengthen and streamline its efforts to define the future of AI. DOD’s 2019 budget authorization established a Joint Artificial Intelligence Center (JAIC) to “coordinate the efforts of the Department to develop, mature, and transition artificial intelligence technologies into operational use,” and a National Security Commission on Artificial Intelligence, composed in part of executives from major American technology firms, to advise the JAIC on the national security implications of advances and trends in AI and related technologies. The DOD has also undertaken a comprehensive assessment of defense-relevant AI technologies.

In early 2019, the Pentagon released a brief summary of its classified Artificial Intelligence Strategy, a vision for capitalizing on the “opportunity to improve support for and protection of U.S. service members, safeguard our citizens, defend our allies and partners, and improve the affordability and speed of our operations.” The strategy, which identifies the JAIC as its focal point, consists of five key approaches: 

  • delivering AI-enabled capabilities that address key missions; 
  • scaling AI’s impact across DOD through a common foundation that enables decentralized development and experimentation; 
  • cultivating a leading AI workforce; 
  • engaging with commercial, academic, and international allies and partners; and
  • leading in military ethics and AI safety. 

The AI Strategy summary also mentions the importance of the Defense Innovation Unit (DIU), a government entity established in 2016 to fast-track the adoption of commercial technology in the military to strengthen national security. Much of the nation’s AI advances occur first in the private sector, and the DIU works with companies to prototype commercial solutions to military problems. 

U.S. companies have been at the leading edge in developing and fielding artificial intelligence: the ability of machines to do things that normally require human intelligence. Early “expert systems,” based on specific rules and bodies of knowledge, are still useful in some forms today, such as tax preparation software. American companies have also led the development of “second wave” AI applications with machine learning capabilities – algorithms that help a computer learn from experience. These second-wave systems are trained to recognize patterns in large pools of data and make decisions based on statistical probabilities. Second-wave applications abound in today’s world, from self-driving vehicles, facial recognition software, and route mapping applications, to personal assistants such as Google Assistant, Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana.
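
The difference between the waves can be made concrete in a few lines of code. In the toy sketch below (Python with scikit-learn assumed; the tax brackets are hypothetical), a first-wave system encodes an expert’s rules directly, while a second-wave system learns to approximate the same behavior purely from examples:

```python
# First wave: handcrafted knowledge. A human expert encodes the rules
# (hypothetical tax brackets, for illustration only).
def rule_based_tax(income):
    if income <= 10_000:
        return income * 0.10
    return 1_000 + (income - 10_000) * 0.20

# Second wave: machine learning. The same behavior is learned from
# examples rather than written down as rules.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

incomes = np.linspace(0, 100_000, 500).reshape(-1, 1)
taxes = np.array([rule_based_tax(x) for x in incomes.ravel()])
learned = DecisionTreeRegressor(max_depth=8).fit(incomes, taxes)

print(rule_based_tax(50_000))           # exact answer from the rules
print(learned.predict([[50_000]])[0])   # close approximation from data
```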

The Next Generation Combat Vehicle’s manned-unmanned teaming concept would leverage a protected tether between the NGCV Optionally Manned Fighting Vehicle and the Robotic Combat Vehicle in order to provide soldiers with the capability to safely engage in combat via remotely controlled autonomous systems.

While many of the DOD’s recent AI-related moves are forward-looking, artificial intelligence and machine learning are already integrated into military planning and operations, with systems either fielded or under development – or both – in a variety of applications, including: 

Intelligence, Surveillance, and Reconnaissance (ISR). The large data sets gathered by the nation’s ISR apparatus make it an area of particular promise for automating the work of human analysts who spend hours sifting through video and other information sources in search of actionable information. Project Maven, launched in the spring of 2017 by the DOD’s newly formed Algorithmic Warfare Cross-Functional Team, is envisioned as a system that will use machine learning and AI to differentiate among people and objects captured in thousands of hours of aerial drone footage.

Cybersecurity. AI tools can help identify threats, breaches, and potential “traps” or tools for implanting malware, and neutralize cyber threats before they can be activated on military systems. Unlike conventional cybersecurity tools, which look for historical matches to known malicious code, intelligent tools can be trained to detect anomalies in broader patterns of network activity.
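
The anomaly-based approach can be illustrated with a minimal sketch (Python with scikit-learn assumed; the “network flow” features and all numbers are invented). The model trains only on normal traffic, then flags an observation that matches no known signature but deviates sharply from the learned pattern:

```python
# A minimal sketch of anomaly-based detection. The features here
# (bytes/s, packets/s, distinct ports) are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Train only on normal traffic: the model learns its broad pattern
# rather than matching signatures of known malicious code.
normal_traffic = rng.normal(loc=[500.0, 40.0, 3.0],
                            scale=[50.0, 5.0, 1.0],
                            size=(1000, 3))
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# One typical flow, and one that matches no known signature but
# deviates sharply from the learned pattern of activity.
new_flows = np.array([[510.0, 42.0, 3.0],
                      [4800.0, 300.0, 45.0]])
print(detector.predict(new_flows))  # 1 = normal, -1 = anomaly
```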

Logistics and Maintenance. Meticulous analysis of maintenance and supply records can alert military personnel when tasks such as resupplying a depot or repairing a machine are needed. The Air Force, for example, is beginning to use AI to tailor maintenance schedules to individual F-35 aircraft, rather than use a fleet-wide schedule. The Automated Logistics Information System extracts sensor data from the aircraft’s engines and onboard systems and feeds that data into algorithms that predict when technicians need to inspect a system or replace a part. The Army is taking a similar approach to develop tailored maintenance schedules for its Stryker fleet.
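
The underlying technique is straightforward to sketch. In the hypothetical example below (Python with scikit-learn assumed; the sensor features, labels, and threshold are invented, not drawn from the F-35 program), a model trained on historical sensor readings flags the individual aircraft most likely to need attention:

```python
# A minimal sketch of predictive maintenance on invented sensor data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.normal(1.0, 0.3, n),     # vibration amplitude
    rng.normal(650.0, 40.0, n),  # turbine temperature
    rng.uniform(0, 500, n),      # hours since last inspection
])
# Hypothetical ground truth: hot, worn, high-vibration parts fail sooner.
risk = 0.8 * X[:, 0] + 0.01 * (X[:, 1] - 650.0) + 0.004 * X[:, 2]
y = (risk + rng.normal(0, 0.3, n) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Schedule inspections for aircraft whose predicted failure risk is high,
# instead of inspecting the whole fleet on a fixed calendar.
flagged = model.predict_proba(X_test)[:, 1] > 0.5
print(f"{flagged.sum()} of {len(X_test)} aircraft flagged for early inspection")
```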

A U.S. Air Force F-35A Lightning II assigned to Hill Air Force Base, Utah, conducts a training flight with F-16 Fighting Falcons assigned to Kunsan Air Base, Republic of Korea, over the city of Gunsan, ROK, Dec. 1, 2017. The Air Force Research Laboratory is looking into pairing unmanned older generation fighters such as the F-16 with fifth-generation fighters like the F-35.

Predictive Analytics. A core capability of today’s second-wave AI systems is identifying trends and patterns and then predicting the likelihood of a certain event based on those patterns. Predictive analytics software, which has been used for years by federal and regional law enforcement agencies, was integrated into the Marine Corps’ Afghanistan operations in 2011 and used to create lists of possible bomb-makers aiding insurgents.

Autonomous or Semi-autonomous Vehicles. While there is no true “self-driving” vehicle yet, each service branch has fielded and continues to research vehicles on and under water, in the air, and on the ground, with varying degrees of autonomy. 

In the spring of 2018, after Michael Griffin, under secretary of defense for research and engineering, told lawmakers that 52 percent of combat casualties were attributed to personnel delivering food, fuel, and other logistics, he vowed that self-driving vehicles would appear on the battlefield before they appeared on the streets. As Army and Navy researchers continued their work to develop an unmanned joint tactical aerial resupply vehicle, the Army’s Next Generation Combat Vehicle (NGCV) Cross-Functional Team announced that its top priority would be to replace the Bradley Fighting Vehicle with an Optionally Manned Fighting Vehicle (OMFV).

The Air Force Research Laboratory recently completed the second phase of evaluations that paired an unmanned older-generation fighter jet, acting as a robot wingman, with a piloted F-35 or F-22. In the spring of 2019, the Navy formed an experimental unit, Surface Development Squadron ONE (SURFDEVRON ONE), that teams crewed and robotic ships, in part to accelerate the testing of new technologies.

Autonomous Weapon and Targeting Systems. “Smart” weapons have their roots in the radio-guided bombs and missiles developed during and after World War II. Today’s autonomous weapon platforms use computer vision to identify and track targets, and missile targeting software is being developed to deploy countermeasures or perform evasive maneuvers. There are no autonomous weapon platforms being designed to fire ordnance without the express approval of a human operator.

The AI systems in use today are useful tools that reduce the burden of certain tasks – often tedious, repetitive, mind-numbing tasks, such as combing through mountains of data. AI and machine learning have done much to free up military personnel to do important work, but second-wave applications remain conspicuously limited, capable of performing only within the confines of narrowly defined tasks. Outside of those confines, they tend to get lost – and even within them, they sometimes fail when confronted with new or unexpected circumstances.

Military operations – particularly the multi-domain operations envisioned by Pentagon doctrine – are fraught with new and unexpected circumstances, and military AI researchers, even as they work to develop second-wave systems that can be fielded within the next several years, are intently focused on solutions that can usher in a “third wave” of AI systems that will be trusted teammates, rather than data-mining tools. 

The Army Futures Command is investigating the network requirements needed to enable autonomous vehicle support in contested, multi-domain environments.


Second-Wave Gaps: What the Military Needs from AI 

Virtually every AI system in use or under development illustrates the difference between what AI can do today and what the military would like it to do in the future; the autonomous aircraft, submarines, ships, and ground vehicles fielded today, for example, all require some level of human supervision and interaction.

The Army and Marine Corps have been testing prototypes of driverless ground vehicles designed to accomplish independent tasks, such as the Marines’ Multi-Utility Tactical Transport (MUTT), a remote-controlled ATV-sized vehicle capable of carrying equipment. In the coming years, the Marine Corps hopes to add increasing levels of autonomy to MUTT operations. At the U.S. Army Combat Capabilities Development Command (CCDC), Army Research Laboratory, Dr. Stuart Young leads the AI for Maneuver and Mobility initiative, a program focused on delivering a robotic teammate to Army tactical units within the next five to seven years. Young’s team is working to develop robotic vehicles that can do more than haul freight. With input from the Army’s Next Generation Combat Vehicle Cross-Functional Team, Young’s team is exploring the ability to perform autonomous maneuvers. “You can think of that as robot vehicles on the battlefield,” said Young, “teaming with soldiers and adding operational tempo in a resilient manner.” 

Currently, U.S. forces rely on dismounted warfighters to actively patrol battlespaces, often in cluttered urban areas, to identify threats and maintain security. It’s time-consuming and dangerous, exposing warfighters to significant risk. “The way the Army envisions deploying these robots would be out in front of a manned formation,” Young said – like a hunting dog. Robot scouts would locate or make initial contact with an enemy, conducting reconnaissance while keeping soldiers out of harm’s way. “You might be looking for a specific target,” Young said, “or you might just be trying to find the enemy in general and give the command options on the battlefield.”

Young imagines the earliest iterations of the robot combat vehicle will be remotely operated, and evolve into a vehicle that will go where it’s told and conduct reconnaissance. But to imagine all the situations an unsupervised robot might encounter in such a situation, and all its possible responses to these circumstances, is to begin to grasp how much work needs to be done to bridge the technological gaps that, once traversed, may enable the Army to turn a remote-controlled robot into a trusted autonomous teammate.

Dr. Tien Pham, chief scientist of the CCDC ARL Computational and Information Sciences Directorate, and his ARL colleagues have made a study of these technology challenges, which, he emphasized, are fundamental to all AI and machine learning applications, not just to the battlefield or to autonomous vehicles:

The Dynamic Complexity of Battlefield Data. The machine learning applications that perform best today, said Pham, are those in controlled environments, with access to copious data – and data of one particular type or modality: photos, for example, or geospatial data. “Machine learning techniques require lots of data, and well-curated data, and clean data, and labeled data,” Pham said. The data encountered by a robot teammate or software agent in battle is more likely to be what he describes as “dirty, dinky, and deceptive.” It’s likely to be degraded somewhat, communicated across a distributed network and taken in by a collection of heterogeneous sensors, in various modalities: video, RF, audio, geospatial, etc. It’s likely to offer a very small sample from which to determine a response.

Vice Adm. Rich Brown, commander, Naval Surface Force, U.S. Pacific Fleet, presents the Legion of Merit to Capt. Scott Carroll in recognition of his leadership of Commander Zumwalt Squadron ONE during its transition into the newly established Surface Development Squadron ONE at Naval Base San Diego May 22, 2019. SURFDEVRON ONE is expected to follow the innovative legacy of Zumwalt Squadron ONE and Adm. Elmo Zumwalt to integrate unmanned surface vessels (USV) and support fleet experimentation to accelerate delivery of new warfighting concepts and capabilities to the fleet.

“The complexities are immeasurable, when you think about it,” said Young. “Robots operating in a military environment are going to be dealing with civilians, potentially. They’re going to be dealing with dynamic adversaries that are trying to jam us or trying to spoof our communications and our AI.” Humans are good at generalizing and taking information from a cluttered world and applying it to a new situation, but traditional machine learning algorithms are brittle to change. “If they don’t have the data in their training,” Young said, “they fail miserably. And that’s not sufficient for the problem we’re trying to solve. So we’re looking at approaches like reinforcement learning, learning from demonstration – different types of learning techniques … to help improve the fundamental algorithms and then applying those fundamental algorithms to the domains that we’re worried about.”
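
Reinforcement learning, one of the techniques Young mentions, can be shown in miniature. In the toy sketch below (Python with NumPy; the six-state corridor world is invented), an agent learns which action pays off through trial and error rather than from labeled training data:

```python
# Minimal tabular Q-learning on a toy 1-D corridor.
import numpy as np

n_states, n_actions = 6, 2              # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))     # action-value estimates, learned online
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(2)

for episode in range(500):
    s = 0
    for step in range(100):             # safety cap per episode
        # Epsilon-greedy with random tie-breaking: explore sometimes,
        # otherwise take the action currently believed to be best.
        if rng.random() < epsilon or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Nudge the estimate toward observed reward plus discounted lookahead.
        Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == n_states - 1:           # reached the goal; end the episode
            break

print(Q.argmax(axis=1))  # greedy policy: 1 ("right") in every non-goal state
```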

Resource Constraints at the Point of Need. Machine learning applications usually send queries into the cloud, to access terabytes of data, but at the tactical edge of operations, often in remote areas with limited or damaged infrastructure, the internet isn’t always available – and beyond that, computing in a forward-deployed tactical environment will require distributed processing over hardware that has extreme limits on size, weight, power, and time. “A lot of the current capabilities require a link to a server for further processing, or specialized GPUs [graphics processing units],” said Pham. “It just takes a lot of power and a lot of resources to do that.” 

Marines and sailors assigned to Lima Company, 3rd Battalion, 5th Marine Regiment move toward their objective with a Logistics Multipurpose Unmanned Tactical Transport (Log-MUTT) carrying a portion of their gear during their Integrated Training Exercise at the Marine Corps Air-Ground Combat Center, Twentynine Palms, California, in November 2016, a training event within their predeployment training curriculum. The Marine Corps Warfighting Laboratory provided some different unmanned aircraft systems (UAS) and unmanned ground systems (UGS) for the Marines to employ during their training cycle.

Dr. Manuel Vindiola, a research scientist at the CCDC ARL, is part of a team that is trying to shrink the resource requirements of AI processing, in part by exploring “neuromorphic” processing architectures that function more like the human brain and its web of spiking neurons, rather than in more traditional linear processes. “We’re looking at a variety of ways to do this kind of processing on low-powered devices,” he said.
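
The basic unit such hardware emulates can be simulated in a few lines. This sketch (Python with NumPy; all constants are illustrative) models a single leaky integrate-and-fire neuron, which communicates only when its input drives it past a threshold rather than computing continuously:

```python
# A single leaky integrate-and-fire neuron, the building block of the
# spiking models that neuromorphic chips emulate. Constants are
# illustrative only.
import numpy as np

dt, tau = 1.0, 20.0                 # time step and membrane time constant
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
T = 200
# Pulsed input current: 25 steps on, 25 steps off.
current = np.where(np.arange(T) % 50 < 25, 0.08, 0.0)

v, spikes = v_rest, []
for t in range(T):
    # The membrane potential leaks toward rest while integrating input.
    v += dt * ((v_rest - v) / tau + current[t])
    if v >= v_thresh:               # threshold crossing emits a spike...
        spikes.append(t)
        v = v_reset                 # ...and the potential resets.

print("spike times:", spikes)       # the neuron is silent between pulses
```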

Unpredictable, Ungeneralizable, and Unexplainable Conclusions. This is mostly about algorithms that haven’t been written yet, and will involve the evolution of learning techniques, such as reinforcement learning, mentioned by Young. AI systems are designed to deliver specific outcomes based on reams of data, but even then, they sometimes produce puzzling results. More mature AI systems, said Pham, will need to form a generalized response aimed at an objective, based on scant information. “We have to think about how to develop these technologies that can learn and adapt with not a lot of prior knowledge,” he said. 

A robotic teammate equipped with third-wave AI will achieve predictable outcomes when it encounters unexpected circumstances, because it will be able to generalize from the given context. “If you’ve got to go to the store for milk, and they don’t have milk,” said Young, “well, your goal was to get milk. Your goal is not to go to the store. So you have to then maybe make an alternative plan.”
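
Young’s milk run translates almost directly into a goal-directed planning sketch (pure Python; the goal, sources, and stock levels are invented). Because the objective is the goal state rather than any particular plan step, a failed step triggers replanning instead of failure:

```python
# Toy goal-directed planning: pursue the goal, not the plan.
GOAL = "have milk"
PLAN_OPTIONS = ["corner store", "supermarket", "neighbor"]  # fallback order
IN_STOCK = {"corner store": False, "supermarket": True, "neighbor": True}

def pursue(goal):
    for source in PLAN_OPTIONS:
        if IN_STOCK[source]:
            return f"{goal}: obtained from {source}"
        # This step failed, but the goal still stands: replan and retry.
    return f"{goal}: every plan failed; report back for new orders"

print(pursue(GOAL))  # -> have milk: obtained from supermarket
```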

Sea Hunter, a new class of unmanned sea surface vehicle developed in partnership between the Office of Naval Research (ONR) and the Defense Advanced Research Projects Agency (DARPA). The Navy’s experimental SURFDEVRON ONE will explore teaming manned and unmanned ships.

In an environment in which an AI system might be asked to identify an enemy target, it will also be important to know the why behind its reasoning. Today’s systems are “black boxes,” taking data in and spitting conclusions out. Explainable AI, a concept being investigated by the Defense Advanced Research Projects Agency (DARPA), will share its thinking with future users. “You want deviations from the expected to be transparent,” Young said, “so that the soldier can say: ‘Okay, I understand what it’s doing. It didn’t see that target. So that’s why it did what it did.’ You don’t want it to be in the case that we’re currently in, which is often: ‘I have no idea why the robot did that.’” Explainable AI would also make it possible for a human teammate to determine the likelihood that a robotic teammate was deceived by an adversary.
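
A toy flavor of that transparency is sketched below (pure Python; the track features, weights, and checks are invented): every verdict arrives paired with the evidence behind it. This is a handcrafted rule trace, far simpler than the learned-model explanations DARPA is pursuing, but it suggests what “sharing its thinking” might look like to a user:

```python
# Toy "explainable" classification: every verdict carries its reasons.
def classify_with_rationale(track):
    score, rationale = 0.0, []
    if track["speed_kts"] > 400:
        score += 0.5
        rationale.append("speed consistent with a fast jet")
    if track["emitter"] == "unknown":
        score += 0.3
        rationale.append("emitter not in the friendly database")
    if track["answered_iff"]:
        score -= 0.6
        rationale.append("answered IFF interrogation")
    verdict = "possible threat" if score > 0.4 else "likely friendly"
    return verdict, rationale

verdict, why = classify_with_rationale(
    {"speed_kts": 480, "emitter": "unknown", "answered_iff": False}
)
print(verdict, "|", "; ".join(why))
```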

A DARPA graphic illustrating the three waves of artificial intelligence (AI). The first wave focused on handcrafted knowledge, in which experts characterized their understanding of a particular area, such as income tax return preparation, as a set of rules. The second wave focused on machine learning, which creates pattern-recognition systems by training on large sets of data. DARPA believes that the next major wave of progress will combine techniques from the first and second waves to create systems that can explain their outputs and apply commonsense reasoning to act as problem-solving partners.

A trusted robotic teammate would be a huge benefit to soldiers in a tactical unit, Young said, reducing both physical risks and cognitive burdens, as well as the bandwidth required for unit communications. But as Vindiola pointed out, the distance between second-wave and third-wave AI capabilities is vast. “There’s a pretty large gulf there,” he said. “It’s very ambitious and we have a long way to go. It shouldn’t be thought of as the next incremental step. It’s a big step, a leap, compared to what we can do today.”

Nevertheless, military AI researchers such as those at the CCDC ARL are taking the incremental steps necessary to make this leap. Vindiola’s group recently began a new effort to decentralize the task orchestration of a distributed computing network. Tactical units communicate and coordinate efforts among several devices, sensors, people, and agents working together, and the failure of one element can mean the failure of the entire unit. Vindiola’s group has begun to build resilience into simulated situations; when a camera gets knocked out, for example, it can be programmed to alert the rest of the unit, and another device, such as an aerial drone, can automatically step in to monitor until the camera is repaired or replaced. The team will soon begin to explore the addition of machine learning algorithms into this network, perhaps enabling it to automatically reposition or reprogram assets based on objectives.
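
The failover logic can be sketched in miniature (pure Python; the asset names, capabilities, and idle-peer selection rule are invented): when one node drops out, a living, idle peer that can do the job picks up the orphaned task:

```python
# Toy distributed-unit failover: retask an idle, capable peer.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Asset:
    name: str
    capabilities: set
    task: Optional[str] = None
    alive: bool = True

def reassign_on_failure(unit, failed_name):
    failed = next(a for a in unit if a.name == failed_name)
    failed.alive = False
    orphaned = failed.task
    for peer in unit:
        # Simple rule: first living, idle asset that can do the job.
        if peer.alive and peer.task is None and orphaned in peer.capabilities:
            peer.task = orphaned
            return f"{peer.name} takes over '{orphaned}' from {failed_name}"
    return f"no available asset can cover '{orphaned}'"

unit = [
    Asset("camera-1", {"monitor"}, task="monitor"),
    Asset("drone-1", {"monitor", "relay"}),
    Asset("radio-1", {"relay"}, task="relay"),
]
print(reassign_on_failure(unit, "camera-1"))
# -> drone-1 takes over 'monitor' from camera-1
```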

U.S. Department of Defense Chief Information Officer Dana Deasy and the Director of the Joint Artificial Intelligence Center, U.S. Air Force Lt. Gen. John N.T. Shanahan, hold a roundtable meeting at the Pentagon in Washington, D.C., Feb. 12, 2019.

For the past several years, Young’s team has been working with researchers at Carnegie Mellon University to develop algorithms that will make the conclusions of AI systems sturdier and more reliable, rooted in available context. “We call it semantic classification,” he said. Essentially, it involves restricting conclusions to known facts about the world. For example, Young said, a system designed to detect humans wouldn’t inspire much confidence if it detected three people in the clouds, one in a tree, and three on the ground. “It sounds trivial,” he said, “but you can use other semantic information about the world to improve on that … It’s a low-level capability with a lot of rich opportunities to improve overall performance of a system.”
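
In code, the idea reduces to checking each detection against a simple prior about the world (pure Python; the labels, regions, and confidences are invented for illustration):

```python
# Toy semantic filtering: reject detections that contradict simple
# facts about where a class of object can plausibly appear.
DETECTIONS = [
    {"label": "person", "confidence": 0.91, "region": "ground"},
    {"label": "person", "confidence": 0.88, "region": "clouds"},
    {"label": "person", "confidence": 0.75, "region": "tree"},
]

# Semantic prior: people stand on the ground, not in the sky.
PLAUSIBLE_REGIONS = {"person": {"ground", "road", "rooftop"}}

def semantically_filtered(detections):
    return [
        d for d in detections
        if d["region"] in PLAUSIBLE_REGIONS.get(d["label"], {d["region"]})
    ]

print(semantically_filtered(DETECTIONS))  # only the ground detection survives
```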

DARPA, which has been working on the third-wave leap from multiple angles for the past few years, recently threw even more of its weight behind the effort, launching a $2 billion initiative, the AI Next Campaign, in September of 2018. The campaign includes a series of high-risk, high-payoff research projects with the goal of enabling machines, given the right context, to learn from just a few examples and become the kind of systems Young and other military AI researchers envision: adapting to changing situations, identifying patterns not seen before, creatively applying new approaches to solve new problems, and understanding impacts and trade-offs. “We want them to do a better job of understanding the environment,” Young said, “so they can be better teammates.”


This article originally appeared in Defense R&D Outlook 2019.

By Craig Collins, a veteran freelance writer and a regular Faircount Media Group contributor.