
DARPA and the Exploration of Artificial Intelligence

A mere 70 years ago, when early electronic computers ran on vacuum tubes and filled entire rooms, researchers were already striving to enable these machines to think as people do. Only a few years after its start in 1958, DARPA began playing a central role in realizing this ambition by laying some of the groundwork for the field of artificial intelligence (AI). Today, the remarkable pattern-recognition capabilities of deep neural networks, whose circuitry and operation are modeled after those of the brain, have the potential to enable applications ranging from agricultural machines that spot and zap weeds to apps that, with the help of augmented-reality displays on smartphones, help people diagnose and fix problems with home appliances.

DARPA is now funding research to enable AI programs to clearly explain the basis of their actions and how they arrive at particular decisions. AI that can explain itself should enable users to trust it, a good thing as increasingly complex AI-driven systems become commonplace. A next challenge for AI researchers will be to emulate human common sense, which is a product of millions of years of human evolution.

DARPA has been at the forefront of establishing the foundations of AI since 1963. J.C.R. Licklider, the first director of DARPA's Information Processing Techniques Office, funded the Project on Machine-Aided Cognition (MAC) at MIT and a similar project at Stanford to research a wide range of AI topics, such as proving mathematical theorems, natural language understanding, robotics, and chess. Early AI researchers focused on chess because it presents a difficult intellectual challenge for humans, yet its rules are simple enough to describe in a computer programming language.

In the 1950s and 1960s, computers were automating boring and laborious tasks, like payroll accounting, or solving complex mathematical equations, such as plotting the trajectories of the Apollo missions to the moon. Not surprisingly, AI researchers ignored the boring applications of computers and instead conceived of artificial intelligence as computers solving complex mathematical equations, expressed as algorithms. Algorithms are sets of simple instructions that computers execute in sequence to produce results, such as calculating the trajectory of a lunar lander, when it should fire its retro rockets, and for how long.
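The idea can be made concrete with a short sketch. The code below is purely illustrative and not actual guidance software: the gravity constant is the real lunar value, but the braking figure and the simple free-fall model are assumptions chosen only to show an algorithm as a fixed sequence of simple steps.

```python
# Purely illustrative: an algorithm as a fixed sequence of simple steps that
# decides when a descending lander should fire its retro rockets.
# The braking figure and the free-fall model are invented assumptions.

GRAVITY = 1.62   # lunar surface gravity, m/s^2
BRAKING = 3.0    # assumed net deceleration while the rockets fire, m/s^2

def burn_altitude(speed: float) -> float:
    """Altitude at which a constant-deceleration burn stops the descent
    right at the surface: solve v^2 = 2 * a * h for h."""
    return speed * speed / (2.0 * BRAKING)

def time_to_fire(altitude: float, speed: float, dt: float = 0.1) -> float:
    """Step the descent forward in time until the burn must begin."""
    t = 0.0
    while altitude > burn_altitude(speed):
        speed += GRAVITY * dt    # free fall: the lander speeds up
        altitude -= speed * dt   # and loses altitude
        t += dt
    return t

print(f"Fire retro rockets after about {time_to_fire(1000.0, 20.0):.1f} seconds")
```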


Despite more than a half-century of trying, we have yet to invent an algorithm that enables computers to think the way people do. Early on, AI researchers discovered that intelligence depends not just on thinking, but also on knowledge.

Consider chess. In the middle of a game, each player has around 35 legal moves to ponder, and for each of those moves the opponent has 35 or so replies. To choose a move, a player must think ahead several moves into the game. Looking just three moves ahead means considering 42,875 possible continuations; looking seven moves ahead means contemplating roughly 64 billion. The IBM Deep Blue supercomputer that beat world champion Garry Kasparov in 1997 could evaluate 200 million board positions per second, so looking ahead seven moves would take it a little under six minutes, but looking ahead nine moves would take it more than four days. Since chess games typically run to about 50 moves, this brute-force approach of considering every possible continuation clearly won't work.
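To see where those figures come from, here is a quick back-of-the-envelope check using the article's own numbers (a branching factor of about 35 and Deep Blue's roughly 200 million evaluations per second). It is arithmetic only, not a model of how Deep Blue actually searched.

```python
# Back-of-the-envelope check: looking n moves ahead means about 35**n
# continuations, evaluated at roughly 200 million positions per second.

BRANCHING_FACTOR = 35
POSITIONS_PER_SECOND = 200_000_000

for depth in (3, 7, 9):
    positions = BRANCHING_FACTOR ** depth
    seconds = positions / POSITIONS_PER_SECOND
    print(f"{depth} moves ahead: {positions:,} positions, "
          f"{seconds / 60:,.1f} minutes (~{seconds / 86400:.1f} days)")
```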

Chess champions use knowledge of the game to ignore most potential moves that would make no sense to execute. The first AI chess programs used heuristics, or rules of thumb, to decide which moves to spend time considering. In the 1960s, this approach enabled Mac Hack VI, a computer program written by Richard Greenblatt, who was working on Project MAC at MIT, to win against a ranked player in tournament play.
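The following is a minimal sketch of heuristic pruning in general, not Mac Hack VI's actual program: a cheap rule of thumb ranks the legal moves, and only the most promising few are searched. It is written for a toy take-away game rather than chess so it stays self-contained.

```python
# Sketch of heuristic pruning in game search, demonstrated on a toy game:
# take 1, 2, or 3 stones; whoever takes the last stone wins.

from typing import List, Tuple

def legal_moves(stones: int) -> List[int]:
    return [n for n in (1, 2, 3) if n <= stones]

def heuristic(stones_after: int) -> int:
    """Cheap rule of thumb: leaving a multiple of 4 is good for the mover."""
    return 1 if stones_after % 4 == 0 else 0

def search(stones: int, depth: int, beam: int = 2) -> Tuple[int, int]:
    """Return (score, best_move) for the side to move; +1 means a win.
    Only the `beam` highest-ranked moves are examined at each level."""
    if stones == 0:
        return -1, 0              # the previous player took the last stone
    if depth == 0:
        return heuristic(stones), 0
    moves = sorted(legal_moves(stones), key=lambda m: heuristic(stones - m),
                   reverse=True)[:beam]   # heuristic pruning happens here
    best_score, best_move = -2, moves[0]
    for m in moves:
        score = -search(stones - m, depth - 1, beam)[0]
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

print(search(stones=10, depth=6))   # -> (1, 2): take 2, leaving a multiple of 4
```

The pruning is the single `[:beam]` slice: instead of exploring every move, the program spends its effort only on the moves the rule of thumb says are worth considering, which is the essence of what the early chess programs did.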


J.C.R. Licklider, the first director of DARPA's Information Processing Techniques Office, funded the Project on Machine-Aided Cognition at MIT and a similar effort at Stanford to research a range of artificial intelligence topics. DARPA image

As the centrality of knowledge to intelligence became apparent, AI researchers focused on building so-called expert systems. These programs captured the specialized knowledge of experts in rules that they could then apply to situations of interest to generate useful results. If you’ve ever used a program such as TurboTax to prepare your income tax return, you’ve used an expert system. Edward Shortliffe created one of the first expert systems, MYCIN, for his doctoral dissertation at Stanford University in the early 1970s. MYCIN used a set of around 600 rules to diagnose bacterial infections based on input about symptoms and medical tests. It achieved 69 percent accuracy on a set of test cases, which was on par with human experts. Digital Equipment Corporation used an expert system in the early 1980s to configure its computers. Such early successes led to a boom in AI, with the founding of companies such as Teknowledge, Intellicorp, and Inference Corporation. However, it became apparent that expert systems were difficult to update and maintain, and they would give bizarrely wrong answers when confronted with unusual inputs. The hype around AI gave way to disappointment in the late 1980s. Even the term AI fell out of favor and was superseded by terms such as distributed agents, probabilistic reasoning, and neural networks.
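A toy sketch can illustrate the rule-based style of knowledge capture described above. The rules and findings below are invented for illustration and are not drawn from MYCIN's actual rule base; the point is only that expert knowledge is written as if-then rules that the program applies to the reported facts.

```python
# Toy forward-chaining rule engine: invented rules, not MYCIN's.

RULES = [
    # (conditions that must all be present, conclusion to add)
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis", "gram_negative_stain"}, "suspect_e_coli"),
    ({"fever", "cough"}, "suspect_pneumonia"),
]

def forward_chain(findings: set) -> set:
    """Repeatedly fire matching rules until no new conclusions appear."""
    known = set(findings)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known - set(findings)

print(forward_chain({"fever", "stiff_neck", "gram_negative_stain"}))
# -> {'suspect_meningitis', 'suspect_e_coli'}
```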

Language is so fundamental to our daily experience of the world that early researchers assumed they could write down all necessary knowledge to enable an AI system. After all, we program computers by writing commands in programming languages. Surely, language is the ideal tool for capturing knowledge. For example, an expert system for reasoning about animals could define a bird as an animal that can fly. However, there are many exceptions to this rule, such as penguins, ostriches, baby birds, dead birds, birds with one or more broken wings, and birds with their feet frozen in pond ice. Exceptions to rules crop up everywhere and expert systems do not handle them gracefully.
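A small sketch shows why this becomes unwieldy: every exception that turns up forces another edit to the hand-written knowledge. The rule below is hypothetical and deliberately simplistic.

```python
# Hypothetical "birds fly" rule: each newly discovered special case means
# another edit to the exception list or another parameter on the rule.

FLIGHTLESS_SPECIES = {"penguin", "ostrich", "kiwi"}   # grows with each surprise

def can_fly(species: str, is_baby: bool = False, injured: bool = False,
            frozen_in_ice: bool = False) -> bool:
    """Default rule: birds fly -- unless an accumulated exception applies."""
    if species in FLIGHTLESS_SPECIES:
        return False
    if is_baby or injured or frozen_in_ice:
        return False
    return True   # ...until the next exception turns up

print(can_fly("robin"))                # True
print(can_fly("penguin"))              # False
print(can_fly("robin", injured=True))  # False
```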

By the late 1980s, another approach to AI was gaining momentum. Rather than focus on explicitly writing down knowledge, why not try to create machines that learn the way people do? A robot that could learn from people, observations, and experience should be able to get around in the world, stopping to ask for directions or calling for help when necessary. So-called machine-learning approaches try to extract useful knowledge directly from data about the world. Rather than structuring this knowledge as rules, machine-learning systems apply statistical and probabilistic methods to create generalizations from many data points. The resulting systems are not always correct, but then again, neither are people. Being right most of the time is sufficient for many real-world tasks.
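As a rough illustration of the contrast, the sketch below labels new examples by generalizing from data rather than from hand-written rules, using a simple nearest-neighbor method. The measurements and labels are invented for illustration.

```python
# Nearest-neighbor classification: generalize from labeled examples.
# The data set is invented; the point is that no explicit rule is written down.

from math import dist   # Euclidean distance, Python 3.8+

# (wingspan_cm, weight_g) -> species label
EXAMPLES = [
    ((20.0, 30.0), "sparrow"),
    ((23.0, 35.0), "sparrow"),
    ((90.0, 450.0), "crow"),
    ((100.0, 520.0), "crow"),
]

def classify(measurement: tuple) -> str:
    """Label a new measurement with the label of its closest known example."""
    closest = min(EXAMPLES, key=lambda ex: dist(ex[0], measurement))
    return closest[1]

print(classify((25.0, 40.0)))   # -> 'sparrow'
print(classify((95.0, 480.0)))  # -> 'crow'
```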


Neural networks, an effective machine-learning method, emulate the behavior of the brain. The human brain consists of a network of interconnected cells called neurons. Electrical signals flow through this network from the sense organs to the brain, and from the brain to the muscles. The human brain has something like 100 billion neurons, each of which connects, on average, to 7,000 others, creating trillions of connections. Signals traveling through this network arrive at neurons and stimulate (or inhibit) them; when the total stimulation exceeds a neuron's threshold, the cell fires, propagating the signal on to other neurons.
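The artificial counterpart of that thresholded firing can be sketched in a few lines; the weights and threshold below are arbitrary values chosen for illustration.

```python
# A single artificial neuron: sum the weighted inputs and "fire" only when
# the total crosses a threshold. Weights and threshold are illustrative.

def neuron(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of inputs exceeds the threshold."""
    stimulation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if stimulation > threshold else 0

# With these weights the unit behaves like a logical AND of its two inputs.
weights, threshold = [0.6, 0.6], 1.0
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, threshold))
```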
