Defense Media Network

The Intertwined History of DARPA and Moore’s Law

In 1965, the legendary technology pioneer Gordon Moore set us on a 50-year odyssey so consequential that it is defensible to think of our times as the “Microelectronics Age.” In a short paper published that year in Electronics magazine, titled “Cramming more components onto integrated circuits,” Moore predicted a trajectory of progress in which the number of components on an integrated circuit would double at a steady cadence (initially every year, a rate he later revised to every two years) while the cost per component decreased. When the paper first appeared in this niche trade magazine, early readers could not have imagined the impact it would have on the electronics industry. However, from these humble beginnings emerged the line of progress that we today know as Moore’s Law.
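
The arithmetic behind that prediction is simple compound doubling. As a minimal sketch (the one-component baseline and the two-year doubling period here are illustrative assumptions for the arithmetic, not figures from Moore’s paper):

```python
# Minimal sketch (our own illustration, not from Moore's paper) of the
# compound doubling Moore described. The baseline count and doubling
# period are assumptions chosen to make the arithmetic concrete.

def projected_count(base_count, base_year, target_year, doubling_years=2):
    """Project component count under a fixed doubling period."""
    doublings = (target_year - base_year) / doubling_years
    return base_count * 2 ** doublings

# Fifty years of doubling every two years is 25 doublings, a roughly
# 33-million-fold increase over the starting point.
print(int(projected_count(1, 1965, 2015)))  # 2**25 = 33554432
```

Even from a tiny baseline, that exponent is what turned a trade-magazine observation into a half-century industrial roadmap.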

Moore likely did not foresee that his observation would set a course for the investment of hundreds of millions of federal research dollars, and even more from industry. At various points he, along with many leaders in the field from government and industry, predicted the demise of this line of progress. Yet through ingenuity, funding, and partnerships, the prophecy continues to be fulfilled.

DARPA, like so many other research institutions, has turned to Moore’s Law as a means of charting a continued path forward in electronics innovation. For decades, the agency has invested heavily in the advancement of electronics, yielding many industry-changing technologies, while fulfilling Moore’s prophecy.

Since its inception, DARPA often has relied on an open research model that involves pairing with non-defense-oriented partners. Rather than relying on secrecy, which often is required in military research, the investments the agency has made in the fundamentals of semiconductors, as a result of this inclusive and collaborative model, have allowed the country to take the lead in pioneering this technology. We have helped build communities that allow ideas to be rigorously developed, and then perfected and manufactured by industry, generating advancements that have brought both economic and defense gains. Correct navigation of Moore’s Law has been a defining factor for our position of global leadership.

One of DARPA’s earliest investments in the advancement of integrated circuit technology was an ambitious effort called the Very Large Scale Integrated Circuits (VLSI) program. During the 1970s and 1980s, VLSI brought together entire research communities to create significant advances in computer architecture and system design, microelectronics fabrication, and the overall cycle of design, fabrication, testing, and evaluation. These R&D commitments helped overcome early barriers to the transistor-scaling trends that Moore articulated. The progress achieved under VLSI helped propel the field of computing, furthering U.S. military capabilities and enhancing national security, all the while helping to usher in a new era of commercial applications.[1]

Among the technologies resulting from the VLSI program were Reduced Instruction Set Computing (RISC) processors, which have provided the computational power undergirding everything from supercomputers and the NASA Mars Pathfinder to today’s cellphones and mobile devices.[2] Because of the development of RISC processors, the performance of graphics hardware grew 55 percent per year, essentially doubling every 18 months.[3] Although Moore’s observation strictly described the relationship between rising transistor counts and falling cost per transistor, performance improvements quickly became synonymous with transistor scaling and a prime motivator for continuing it.
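
The equivalence between 55 percent annual growth and an 18-month doubling is easy to check. A quick sketch (the variable names are our own; only the 55 percent figure comes from the text above):

```python
# Quick illustrative check (our own, not from the article) that 55 percent
# annual growth compounds to roughly a doubling every 18 months.
import math

annual_growth = 1.55                        # 55 percent per year
eighteen_month_factor = annual_growth ** 1.5
print(round(eighteen_month_factor, 2))      # ~1.93, close to a doubling

# The exact doubling time implied by 55 percent annual growth:
doubling_months = 12 * math.log(2) / math.log(annual_growth)
print(round(doubling_months, 1))            # ~19 months
```

So “doubling every 18 months” slightly rounds the compounding, but the two framings describe essentially the same curve.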

The VLSI program underscored the need for continued collaboration across the U.S. electronics community as well as the role DARPA could play in opening doors for further innovation. To help foster the pursuit of new chip designs, DARPA established the Metal Oxide Silicon Implementation Service (MOSIS) in January 1981. MOSIS provided a fast-turnaround, low-cost capability to fabricate limited batches of custom and semi-custom microelectronic devices. The service opened opportunities to researchers who otherwise would not have had direct access to microelectronics fabrication facilities. Over the course of its more than 35-year run, MOSIS fostered a steady pace of innovation in microelectronics design and manufacturing.


While the U.S. accelerated the pace of microelectronics innovation throughout the 1970s and early 1980s, Japan took the lead in advanced semiconductor production and manufacturing toward the end of the 1980s. To help the United States regain dominance, the Semiconductor Manufacturing Technology (SEMATECH) consortium was founded in 1987 with support and funding from DARPA and the U.S. semiconductor industry. Throughout the decades that followed, the consortium fostered stronger community engagement among manufacturers and suppliers, and significantly enhanced R&D of next-generation production tools and equipment. By 1992, the United States accounted for 82 percent of semiconductor production yields, a recovery that is in part attributable to this cross-community effort.[4]

During the late 1980s and early 1990s, new and evolving military and commercial applications, including advanced weapons systems, networks, and the Global Positioning System (GPS), continued to drive the need for powerful, low-cost microelectronics. The persistent transistor scaling needed to make this happen, of course, required that innovation in semiconductor materials, device integration schemes, and other technical areas continue unabated.

During this time, DARPA funded a program that redefined the state of the art in semiconductor lithography. Working with academia and industry, the program advanced the development of new lens materials and photoresists capable of pushing past technical barriers that had previously limited the technology to 248-nanometer (nm) lithography, and of supporting the new generation of technology produced with 193-nm lithography. These advances in miniaturization and circuit density had a dramatic effect on the semiconductor industry. The new lithography capabilities quickly became mainstream, and industry players used them for advanced commercial and military microelectronics.

Building on the exploration of new materials and integration schemes from the early 1990s, DARPA launched a program to develop transistors beyond the 25-nm-size threshold in 1995. The research efforts completed under the program led to FinFETs (Fin Field Effect Transistors) based on a novel 3-D transistor design that leverages protruding fin-like silicon structures, which allow multiple gates to operate on a single transistor. Today, leading chipmakers continue to use FinFET technology to scale transistors down to 7 nm.

While Moore’s predictions helped chart the course for transistor scaling over the past 50 years, it was the ingenuity and dedication of industry, academia, and government organizations, like DARPA, that brought Moore’s Law to life. DARPA’s investments have helped industry and the Department of Defense (DOD) overcome the barriers of traditional transistor scaling through the discovery of new materials that exceed current limitations and can attain future performance and efficiency requirements. This has only been possible by fostering an environment for collaboration and innovation around novel design schemes and architectures, and by opening pathways for experimentation within the manufacturing and production of microelectronics.

It is because of the intertwined history of commercial and defense support for the semiconductor industry, through programs like VLSI, MOSIS, and SEMATECH, that the United States has enjoyed the distinct advantage of global leadership in microelectronics innovation. As a result, consumer electronics have benefited from components with DOD heritage, such as GPS, while military systems leverage the processing power of leading-edge commercial processors alongside purpose-built integrated circuits.

 

Looking Back to Look Forward: Moore’s Inflection

The U.S. semiconductor industry plays a uniquely outsized role in the U.S. economy, contributing more to the economy than any other major domestic manufacturing sector.[5] Over the last 30 years, the semiconductor industry has grown rapidly, outpacing the U.S. GDP growth rate by more than a factor of six.[6]


Not all good things can go on forever, however. Today, semiconductor technology continues to progress according to Moore’s Law, but that march forward is showing signs of slowing down. In addition to the fundamental technological limits that apply as the size of devices continues to shrink, unintended consequences associated with the economics of continuing down this path are surfacing. Increasing circuit complexity and the associated development costs have kept many commercial and government organizations from participating at the cutting edge of electronics R&D. Today, U.S. electronics development and manufacturing is facing a trio of challenges that threaten the future health of the industry, as well as our military capabilities:

The cost of integrated circuit design is skyrocketing, which is limiting innovation. Only large multinational entities backed by massive commercial demand can innovate and compete in today’s electronics landscape. This severely limits the complexity of circuits that cash-strapped startups and DOD designers can produce.

Foreign investment is distorting the market and driving a shift outside of the United States. China’s plan to invest $150 billion in developing its manufacturing capabilities is luring foreign interest. Even by 2015, China had already begun building 26 new 300-mm semiconductor foundries[7] and had launched 1,300 fabless startups.[8] These global economic forces place a premium on transformative semiconductor invention to stay ahead.

The continued move toward generalization and abstraction is stifling potential gains in hardware. The rising cost of managing the complexity of a modern electronics system – from manufacturing and designing circuits to programming – has led to increased layers of abstraction. The many steps between an invention at the bottom of the stack (in new materials, for example) and the money-making products higher up the computing stack leave little incentive to invest significantly at the bottom. Coupled with the predictable benefits of continued transistor scaling, this has created an ecosystem in which only generalized electronic hardware can be economically successful, and much of the value has moved closer to the application, higher up the software stack. As a result, hardware has become closer to a commodity, and the potential performance gains from specialized hardware are reserved for only select situations.

Moore's Inflection

“Moore’s Inflection” – a point marked by arrows on the diagram where priorities set today will determine whether advances in electronics will begin to slow and stagnate or where new innovations will catalyze another long run of dynamic and flexible technological progress. DARPA image

At a time such as this, it is instructive to go back to the origins of the industry and look to the leaders of the field for clues on how to move forward. Even while setting the course in 1965, Moore himself foresaw the end of scaling. In his seminal paper in which he conveyed the projection we know as Moore’s Law, Moore predicted that economic limitations, in addition to technical and engineering challenges, could eventually become an impediment for scaling. Equally important, on the third page of his article, he predicted that progress in areas that today we know as design automation, materials science, packaging, and architecture specialization could keep the pathway open for increasingly capable electronics.

Based on his observations 50 years ago, Moore accurately predicted the point we are reaching today. In honor of Moore’s ongoing presence in electronics, we at DARPA refer to this point as “Moore’s Inflection” – a point where the priorities we set today will determine whether the state of the electronics ecosystem becomes stagnant, rigid, and traditional, or grows to be dynamic, flexible, and innovative.

 

The Electronics Resurgence Initiative: A Response to Moore’s Inflection

As Moore’s Inflection approaches, the U.S. government has decided to take large-scale action by investing some $1.5 billion over the next five years in the DARPA-led Electronics Resurgence Initiative (ERI). ERI seeks to build a specialized, secure, and heavily automated innovation cycle that will enable the U.S. electronics community to move from an era of generalized hardware to specialized systems.

Building on DARPA’s legacy of electronics invention, ERI aims to foster forward-looking collaborations and novel approaches to usher in this new era of circuit specialization. The large-scale initiative will apply DARPA’s open research model to the future of microelectronics and bring together government, academia, industry, the defense industrial base, and the DOD to create the environment needed for continued and dramatic advancement.

In deference to the guidance provided on page three of Moore’s 1965 paper, ERI seeks to create an ecosystem where smarter design automation tools will be able to directly take logic diagrams and turn them into physical chips without requiring any special engineering intervention in between. This would make it economically feasible to produce small batches of custom circuits – or accelerator cores – designed for specific functions rather than only producing large volumes of general circuits. The ability to construct and interconnect arrays of custom circuits to form larger systems would enable the rapid and highly efficient creation of a considerable variety of unique electronic products.

ERI comprises several DARPA programs – many of which kicked off after the official announcement of the initiative in June 2017 – that focus on three primary research thrusts: architectures, materials and integration, and design. Teams in the design thrust seek to develop an open framework that enables researchers and design teams to apply machine-learning algorithms that can rapidly and automatically translate high-level functions and requirements into physical layouts of custom circuits. To ensure that a variety of custom circuits, materials, and device technologies can be used together to build larger systems, the materials and integration thrust will investigate new interconnect standards and the integration of novel memory and logic circuits. Lastly, the architectures thrust will explore circuit-level coordination and hardware/software co-design methodologies to help create modular and flexible systems able to adapt and optimize combinations of new devices and accelerator cores into systems tailored for any application.

 

Design

“Perhaps newly devised design automation procedures could translate from logic diagram to technological realization without any special engineering.” – Gordon Moore, 1965

 

Although Moore could not have predicted the extent to which his observations on transistor scaling would be stretched, he did understand that the growing number of transistors would eventually create circuits too complex for designers to lay out by hand, and that automation tools would need to be developed. When Moore published his observations, integrated circuits had around 50 transistors; today, that number is around 21 billion.[9] The electronics community began developing electronic design automation (EDA) tools to help automate the layout process as transistor counts continued to climb. As powerful as these tools are at helping designers manage the complexity of laying out billions of transistors, they have not kept pace with physical manufacturing capabilities or with the rise of analog circuits, which are still manually designed. As a result, the size of design teams has exploded, and the need for specialized technical expertise has never been greater.
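
Those two endpoints are themselves a back-check on Moore’s cadence. A short sketch (our own arithmetic; it assumes the figures above of roughly 50 transistors in 1965 and roughly 21 billion “today,” taken here as 2017):

```python
# Illustrative back-check (our own arithmetic) of the implied doubling
# period from the figures above: roughly 50 transistors in 1965 and
# roughly 21 billion in 2017 (the year "today" is an assumption).
import math

doublings = math.log2(21e9 / 50)    # ~28.6 doublings of transistor count
years = 2017 - 1965
print(round(years / doublings, 1))  # ~1.8 years per doubling
```

The implied period of just under two years per doubling is remarkably close to the canonical statement of Moore’s Law.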

The development of modular design methodologies has helped mitigate some of the limitations of EDA. One technique designers employ is to capture frequently used circuit functions in discrete, modular blocks, called intellectual property (IP) blocks, which can be used and reused to create larger, more complex systems. For comparison, in 2000, more than 90 percent of a chip had to be specially designed; today that ratio has reversed, with designers reusing existing IP blocks for more than 90 percent of a chip.[10]

Even with the growing use of IP blocks, however, the rapid rise in the cost to design and verify new hardware has made access to leading-edge electronics prohibitively expensive for all but the largest companies. The Circuit Realization at Faster Timescales (CRAFT) program was conceived to explore solutions to this problem through the use of automated generators that rapidly create new circuits and accelerate the design cycle. Recently, researchers in the CRAFT program demonstrated a design flow that leveraged automated generators to produce digital circuits seven times faster than traditional methods. Put another way, these tools enabled small design teams to be as productive as teams seven times their size.

Maintaining continued forward momentum beyond the imminent Moore’s Inflection will require pushing the limits of machine learning to extend automation into every aspect of circuit design. Two new programs in the ERI Design thrust, inspired by Moore’s prescience, aim to explore machine-centric hardware design flows that can support the physical layout generation of complex electronic circuits with “no human in the loop” and in less than 24 hours. To facilitate the reliable reuse of circuit blocks and to engage the collective brain power of the open-source design community, these efforts will seek to leverage new simulation technologies and applied machine learning to verify and emulate circuit blocks. With enhanced design automation tools like these, the barrier to entry for a growing number of innovators will shrink and thereby unleash an era of unprecedented specialization and capability in electronics technologies.

 

Materials and Integration

“… build large systems out of smaller functions, which are separately packaged and interconnected.” – Gordon Moore, 1965

 

A central challenge to managing modularity is how to properly interconnect the growing number of functional blocks without affecting performance. Since 2000, not only has the number of transistors per chip grown from 42 million to 21 billion,[11] but the number of IP blocks on that same chip has increased more than 10 times as well.[12] In addition, these functional blocks are increasingly a mixture of digital and analog circuits, often made from vastly different materials, further complicating the challenge of integration.

To realize Moore’s vision of building larger functions out of smaller functional blocks, we need to find new ways for various dissimilar blocks to connect and communicate with each other.

Moore’s predictions regarding materials and integration are already being realized in DARPA’s Common Heterogeneous Integration and IP Reuse Strategies (CHIPS) program. This research effort seeks to develop modular chip designs that can be rapidly assembled and reconfigured through the integration of a variety of IP blocks in the form of prefabricated chiplets. These chiplets will leverage standard layouts and interfaces to link together easily. The program recently announced that Intel® will contribute its proprietary interface, and the relevant IP associated with it, to serve as the program’s standard interface. Intel’s direct participation will help ensure that the various IP blocks in the program can be connected seamlessly. This is a major step toward the creation of a national interconnect standard that would enable the rapid assembly of large modular systems.

CHIPS 1

The CHIPS program is pushing for a new microsystem architecture based on the mixing and matching of small, single-function chiplets into chip-sized systems as capable as an entire printed circuit board’s worth of chips and components. DARPA image

What is often left out of the story behind the growing number of transistors is the parallel rise in the number of interconnects required to shuttle data back and forth across the chip. The explosion of wires has not only complicated the design process but has also created longer and more convoluted paths for data to travel. To get a sense of scale, if all the wires in a modern chip were laid end to end, they would span more than 21 miles. For most computing architectures, which separate the central processing unit (CPU) and the memory, moving data across this growing tangle of wires severely limits computational performance. The conundrum even has its own name: the “memory bottleneck.” For instance, when a machine-learning algorithm executes on a leading-edge chip, more than 92 percent of the execution time is spent waiting to access memory.
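
That 92 percent figure explains why faster transistors alone no longer help much. A brief Amdahl’s-law sketch (our own illustration, using only the 92 percent figure from the text above) makes the point:

```python
# Illustrative Amdahl's-law sketch (our own, not from the article): if
# 92 percent of execution time is spent waiting on memory, accelerating
# the computation alone barely improves overall performance.

def overall_speedup(accelerated_fraction, speedup_factor):
    """Amdahl's law for accelerating only part of a workload."""
    return 1.0 / ((1.0 - accelerated_fraction)
                  + accelerated_fraction / speedup_factor)

# Only the 8 percent of time spent computing can be accelerated here,
# even with a 1,000x faster compute engine.
print(round(overall_speedup(0.08, 1000), 2))  # ~1.09x overall
```

However fast the logic becomes, overall performance is capped near a 9 percent gain until the data-movement problem itself is attacked, which is exactly what the rethought memory architectures below aim to do.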

With the vast number of circuit combinations made possible by new standard interfaces and the performance limitations of current interconnects, we must ask the question: What role can new materials and radically new architectures play in addressing these challenges? In response to this question, one of the new programs under the ERI Materials and Integration thrust plans to explore the use of vertical, rather than planar, integration of microsystem components. By leveraging the third dimension, designers can dramatically reduce wire lengths between essential components such as the CPU and memory. Simulations show that 3-D chips fabricated using older 90-nm nodes can perform 50 times better than circuits fabricated on 7-nm nodes using planar integration. Furthermore, another program will investigate new materials coupled with architectures that rethink the flow of data between processors and memory to provide new solutions for processing the growing volume of scientific, sensor, social, environmental, and many other kinds of data.

 

Architectures

“The availability of large functions, combined with functional design and construction, should allow the manufacturer of large systems to design and construct a considerable variety of equipment both rapidly and economically.” – Gordon Moore, 1965

The relentless pace of Moore’s Law ensured that the general-purpose computer would be the dominant architecture for the last 50 years. When compared to performance gains achieved under Moore’s Law, exploring new computer architectures and committing the years of development and hundreds of millions of dollars required to do so just did not make economic sense. As this trend starts to slow down, however, it is becoming harder to squeeze performance out of generalized hardware, setting the stage for a resurgence in specialized architectures.

Imagining what the future would look like, Moore suggested a framework for delivering specialized architectures by focusing on “functional design and construction” that would lead to manufacturable systems that also make economic sense. In other words, he was envisioning flexible architectures that can take advantage of specialized hardware to solve specific computing problems faster and more efficiently.

Last year, DARPA started the Hierarchical Identify Verify Exploit (HIVE) program to explore the optimization of a specialized integrated circuit that could analyze the various relationships between data points in large-scale datasets, such as social media, sensor feeds, and scientific studies. Working with industry partners such as Qualcomm and Intel, the HIVE program aims to develop a specialized integrated circuit capable of processing large-scale data analytics 1,000 times faster than current processing technology. This advanced hardware could have the power to analyze the billion- and trillion-edge datasets that will be generated by the Internet of Things, ever-expanding social networks, and future sensor systems.

While HIVE is an example of current progress, it will take much more innovation to bring Moore’s vision of specialized hardware to fruition. One of the key challenges to employing more specialization is the tension between the flexibility of general-purpose processors and the efficiency of specialized processors. If designers find specialized hardware too difficult to use or program, they are likely to forgo the efficiencies the hardware could deliver.

The two new ERI Architectures programs seek to demonstrate that the trade-off between flexibility and efficiency need not be binary. These programs seek to develop methods for determining the right amount and type of specialization while making a system as programmable and flexible as possible.

One of the programs will investigate reconfigurable computing architectures and software environments that together enable data-intensive applications to approach the performance of specialized, single-application processors without sacrificing versatility or programmability. The resulting capabilities will enable the real-time optimization of computational resources based on introspection of incoming data. The program aims for processing performance 500 to 1,000 times better than state-of-the-art general-purpose processing while maintaining flexibility and programmability.

The second program under the Architectures pillar of ERI will explore methods for combining a massive number of accelerator cores. Although accelerator cores can perform specific functions faster and more efficiently than is possible in software running on a general-purpose processor, programming and coordinating applications on many heterogeneous cores has been a big challenge. One solution is to take a vertical view of the computing stack, which cross-cuts from the application software to the operating system and all the way down to the underlying hardware. By exploring the concept of a domain-driven approach to identify the appropriate accelerators; working on better languages and compilers to optimize code for these accelerators; and implementing intelligent scheduling for the applications running on such a complex processor, this program is looking at a new concept in customized chips that can rapidly utilize myriad accelerators to address multiple applications.

 

Toward a More Innovative Future

The gains that came from Moore’s Law were not guaranteed, but realized through ingenuity and close collaboration between commercial industry, academia, and government. Today, the rising cost to design integrated circuits, increasing foreign investments, and the commodification of hardware threaten the future health of an innovative and dynamic domestic microelectronics community. Facing these challenges, the Electronics Resurgence Initiative will build on the long tradition of successful government-industry partnerships to foster the environment needed for the next wave of U.S. semiconductor innovation.


This article was originally published in DARPA: Defense Advanced Research Projects Agency 1958-2018, by Faircount, LLC.


  1. National Research Council. 1999. Funding a Revolution: Government Support for Computing Research. Washington, DC: The National Academies Press. https://doi.org/10.17226/6323.
  2. National Research Council. 1999. Funding a Revolution: Government Support for Computing Research. Washington, DC: The National Academies Press. https://doi.org/10.17226/6323.
  3. IBM. RISC Architecture. http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/risc/
  4. National Research Council. 1999. Funding a Revolution: Government Support for Computing Research. Washington, DC: The National Academies Press. https://doi.org/10.17226/6323.
  5. SIA, “The U.S. Semiconductor Industry: A Key Contributor to U.S. Economic Growth,” 2014.
  6. McKinsey industry analysis
  7. EE Times, “China Fab Boom Fuels Equipment Spending Revival,” March 2017. https://www.eetimes.com/document.asp?doc_id=1331492
  8. EE Times, “Much Ado About China’s Big IC Surge,” June 2017. https://www.eetimes.com/document.asp?doc_id=1331928
  9. PC World, “Nvidia’s monstrous Volta GPU appears, packed with 21 billion transistors and 5,120 cores,” May 2017. https://www.pcworld.com/article/3196026/components-graphics/nvidias-monstrous-volta-gpu-appears-packed-with-21-billion-transistors-and-5120-cores.html
  10. SEMICO Research Corporation, 2014
  11. The Economist, “Technology Quarterly: After Moore’s Law,” https://www.economist.com/technology-quarterly/2016-03-12/after-moores-law
  12. SEMICO Research Corporation, 2014