National Security-Related Applications of Artificial Intelligence

JULY 10, 2018

This report is part of the Center for a New American Security’s series on Artificial Intelligence and International Security, written by Michael Horowitz, Paul Scharre, Gregory C. Allen, Kara Frederick, Anthony Cho, and Edoardo Saravalle.

Introduction

There are a number of direct applications of AI relevant for national security purposes, both in the United States and elsewhere. Kevin Kelly notes that in the private sector “the business plans of the next 10,000 startups are easy to forecast: Take X and add AI.”1 There is similarly a broad range of applications for AI in national security. Included below are some examples in cybersecurity, information security, economic and financial tools of statecraft, defense, intelligence, homeland security, diplomacy, and development. This is not intended as a comprehensive list of all possible uses of AI in these fields. Rather, these are merely intended as illustrative examples to help those in the national security community begin to think through some uses of this evolving technology. (The next section covers how broader AI-driven economic and societal changes could affect international security.)

Cybersecurity

The cyber domain is a prominent potential arena for AI applications, as senior leaders have noted in recent years. In October 2016, National Security Agency (NSA) Director Michael Rogers stated that the agency sees AI as “foundational to the future of cybersecurity.” Rogers’ remarks came only two months after DARPA held its first Cyber Grand Challenge, a head-to-head competition between autonomous machines in cyberspace. Each system was capable of automatically discovering and exploiting cyber vulnerabilities in its opponents while patching its own vulnerabilities and defending itself from external cyberattacks.2 Impressed with the tournament’s results, DoD began a new program, Project Voltron, to develop and deploy autonomous cybersecurity systems that scan and patch vulnerabilities throughout the U.S. military.3

Even as DoD has begun to implement this technology, potential applications of AI for cybersecurity continue to evolve. The systems in the first Cyber Grand Challenge used rule-based programming and did not make significant use of machine learning. Were a similar competition to be held today, machine learning would likely play a much larger role. Below are several illustrative applications of machine learning in the cybersecurity domain that could be especially impactful for the international security environment.

Increased Automation and Reduced Labor Requirements

Cyber surveillance has tended to be less labor-intensive than the traditional human surveillance methods it has augmented or replaced. The increased use of machine learning could accelerate this trend, potentially putting sophisticated cyber capabilities that would normally require corporation- or nation-state-level resources within the reach of smaller organizations or even individuals.4 Already there are countless examples of relatively unsophisticated programmers, so-called “script kiddies,” who are not skilled enough to develop their own cyberattack programs but can effectively mix, match, and execute code developed by others. Narrow AI will increase the capabilities available to such actors, lowering the bar for attacks by individuals and non-state groups and increasing the scale of potential attacks for all actors.

Using AI to Discover New Cyber Vulnerabilities and Attack Vectors

Researchers at Microsoft5 and Pacific Northwest National Laboratory6 have already demonstrated techniques that use neural networks and generative adversarial networks to automatically produce malicious inputs and determine which are most likely to lead to the discovery of security vulnerabilities. Traditionally, such inputs are generated by randomly modifying (“fuzzing”) non-malicious inputs, which makes identifying the mutations most likely to uncover new vulnerabilities inefficient and labor-intensive. The machine learning approach lets the system learn from prior experience to predict which locations in files are most susceptible to different types of fuzzing mutations, and hence which malicious inputs to try. This approach will be useful in both cyber defense (detecting and protecting) and cyber offense (detecting and exploiting).
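The core idea can be sketched at toy scale: record which input positions have historically triggered failures and bias future mutations toward them. The sketch below is a deliberately simplistic, frequency-based stand-in for the neural-network guidance the researchers describe; the target parser and its bug are invented for illustration.

```python
import random
from collections import Counter

def toy_parser(data: bytes) -> None:
    # Hypothetical target: "crashes" when a length byte (index 3)
    # claims more data than the buffer actually holds.
    if len(data) >= 4 and data[0] == 0x7F and data[3] > len(data):
        raise ValueError("buffer overrun")

def fuzz(seed: bytes, rounds: int = 2000, warmup: int = 200) -> Counter:
    """Mutate one byte per round; after a warm-up phase, bias mutation
    toward positions whose past mutations triggered failures."""
    hits = Counter()  # per-position failure counts: the learned "prior"
    random.seed(0)    # deterministic for illustration
    for i in range(rounds):
        data = bytearray(seed)
        if i >= warmup and hits:
            # Guided phase: sample a position weighted by past successes.
            pos = random.choices(list(hits), weights=list(hits.values()))[0]
        else:
            # Unguided phase: mutate a uniformly random position.
            pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
        try:
            toy_parser(bytes(data))
        except ValueError:
            hits[pos] += 1
    return hits
```

Running `fuzz(b"\x7f\x00\x00\x00abcd")` concentrates almost all mutation effort on byte 3, the vulnerable length field, after the warm-up rounds discover it.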

Automated Red-teaming and Software Verification and Validation 

While new vulnerability discovery understandably attracts attention, many cyber attacks exploit older, well-known vulnerabilities that system designers have simply failed to secure. SQL injection, for example, is a decades-old attack technique to which many new software systems still fall prey. AI technology could be used to develop new verification and validation systems that automatically test software for known cyber vulnerabilities before it is operationally deployed. DARPA has several promising research projects seeking to use AI for this function.
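At its simplest, automated checking for a known vulnerability class can be rule-based. The sketch below is a minimal, hypothetical example: it walks a Python syntax tree and flags `execute()` calls whose SQL is assembled by string concatenation or f-string formatting rather than passed as a constant with bound parameters. It is far cruder than the AI-driven verification systems described above, but it shows the kind of property such systems test for.

```python
import ast

# Query expressions built at runtime: "..." + var, "..." % var (both
# BinOp nodes), or f"..." formatting (JoinedStr). These patterns admit
# SQL injection; a constant query with bound parameters does not.
RISKY = (ast.BinOp, ast.JoinedStr)

def find_sql_injection_risks(source: str) -> list[int]:
    """Return line numbers of .execute() calls whose query argument
    is dynamically assembled rather than a constant string."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                and isinstance(node.args[0], RISKY)):
            findings.append(node.lineno)
    return findings

code = '''
cur.execute("SELECT * FROM users WHERE id = " + user_id)
cur.execute("SELECT * FROM users WHERE id = ?", (user_id,))
'''
print(find_sql_injection_risks(code))  # → [2]: only the concatenated query
```

A real verification system would combine many such checks with data-flow analysis; the point here is that the vulnerable pattern is mechanically recognizable before deployment.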

Automated Customized Social Engineering Attacks

Many major cybersecurity failures began with “social engineering,” wherein the attacker manipulates a user into compromising their own security. Email phishing to trick users into revealing their passwords is a well-known example. The most effective phishing attacks are human-customized to target the specific victim (aka spear-phishing attacks) – for instance, by impersonating their coworkers, family members, or specific online services that they use. AI technology offers the potential to automate this target customization, matching targeting data to the phishing message and thereby increasing the effectiveness of social engineering attacks.7 Moreover, AI systems with the ability to create realistic, low-cost audio and video forgeries (discussed more below) will expand the phishing attack space from email to other communication domains, such as phone calls and video conferencing.8

Information Security

AI’s role in the shifting threat landscape has serious implications for information security, reflecting the technology’s broader impact, through bots and related systems, on the information ecosystem. AI can both exacerbate and mitigate the effects of disinformation. As with cyber attacks, AI provides mechanisms to narrowly tailor propaganda to a targeted audience and to disseminate it at scale, heightening its efficacy and reach. Conversely, natural language understanding and other forms of machine learning can train computer models to detect and filter propaganda content and its amplifiers. Yet too often the ability to create and spread disinformation outpaces the AI-driven tools that detect it.
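On the detection side, the basic machine-learning approach is text classification: learn word statistics from labeled examples, then score new messages. The sketch below is a minimal bag-of-words naive Bayes classifier on invented toy data, orders of magnitude simpler than deployed systems, but it illustrates how a model can be trained to flag propaganda-like content.

```python
import math
from collections import Counter

class NaiveBayes:
    """Minimal bag-of-words naive Bayes text classifier (toy scale)."""

    def fit(self, texts, labels):
        self.counts = {}             # label -> Counter of word frequencies
        self.priors = Counter(labels)
        for text, label in zip(texts, labels):
            self.counts.setdefault(label, Counter()).update(text.lower().split())
        self.vocab = set().union(*self.counts.values())
        return self

    def predict(self, text):
        scores = {}
        n = sum(self.priors.values())
        for label, words in self.counts.items():
            total = sum(words.values())
            score = math.log(self.priors[label] / n)
            for w in text.lower().split():
                # Laplace smoothing so unseen words don't zero out the score.
                score += math.log((words[w] + 1) / (total + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

# Invented toy training data; real systems train on large labeled corpora.
texts = [
    "share this shocking truth they do not want you to see",
    "wake up and share before they delete this",
    "had a great lunch with friends today",
    "really enjoyed the game last night",
]
labels = ["propaganda", "propaganda", "organic", "organic"]
model = NaiveBayes().fit(texts, labels)
```

With this toy model, `model.predict("share this before they delete it")` returns `"propaganda"`, because those words appear far more often in the propaganda-labeled examples. The same scoring machinery, scaled up, underlies many content-filtering systems.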
