
Attackers Have the Advantage in the Artificial Intelligence Arms Race
5 minute read

This article originally appeared in VentureBeat. Phil Tully is speaking on this subject at RSA 2018.

The real-world applications of artificial intelligence (AI) have grown so quickly and become so widespread that it’s hard to get through everyday routines, such as driving or messaging friends, without witnessing their impact. The same holds true in the world of cybersecurity, with both attackers and defenders looking to AI to gain an upper hand. Its rise coincides with the proliferation of data itself, and as we increasingly lean on AI to make sense out of this new data-centric world, we also need to understand its security implications.

The Data Labeling Bottleneck

For decades, defenders have fought off attacks by detecting signatures, or specific patterns indicating malicious activity. This bottom-up approach was reactive; new attacks required new signatures to be deployed, so attackers were always one step ahead in the digital dogfight. Next-gen, AI-based solutions addressed this by taking a top-down approach and feeding large datasets of activity into statistical models. This transition from signatures to statistics meant that defenses could be proactive and generalize better to new attacks.

AI-fueled defense has flourished and is now commonly applied to classic problems like spam filtering, malicious file or URL detection and more. These models typically rely on supervised machine learning algorithms, which learn a function mapping inputs (e.g. domain names like “http://google.com” or “http://g00glephishpage.com”) to outputs (e.g. labels like “benign” or “malicious”). While supervised learning maps cleanly onto the defender’s need to distinguish benign from malicious, it’s also expensive and time-consuming to implement due to its dependence on pre-existing labels. Coveted labels are hard to come by, as they require upfront effort, demand domain expertise and can’t always be repurposed elsewhere, meaning there’s a fundamental “data labeling bottleneck” involved in building effective AI-based defense.
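
To make that input-to-label mapping concrete, here is a minimal, hypothetical sketch of a supervised URL classifier. The tiny toy dataset, the character n-gram features and the model choice are illustrative assumptions, not any vendor’s actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The labeled examples below are the expensive part: every "benign" or
# "malicious" tag is a product of the data labeling bottleneck.
urls = [
    "http://google.com", "http://g00glephishpage.com",
    "http://wikipedia.org", "http://paypa1-login-verify.example",
]
labels = ["benign", "malicious", "benign", "malicious"]

# Character n-grams pick up lookalike tricks such as "o" -> "0".
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(urls, labels)

print(model.predict(["http://faceb00k-login.example"]))  # toy prediction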

The Structural Dominance of Offensive AI

AI-based defense suffers from other exploitable weaknesses. Because the accuracy of a model is governed by the fidelity of its labels, models can be poisoned when they are trained on datasets injected with purposefully corrupted labels, allowing an attacker to construct specific samples that bypass detection. Other models are systematically vulnerable to slightly perturbed inputs that cause them to produce errors with embarrassingly high confidence. So-called adversarial examples are best illustrated by physical attacks like placing stickers on stop signs to fool the object recognizers used in self-driving cars, and implanting hidden voice commands to fool the speech recognizers used in smart speakers into calling the police.
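
As a rough, self-contained illustration of the second weakness (not an attack described in this article), the sketch below nudges a feature vector against the weights of a stand-in linear detector so that a confidently flagged sample slides back across the decision boundary.

```python
import numpy as np

# Stand-in linear "detector": random weights, for illustration only.
rng = np.random.default_rng(0)
w = rng.normal(size=20)

def p_malicious(x):
    """Probability the stand-in detector assigns to the 'malicious' class."""
    return 1.0 / (1.0 + np.exp(-w @ x))

# A sample the detector confidently flags as malicious.
x = 0.6 * w / np.linalg.norm(w)

# Fast-gradient-style evasion: shift every feature a small step in the
# direction that lowers the malicious score (opposite the weight signs).
epsilon = 0.25
x_adv = x - epsilon * np.sign(w)

print(p_malicious(x), p_malicious(x_adv))  # high score, then a much lower one
```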

While these examples might hit close to home for the average citizen, for cybersecurity professionals similar errors could mean the difference between a breach and a promotion. Attackers are increasingly turning to automation, and they’ll soon turn to AI themselves to exploit such weaknesses. In short, offensive “red team” attackers can benefit from data just as well as “blue team” defenders.

There’s a growing list of theoretical, AI-based red team workflows around spear phishing, password cracking, CAPTCHA subversion, steganography, Tor de-anonymization and antivirus evasion. In each simulation, readily accessible data is leveraged to launch an attack, demonstrating that attackers can sidestep the data labeling bottleneck that constrains defenders, which makes AI-based attacks easier to pull off than their defensive counterparts.

At first glance, this may seem like history repeating itself. Attackers have always enjoyed an advantage simply because of what’s at stake: blue teams only truly win when detection approaches 100 percent success, whereas red teams win even when they succeed only one time out of 100.

What’s different this time is a broader industry trend that, unfortunately, benefits the red team. One of the reasons so much progress has been made on problems like image recognition is that its researchers are rewarded for collaboration. On the other hand, cybersecurity researchers are often constrained because their data is too sensitive or even illegal to share, or it’s viewed as intellectual property: the secret sauce that gives vendors a leg up in the fiercely competitive cybersecurity market. Attackers can exploit this fragmented landscape and lack of data sharing to outpace defense.

Exacerbating this asymmetry, it’s only a matter of time before the barrier to entry for applied AI retreats from PhD dissertations to high school classrooms. Free educational resources, available datasets and pre-trained models, access to powerful cloud-based computing resources like GPUs and open source software libraries all lower the bar for AI newcomers, and therefore would-be attackers. Deep learning itself is actually more user-friendly than older paradigms, and in many cases, produces state-of-the-art accuracy without the expert hand-engineering previously required.

We’re in the Calm Before the Storm, but Nobody Brought Their Raincoat

Given these realities, “a dollar of offense beats a dollar of defense” certainly appears to hold true for the malicious use of AI. For now, good old-fashioned manual attacks still reign, and there’s been no credible evidence documenting an AI-based attack in the wild. However, it is in this precise moment that we should consider how to reduce the data labeling bottleneck and lessen the potential for future impacts.

While the odds may be stacked against them, defenders do have tools available to help them reduce the cost and time of labeling data. Crowdsourced labeling services provide a cheap, on-demand workforce whose consensus can approach the accuracy of human experts. Other important tricks of the trade include speeding up the deployment of AI-based defense through:

  • active learning, where comparatively slow human experts label only the most informative data (a brief sketch follows this list);
  • semi-supervised learning, where models trained on limited labeled data learn the problem structure from available unlabeled data; and
  • transfer learning, where models previously trained for a problem with copious available labeled data are tailored for a new problem with limited labeled data.
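
As a concrete but purely illustrative sketch of the first technique, the snippet below ranks unlabeled samples by model uncertainty so that human experts label only the points the current model is least sure about. The synthetic dataset and logistic regression model are stand-in assumptions, not a real detection pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data: 2,000 samples, only 20 of which have labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = np.arange(20)
unlabeled = np.arange(20, len(X))

model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])

# Uncertainty sampling: rank unlabeled points by how close their predicted
# probability is to 0.5 and send the top few to an analyst for labeling.
proba = model.predict_proba(X[unlabeled])[:, 1]
most_informative = unlabeled[np.argsort(np.abs(proba - 0.5))[:10]]
print("ask a human expert to label:", most_informative)
```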

Lastly, the best defense is a good offense. If done with care, adversarial samples can be produced that harden AI-based defense, meaning defenders can preemptively wage attacks against their own models to help plug up any holes.
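
A hedged sketch of that idea, again with stand-in data rather than any production system: craft gradient-sign adversarial copies of your own training set, then retrain on the union so the resulting model is harder to evade.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a simple stand-in detector on synthetic data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Attack your own model: shift each sample a small step in the direction
# that pushes it toward the opposite class (sign of the weight vector).
eps = 0.3
w = model.coef_[0]
X_adv = X - eps * np.sign(w) * np.where(y == 1, 1, -1)[:, None]

# Retrain on the original data plus the adversarial copies (labels unchanged),
# hardening the model against this style of perturbation.
hardened = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, X_adv]), np.concatenate([y, y])
)
```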

While the data labeling bottleneck gives AI-based attacks a tactical advantage, defenders can and should take steps now to even the playing field before these threats are unleashed.
