Why Deepfakes Are Surfacing as the Newest Security Risk

Deepfakes: Unless you’ve been off social media for the last few months, you’ve probably heard this term - or even seen a deepfake video posted online yourself. From viral, entertaining clips of Bill Hader morphing into Arnold Schwarzenegger to speeches Barack Obama never gave, deepfakes and other AI-based techniques for the low-cost creation of fake videos represent a new era of digital threats, one in which seeing is no longer believing. These videos have become cheaper and easier to make, and with algorithms that favor visual, dynamic content, they tend to go viral quickly.

The term ‘deepfake’ has been used broadly to describe nearly any type of edited video online - from Nancy Pelosi’s slowed speech to a mash-up of Steve Buscemi and Jennifer Lawrence. But by definition, the term is much more specific. According to TechTarget, “Deepfake is an AI-based technology used to produce or alter video content so that it presents something that didn't, in fact, occur.” By that definition, the video of Nancy Pelosi does not actually qualify as a deepfake but rather as a simply altered video, sometimes called a “shallow fake” - though it is still problematic. Manipulated, altered, or entirely false videos all pose the same core risk: disinformation.

The latest viral deepfake

Just this week we saw another example - a viral, manipulated video, originating on Instagram, spreading across social media. But the irony of this particular video was its subject: Mark Zuckerberg, the face (no pun intended) of Facebook and Instagram. Compared to others, this video was not sophisticated: it is clear that the voiceover is not Zuckerberg’s actual voice, but the quality is still convincing enough to fool the untrained eye. In the doctored clip, Zuckerberg appears to boast about his power over billions of people’s data.

See for yourself:

The original post on Instagram is still live and can be seen here. Except he never said any of that - the footage is actually from a 2017 interview about Russian election hacking.

Social and digital platforms grapple with deepfakes

This video, as with others before it, has since spread across social and digital platforms, leaving sites to grapple with what action to take. There are entire Reddit threads, Tumblr accounts, and code-sharing sites like GitHub dedicated to sharing deepfakes, as well as AI apps designed specifically to help users create their own deepfakes quickly and cheaply. In the past, these platforms have been hesitant to remove manipulated footage and deepfakes, and they have differed in their approaches to handling these videos. In the case of the doctored Nancy Pelosi video, YouTube quickly removed it while it remained up on Instagram, Facebook, and Twitter. That same Steve Buscemi video was shared across Twitter and YouTube, even garnering a mention from Elon Musk:

As the volume of these videos and the risks they pose continue to grow, however, social and digital platforms may soon have no choice but to address the issue directly. Beyond simple falsehoods, the implications of deepfakes go much further because of what they represent.

In an era of fake news and misinformation, these videos breed a disillusionment in which even what you can see right in front of you may be false. With the 2020 elections approaching, deepfakes could serve as a new method for distributing misinformation, exerting false influence, and targeting individual candidates and parties. And while deepfakes have real implications for political discourse globally, they are not just a risk for governments. As deepfakes become cheap to produce, they pose impersonation-related risks that companies of every industry, geography, and size will have to contend with.

Congress’s call to action

The rise of deepfakes, and the fear they provoke, has raised questions over who should be held responsible for the dissemination of these videos. Is it the video’s creator? Those who share it? Or the platforms it is shared on? As of this week, some members of Congress want to hold the social networks responsible for their role in deepfake virality: the House of Representatives just held its first hearing focused specifically on the national security threats posed by deepfake technology. One proposal would amend Section 230 of the Communications Decency Act to hold social and digital platforms responsible for the content posted on their sites.

Some states are already taking action against deceptive videos. On June 14, Texas signed Senate Bill 751 into law, focused specifically on “creating a criminal offense for fabricating a deceptive video with intent to influence the outcome of an election.” In New York, Bill A08155, introduced on May 31, would criminalize the knowing creation of digital videos, photos, and audio of others without their consent.

If Congress chooses to take action, it would represent a major shift for the social networks, which have previously taken a more passive stance, placing responsibility on the individual poster and sharer to determine whether content is fraudulent or malicious. In the meantime, Facebook is applying its own policy consistently, leaving the Zuckerberg video live on Instagram...for now.

Learn about the implications of deepfakes at Black Hat

At Black Hat this year, ZeroFox CTO Mike Price and Principal Research Engineer Matt Price will discuss how deepfakes can be leveraged for both offensive and defensive purposes. Their session, “Playing Offense and Defense with Deepfakes,” will demonstrate how deepfakes are built, with a fine-grained, step-by-step breakdown of the creation process, including details of all the deep learning models involved. A test deepfake video will be created, followed by a malicious deepfake video in which an international politician is impersonated. That video will then be temporarily circulated in the wild as a means of measuring impact.
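For readers curious how face-swap deepfakes are typically structured under the hood, the classic recipe - a shared encoder paired with one decoder per identity - can be sketched in a few lines. This is a toy illustration with random, untrained weights, not the models the presenters will cover; all class and function names here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Toy fully connected layer: a random weight matrix (untrained)
    return rng.standard_normal((n_in, n_out)) * 0.01

class ToyFaceSwapper:
    """Conceptual sketch of the shared-encoder / twin-decoder deepfake setup.

    One encoder learns identity-agnostic facial structure (pose, expression);
    a separate decoder per identity renders that structure as a specific face.
    Swapping is simply encoding a frame of person A and decoding it with
    person B's decoder.
    """
    def __init__(self, face_dim=64 * 64, latent_dim=128):
        self.encoder = layer(face_dim, latent_dim)
        self.decoder_a = layer(latent_dim, face_dim)
        self.decoder_b = layer(latent_dim, face_dim)

    def encode(self, face):
        return np.tanh(face @ self.encoder)

    def swap(self, face_of_a):
        # The core deepfake trick: A's expression, rendered with B's face
        return self.encode(face_of_a) @ self.decoder_b

model = ToyFaceSwapper()
frame = rng.standard_normal(64 * 64)  # stand-in for an aligned face crop
swapped = model.swap(frame)
print(swapped.shape)  # (4096,)
```

In practice both decoders are trained against the same encoder on thousands of aligned face crops, which is what forces the latent space to become identity-agnostic.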

The offense portion of the presentation is followed by a defense portion, which will survey contemporary techniques for detecting deepfake videos and introduce a novel detection approach. Finally, a tool for offensive and defensive research will be announced and released at the time of the presentation.
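As one concrete flavor of detection (not necessarily the approach the session will introduce), an early published heuristic exploited the fact that first-generation deepfake models, trained mostly on open-eyed photos, produced subjects who rarely blinked. Given a per-frame eye-openness measurement, an abnormally low blink rate becomes a weak forgery signal. The function name and thresholds below are illustrative assumptions:

```python
def blink_rate_anomaly(eye_aspect_ratios, fps=30.0,
                       closed_threshold=0.2, min_blinks_per_minute=6.0):
    """Flag a clip whose subject blinks implausibly rarely.

    `eye_aspect_ratios` holds one eye-openness value per frame
    (small values = eye closed). Returns (suspicious, blinks_per_minute).
    """
    blinks = 0
    eye_closed = False
    for ear in eye_aspect_ratios:
        if ear < closed_threshold and not eye_closed:
            blinks += 1          # falling edge: eye just closed
            eye_closed = True
        elif ear >= closed_threshold:
            eye_closed = False
    minutes = len(eye_aspect_ratios) / fps / 60.0
    blinks_per_minute = blinks / minutes if minutes else 0.0
    return blinks_per_minute < min_blinks_per_minute, blinks_per_minute

# A 10-second clip (300 frames at 30 fps) containing two blinks:
# ~12 blinks per minute, within the normal human range.
ears = [0.3] * 300
for start in (50, 200):
    for i in range(start, start + 5):
        ears[i] = 0.1
suspicious, rate = blink_rate_anomaly(ears)
print(suspicious, rate)  # False 12.0
```

Signals like this are brittle on their own - newer generators blink convincingly - which is why modern detectors combine many such cues with learned classifiers.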

Attending Black Hat? Learn more about the session here.
