With a political election on the horizon in the United States and elections just now wrapping up across Europe, people flock to social media and online forums to show support, debate, and find out more about specific candidates. And for good reason. Social networks can be great tools for real-time news and alerts in a syndicated, concise format. But with the speed and virality at which content is posted and shared on social sites, it is difficult for the average user to distinguish legitimate news from misinformation, which unfortunately has become inevitable on these platforms.
The Implications of Deepfakes
As the content posted to social networks and forums has become increasingly dynamic and visual, we have entered a new era of misinformation where images, videos, and audio streams can be altered to trick an untrained eye. In recent months, digitally manipulated videos, known as “deepfakes,” have risen in popularity.
Deepfakes, by definition, use AI-based techniques to produce or alter video and/or audio content so that it presents something that didn’t, in fact, occur. Whether coupled with audio manipulation, which has been demonstrated in research projects, or used without audio, deepfake videos can serve to confuse viewers into believing that events occurred that never happened, that politicians gave a speech they never gave, or that high-profile executives said things they never said. The implications of these kinds of videos are obvious: they can weaken the viewers’ trust in the person, event, political campaign or brand they target. Worse, they may mislead viewers into false conclusions – favorable or unfavorable – about a politician’s views, a company’s direction, or an executive’s commitment to a cause.
Deepfakes represent real risks – the consequences of a fake merger announcement, an ethical transgression, or a racist remark can spread virally before the falsehood is identified. One incident can harm a company’s valuation, brand, and goodwill in a heartbeat, or sabotage a political candidate’s good name. With such low barriers to entry, the number of deepfake videos will likely continue to scale to the point that human detection is simply not sufficient. Thus, effective automated detection capabilities must emerge and improve rapidly.
Deepfakes at Black Hat
At Black Hat this year, ZeroFox CTO Mike Price and Principal Research Engineer Matt Price are discussing deepfakes, the risks they pose, and the state of detection solutions. In the session, they review public examples of the problem, including manipulated videos of well-known political and public figures, and discuss how easily these videos can be developed and delivered. They will introduce detection solutions to aid in addressing the risks deepfakes pose. Finally, they are announcing the release of an open source tool called Deepstar, designed to aid research into deepfake detection algorithms.
Introducing Deepstar – ZeroFox’s Open Source Contribution to Deepfake Detection
ZeroFox developed Deepstar, a new open source toolkit, based on its research into deepfake videos and the difficulty of quickly developing and enhancing detection capabilities. We believe it will help ZeroFox and others in the community build, test, and enhance techniques for detecting deepfakes.
The toolkit includes code for building deepfake datasets and for testing and enhancing detection algorithms, along with a curated library of deepfake and real videos from YouTube. It automates several of the labor-intensive tasks involved: it can grab video from content sites, extract the frames needed to train or retrain deep learning classifiers, execute the required transforms on those frames (such as face extraction), and automate the testing and scoring of new detection models, to name a few capabilities.
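To illustrate the kind of frame-extraction step such a pipeline automates, here is a minimal sketch (the function name and parameters are hypothetical, not Deepstar’s actual API) that computes which frame indices to sample from a video given its frame rate and a desired sampling interval:

```python
def sample_frame_indices(total_frames: int, fps: float,
                         every_n_seconds: float) -> list:
    """Return the frame indices to extract when sampling a video
    once every `every_n_seconds` seconds.

    Hypothetical helper for illustration; a real pipeline would pass
    these indices to a video reader (e.g. OpenCV's VideoCapture) to
    pull the frames used to train or retrain a classifier.
    """
    if total_frames <= 0 or fps <= 0 or every_n_seconds <= 0:
        raise ValueError("total_frames, fps, and interval must be positive")
    # Convert the time interval into a stride measured in frames,
    # sampling at least one frame per step.
    step = max(1, round(fps * every_n_seconds))
    return list(range(0, total_frames, step))

# Example: a 4-second clip at 25 fps, sampled once per second.
print(sample_frame_indices(100, 25.0, 1.0))  # → [0, 25, 50, 75]
```

Downstream transforms such as face extraction would then run on only these sampled frames, which keeps training-set preparation tractable for long videos.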
The toolkit incorporates a plug-in framework, enabling researchers to easily test or retrain different classifiers and compare their performance. ZeroFox is also contributing a deep learning classifier we created that focuses detection on the mouth of the subject in the video, along with an implementation of MesoNet, another open source detection algorithm.
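The general shape of such a plug-in framework can be sketched as follows. This is a hypothetical illustration of the pattern, not Deepstar’s actual interface: classifiers register themselves under a name, and the framework can then run any registered classifier over a set of frames and report an average fake-likelihood score:

```python
from typing import Callable, Dict, List

# Registry mapping plug-in names to classifier functions.
# Each classifier takes one frame and returns a score in [0, 1],
# where 0.0 means "real" and 1.0 means "fake".
CLASSIFIERS: Dict[str, Callable[[bytes], float]] = {}

def register(name: str):
    """Decorator that registers a classifier plug-in by name."""
    def wrap(fn: Callable[[bytes], float]) -> Callable[[bytes], float]:
        CLASSIFIERS[name] = fn
        return fn
    return wrap

@register("baseline")
def baseline(frame: bytes) -> float:
    # Trivial baseline plug-in: labels every frame as real.
    return 0.0

def score_video(frames: List[bytes], classifier_name: str) -> float:
    """Average a registered classifier's per-frame scores over a video."""
    clf = CLASSIFIERS[classifier_name]
    scores = [clf(frame) for frame in frames]
    return sum(scores) / len(scores)

print(score_video([b"frame1", b"frame2"], "baseline"))  # → 0.0
```

Because every plug-in shares the same interface, a test harness can score several classifiers against the same labeled video library and compare their accuracy directly.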
Our goal in sharing the Deepstar toolkit and library is to help defenders get ahead of this problem. Our hope is that the larger community will create and add new plug-ins and contribute to the video library so that the community can continue to elevate the state of our defenses.
On YouTube alone, over 300 hours of video are uploaded every minute, which makes human analysis nearly impossible. Tooling such as Deepstar is a valuable asset for tackling the emerging threat posed by deepfakes and for enhancing the accuracy and scale at which detection capabilities operate. We sincerely hope the community will come together around this project and help continuously enhance deepfake detection.
Interested in exploring the toolkit yourself? Learn more here.
Advancing ZeroFox’s AI Capabilities with Video Analysis
On the heels of the Deepstar release and new object detection of weapons and credit cards, ZeroFox will be introducing new video analysis capabilities to detect deepfakes within the ZeroFox Platform. Expanding current text and image analysis tools, these new capabilities will analyze videos for specific threats to ZeroFox customers, triggering an alert when a threat is detected. ZeroFox customers interested in adding video analysis to their ZeroFox instance can reach out to their account manager for more information.
See us at Black Hat
If you’re at Black Hat, stop by ZeroFox booth #254 to see the full suite of platform capabilities, including detecting threatening objects such as credit cards and weapons, OCR and text analysis, phishing detection and more.