How to Create a Deepfake Incident Response Plan: A Practical Framework for Security Teams
by ZeroFox Team

Deepfake technology has turned what was once science fiction into just another Tuesday morning at the office.
Thomas Hoskin, Director of Product Management at ZeroFox, gives an example: "You can no longer join a Zoom call and 100% trust that everyone you are speaking to is who they seem to be, just because you see their face and hear their voice."
Arup, a global design and engineering firm known for iconic structures like the Sydney Opera House and the Pompidou Centre, learned this the hard way when their chief financial officer appeared to order substantial sums to be transferred during a video conference. Seeing and hearing the CFO speak, along with the virtual presence of other managers, convinced the targeted employee that everything was normal. But the CFO had never made the call, and the other members of staff weren't real either; the whole event was a deepfake. Arup lost around $25 million to the scammers.
The emergence of convincing deepfakes like this has caught everyone off guard following earlier skepticism:
"A lot of people laughed at it, thinking things like, 'It’s got six fingers! It's only going to be people who are naive or aren't really on the alert who are going to be fooled by this stuff,'” Hoskin recalls.
“But very quickly, over the course of 12 or 18 months, this has gone from generating images and video with obvious flaws, to producing content that is quite subtle."
Unfortunately, that means the experience of Arup is no longer a rare one. In the past year, roughly one in two companies were hit by deepfake fraud attacks, which cost businesses almost $450,000 on average. The total value of financial losses associated with deepfake technologies has reached $1.56 billion, with more than $1 billion of that figure occurring in 2025 alone.
The technology has grown so sophisticated that just 0.1% of consumers in the U.S. and U.K. are now able to correctly tell the difference between authentic and fake content.
Deepfake videos are even more persuasive than images: viewers are 36% less successful at telling real videos from synthetic ones than they are with images.
And although fears about deepfakes are growing, many people remain uninformed about the technology. One in five (22%) report no prior knowledge of deepfakes, including 30% of individuals aged 55-64 and 39% of those over 65, highlighting the increased vulnerability of older demographics in the face of this emerging danger.
At the other end of the spectrum, people vastly overestimate their ability to identify deepfakes, with over 60% believing they could tell the difference. This overconfidence was especially prominent among young adults (18-34), pointing to an illusion of security almost as dangerous as ignorance.
Nico Alvear, AI Product Manager at ZeroFox, warns how deepfakes have fundamentally changed the online landscape:
“The tools have improved so much, and the barriers to creating content like this are low, the technology has spread into different categories of deepfakes,” he says.
“At first it was just funny stuff like YouTube parodies. Now you have deepfakes that are extremely well done, and aimed to do harm."
As deepfakes have evolved from harmless entertainment into one of the most pressing cybersecurity challenges organizations face today, Hoskin explains that every organization should assume it will be the target of a deepfake attack at some point:
"Ultimately, any company generating a reasonable amount of revenue is a target for these threat actors. Attacks are becoming more indiscriminate and as it becomes cheaper and cheaper to produce more sophisticated videos, we are only seeing the number of threats increase,” he says.
"If you have customers or business partners that trust you, and are giving you money or interacting with you, that is enough to be a target for a deepfake threat actor."
The good news is that organizations can build a comprehensive deepfake response plan with fast detection, rapid remediation, and continuous improvement to defend themselves as the threats proliferate.
But before we explore the ins and outs of creating an effective deepfake incident response plan, let’s go over some of the essential information we need to understand the phenomenon better.
What are Deepfakes?
Deepfakes are created by machine learning algorithms that study real content, then generate synthetic media, from text and images to audio and video, that appears remarkably authentic.
Common Deepfake Attack Vectors
Deepfakes can co-opt the likenesses of your company’s brands and personnel in a variety of ways. Each method takes advantage of different weaknesses in how people think and how organizations operate, which means they need specific strategies to defend against them. Here are the most common threats being used against organizations right now:
Five Types of Deepfake Attacks:
1. Video Deepfakes: Complete face replacements in video content. Attackers use these to impersonate executives in recorded messages or live calls.
2. Audio Cloning: Synthetic voice generation from brief audio samples. Fraudsters use cloned voices for phone-based social engineering.
3. Image Manipulation: AI-generated photos for document forgery, identity theft, or creating fake profiles.
4. Synthetic Text: AI-written content that mimics specific writing styles for targeted phishing campaigns.
5. Virtual Assistants: Chatbots and voice assistants programmed to deceive through natural language processing.
Recent real-life examples show how different deepfake attacks can slip past standard security systems and take advantage of people's trust. Here are some incidents that demonstrate today's deepfake tactics in action:
- Executive Impersonation: Among countless examples, a recent one involved a deepfake of Nvidia CEO Jensen Huang promoting a crypto scam, reaching nearly 100,000 viewers, eight times more than the genuine Nvidia livestream. The deepfake included a QR code encouraging cryptocurrency donations; the number of victims scammed remains unknown.
- Brand Exploitation: Likenesses of Tesla CEO Elon Musk make frequent appearances in deepfake scams. But the Tesla brand itself was targeted in a viral video that falsely claimed Donald Trump had announced a US ban on Tesla production. The clip was edited using AI to combine real footage with manipulated audio and visuals, making it appear authentic.
- Recruitment Fraud: An emerging threat sees North Korean hackers using deepfakes to land jobs at multinational companies. This scam bypasses international sanctions against the country, provides it with much-needed revenue streams, and also allows the phony employees to steal information or plant malware.
- Customer Targeting: Deepfake technology is also fueling highly personalized social engineering attacks on customers, where AI-generated video or audio mimics company employees or the account holders themselves. This enables fraudsters to target victims via phone calls or video conferences with offers or requests that appear authentic, causing customers to transfer funds or disclose credentials. Hoskin warns why this threat is so insidious: "The interaction happens between customer and threat actor, making it invisible to the company unless they're monitoring for these types of attacks."
Defending Against Deepfakes: Preparation and Prevention
Understanding deepfake attack patterns is just the first step. Now that we've seen how scammers easily exploit trust to penetrate organizations, it's time to turn knowledge into action. You can't protect what you don’t know about, so creating a robust defense starts with taking an honest look at your organization and mapping out specific threats. Here’s how it works:
Risk Assessment
Begin by identifying the high-value targets within your organization, from executives and finance teams to customer-service departments. These are the faces and voices attackers are most likely to impersonate.
You should also estimate your potential financial exposure and the operational disruption caused by unauthorized transfers, along with the reputational damage resulting from fake announcements.
Deepfake incidents may trigger reporting obligations under regulations like GDPR, SOX, or HIPAA. So, it’s advisable to map your compliance requirements and understand your responsibilities before any crisis occurs.
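To make this assessment actionable, it helps to capture it in a structured form your response team can consult during an incident. Below is a minimal, hypothetical sketch in Python; the names, exposure figures, and regulation labels are placeholders to be replaced with the output of your own assessment, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ImpersonationTarget:
    """One person or team whose face or voice attackers might clone."""
    name: str
    role: str
    public_media: list[str]             # known public audio/video sources
    est_financial_exposure_usd: float   # rough worst-case loss if impersonated
    regulations: list[str] = field(default_factory=list)  # reporting obligations

# Hypothetical entries for illustration only.
risk_register = [
    ImpersonationTarget(
        name="Jane Doe", role="CFO",
        public_media=["earnings call webcasts", "conference keynotes"],
        est_financial_exposure_usd=25_000_000,
        regulations=["SOX"],
    ),
    ImpersonationTarget(
        name="Support Team", role="Customer Service",
        public_media=["help-center videos"],
        est_financial_exposure_usd=500_000,
        regulations=["GDPR"],
    ),
]

# Rank targets so monitoring and verification effort goes where exposure is highest.
for target in sorted(risk_register, key=lambda t: t.est_financial_exposure_usd, reverse=True):
    print(f"{target.role}: ${target.est_financial_exposure_usd:,.0f} exposure, obligations: {target.regulations}")
```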
Technical Defenses
It’s relatively quick and easy to create deepfakes, which means they are deployed far and wide across the internet in ever-increasing volumes.
The following technological solutions can sometimes be useful for dealing with the enormous scale and speed of the threat:
- AI Detection Tools: Some detection systems can scan video, audio, and images for signs of manipulation. But their effectiveness is limited, and remember that today's detector becomes tomorrow's training data for more sophisticated fakes.
- SIEM Integration: Connect deepfake detection to your security information and event management systems to track patterns across multiple incidents and identify campaigns (a minimal integration sketch follows this list).
- Content Verification: Implement content integrity gateways for video conferences. Scan participants before meetings begin. Extend zero-trust principles beyond networks to include media verification.
- Voice Channel Hardening: Embed cryptographic signatures in legitimate voice traffic. Create watermarks that prove authenticity without disrupting communication.
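Picking up the SIEM integration point above, here is a minimal sketch of how a deepfake detection alert could be forwarded to a SIEM as a CEF-formatted syslog event over UDP. The vendor, product, field choices, and collector address are assumptions for illustration; your SIEM's supported ingestion format (CEF, LEEF, JSON) and transport will dictate the real details.

```python
import socket
from datetime import datetime, timezone

SIEM_HOST, SIEM_PORT = "siem.example.internal", 514  # hypothetical syslog collector

def send_deepfake_alert(subject: str, channel: str, source_url: str, confidence: float) -> None:
    """Emit a CEF-style syslog event so deepfake detections can be correlated with other incidents."""
    timestamp = datetime.now(timezone.utc).isoformat()
    # CEF header: Version|Vendor|Product|ProductVersion|SignatureID|Name|Severity|Extension
    cef = (
        "CEF:0|ExampleCorp|DeepfakeMonitor|1.0|DF-001|Suspected deepfake content|8|"
        f"end={timestamp} suser={subject} cs1Label=channel cs1={channel} "
        f"request={source_url} cfp1Label=detectorConfidence cfp1={confidence:.2f}"
    )
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        # <134> = syslog priority for facility local0, severity informational
        sock.sendto(f"<134>{cef}".encode("utf-8"), (SIEM_HOST, SIEM_PORT))

# Example: a monitoring tool flags a video that appears to impersonate the CFO.
send_deepfake_alert("CFO", "video-conference", "https://example.com/suspect-clip", 0.87)
```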
These detection and verification systems are one layer of defense, but you should understand their limitations. Alvear warns that identifying content that has been synthetically manipulated is “not the full story”.
“The focus must be on the intent of the video, because the deepfake arms race is unwinnable,” he says.
“Even if you have an effective deepfake detector today, it will be used to train the next generation of deepfake generators to produce undetectable content."
Procedural Safeguards
While deepfakes are created with advanced machine learning, in the end, they’re still just tools used as part of wider social engineering attacks. Technological solutions alone won't save you from exploitation. Your defense must be backed up with robust processes to verify unusual requests. These include:
- Multi-Channel Verification: Never authorize high-risk actions based on a single communication channel. If someone requests a wire transfer, contact them for confirmation via another method. For example, if they send a video message, confirm with a voice call to a known number or registered email address (a minimal sketch of this verification gate follows the list).
- Callback Protocols: Establish mandatory callbacks for financial transactions, credential changes, or policy modifications. Use pre-arranged code words that change regularly. Institute a "code of the day" for sensitive operations.
- Separation of Duties: Require two or more employees to authorize financial transactions. Make it impossible for a single deepfake to compromise critical operations.
- Escalation Paths: Speed matters more than certainty in the early stages of an incident, so create clear escalation paths that ensure every employee knows whom to contact when they suspect a deepfake.
- Role-Specific Requirements and Responses: For example, executives need to know how to avoid becoming deepfake subjects and what to do if it happens. Finance teams must have special protocols for verification. IT staff should understand which technical indicators to investigate.
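To make the multi-channel and callback requirements concrete, here is a minimal sketch of an approval gate that refuses to release a high-risk action until confirmations arrive over at least two independent channels, one of which must be a callback to a pre-registered number. The channel names, directory, and two-channel threshold are illustrative assumptions rather than a prescribed implementation.

```python
from dataclasses import dataclass, field

# Hypothetical directory of pre-registered contact points, maintained out of band.
TRUSTED_CALLBACK_DIRECTORY = {"jane.doe": "+1-555-0100"}

@dataclass
class HighRiskRequest:
    """A wire transfer, credential change, or policy modification awaiting approval."""
    requester: str
    description: str
    confirmations: set = field(default_factory=set)  # channels that have confirmed

    def record_confirmation(self, channel: str) -> None:
        self.confirmations.add(channel)

    def approved(self) -> bool:
        # Rule 1: at least two independent channels must confirm.
        # Rule 2: one of them must be a callback to a known number.
        return len(self.confirmations) >= 2 and "callback" in self.confirmations

request = HighRiskRequest(requester="jane.doe", description="Wire transfer of $250,000")
request.record_confirmation("video-call")   # the channel the request arrived on
print(request.approved())                   # False: a single channel is never enough

callback_number = TRUSTED_CALLBACK_DIRECTORY[request.requester]
# ...place the call to callback_number, verify the code word, then record the result...
request.record_confirmation("callback")
print(request.approved())                   # True: two channels, including a callback
```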
Managing Digital Footprints
Deepfakes are only possible when attackers have easy access to text, images, audio, and video samples they can use to train their models. So, it’s wise to reduce the amount of raw material available to attackers. Start by:
- Auditing Executive Exposure: Review what content exists online, including LinkedIn videos, YouTube presentations, and podcast appearances. Every public recording is potential training data for voice clones and face swaps.
- Creating Controlled Samples: Record executives in controlled conditions with consistent lighting and backgrounds. Use these verified samples as baselines to compare suspicious content against.
- Limiting High-Quality Sources: The better the source material, the more convincing the fake, so remove or restrict access to executive appearances in high-resolution photos, high-definition video, and long-form audio recordings.
Deepfake Red Flags for Your Human Firewall
Your employees are both a primary target for scams and a source of raw material for deepfake creation, but they're also an effective line of defense against deepfake attacks. To transform them from potential victims into an effective human firewall, you need ongoing investment in their ability to question what they see and hear. So, teach employees to recognize manipulation attempts by showing them real examples. Train them to look for the subtle inconsistencies that point to synthetic deepfake content. These include:
- Audio: Unnatural pauses between words. Pitch that doesn't match the speaker's normal range. Background noise that cuts in and out. Breathing patterns that seem wrong.
- Visual: AI doesn't know how light and geometry work in the real world. In many deepfakes, the lighting is off, shadows don't cast in the right directions, and the perspective is distorted. Examine human faces and hands for oddities and inconsistencies, like eye movements that don't track naturally or facial features that blur during quick movements.
- Behavioral: Requests that bypass normal procedures, unusual urgency without clear explanation, communication from unexpected channels, and language that doesn't match the person's usual style are all signs that you should be on your guard.
Alvear also emphasizes the importance of teamwork: "Whenever something looks odd, or you have a hunch, asking, 'Should I really be trusting this person, clicking this link, or going into this website?'—especially if it is something you usually don't do on a regular basis—then involve security and other people. Involve more people so that the criteria of the masses will help you decide if the content is good or bad."
Deepfake Incident Response: When Prevention Fails
Even the best-prepared organizations will encounter deepfakes sooner or later. But the difference between a close call and a catastrophe often comes down to how quickly and decisively you act in the first few minutes after detection.
When someone raises the alarm, whether it's a suspicious video, a fake LinkedIn account, or a questionable voice message, the best response is swift, systematic, and coordinated.
Let’s take a look at how your deepfake response plan should work when an incident occurs.
Immediate Containment
When you suspect a deepfake attack, the number one rule is to move fast. Nico Alvear emphasizes that the nature of deepfakes requires immediate action to prevent them from causing widespread harm:
“They need to be found urgently so they don't go viral and cause serious damage,” he says.
“When something has already gone viral and is everywhere, you can try to prove to people and give evidence that something is fake, but they will still choose to believe whatever they want at that stage."
Hoskin agrees: ”Speed is going to be critical here, the first thing you want to do is limit the impact of the content, which means working with whatever platform or social media provider is hosting this content to get it taken down,” he says.
“Sometimes that content is spread across multiple different platforms, so you might need to get it taken down across many different sites."
What containment looks like in your deepfake incident response depends on the attack vector being used. It could involve one or a combination of these actions:
- Platform Coordination: Work with social media companies to:
  - Remove malicious content
  - Lock down hijacked accounts or assets
  - Prevent further distribution of suspicious content
  - File takedown requests on every platform where the content appears
  - Document all removal requests and responses (a minimal tracking sketch follows this list)
- Freeze potentially compromised processes:
  - Stop any pending transactions
  - Halt any policy changes
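Because takedown requests often span several platforms at once, it helps to log each one the moment it is filed. The sketch below is a hypothetical, minimal tracker; the platform names, file location, and status values are placeholders for whatever your incident tooling already uses.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("takedown_requests.csv")  # hypothetical location inside the incident folder

def log_takedown_request(platform: str, content_url: str, status: str = "filed") -> None:
    """Append one takedown request so removal efforts can be audited later."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as handle:
        writer = csv.writer(handle)
        if new_file:
            writer.writerow(["filed_at_utc", "platform", "content_url", "status"])
        writer.writerow([datetime.now(timezone.utc).isoformat(), platform, content_url, status])

# Example: the same fake executive video surfaces on two platforms.
log_takedown_request("VideoPlatformA", "https://example.com/fake-ceo-clip")
log_takedown_request("SocialNetworkB", "https://example.net/fake-ceo-repost")
```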
Attack Confirmation
Once you have limited the potential for damage, you can verify you’re actually being targeted by deepfake deception using protocols established in your deepfake response plan. This may include:
- Comparing the suspicious content against genuine samples
- Using technical detection tools
- Contacting the subject of the deepfake through a trusted channel
- Documenting all related communications for future reference
Crisis Communication
If a deepfake attack is confirmed, the next stage is to bring the narrative under your control before it controls you.
- Internal Communication: Getting everyone on the same page helps you coordinate an effective deepfake incident response:
  - Alert your response team immediately
  - Notify legal counsel for guidance on regulatory requirements
  - Brief executives on the situation and response plan
  - Inform affected departments to keep them vigilant
- External Messaging: If the deepfake reaches customers or partners, communicate quickly but carefully:
  - Acknowledge the incident without admitting fault
  - Provide clear instructions on official communication channels
  - Offer specific steps people should take if they were affected
Hoskin emphasizes the importance of customer communication: "Follow up with the people who have been impacted to tell them, 'This is a scam. If you're thinking about investing in it, don't,' and channel them towards the authorized places where they can get information.”
Evidence and Reporting
While your communications team manages the public narrative and your security team contains the immediate threat, it’s crucial not to overlook the importance of record keeping, even in the heat of a crisis. The evidence you collect will decide whether you can pursue legal action, claim insurance, comply with regulations, and, perhaps most importantly, prevent the next attack.
Here’s what you need to do:
- Preserve all evidence of the attack
- Create forensic copies of the deepfake media (a hashing sketch follows this list)
- Document the distribution channels used
- Map the timeline from creation to discovery
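As one way to make those forensic copies defensible, the sketch below fingerprints each captured file with SHA-256 and records where and when it was collected in a simple manifest. The paths and field names are illustrative; your legal team's chain-of-custody requirements take precedence over any particular format.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def add_to_evidence_manifest(media_path: str, source_url: str,
                             manifest_path: str = "evidence_manifest.json") -> dict:
    """Record a SHA-256 hash of a captured deepfake file plus where and when it was collected."""
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    entry = {
        "file": media_path,
        "sha256": digest,
        "source_url": source_url,
        "collected_at_utc": datetime.now(timezone.utc).isoformat(),
    }
    manifest = Path(manifest_path)
    records = json.loads(manifest.read_text()) if manifest.exists() else []
    records.append(entry)
    manifest.write_text(json.dumps(records, indent=2))
    return entry

# Example: preserve a copy of a fake video pulled from a social platform.
add_to_evidence_manifest("captures/fake_ceo_video.mp4", "https://example.com/fake-ceo-clip")
```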
Preserving and documenting the evidence internally is one side of the coin; you will also need to report the incident to the appropriate authorities. This may involve:
- Filing complaints with law enforcement, even if you doubt they can help
- Submitting regulatory notifications as required
- Sharing indicators with industry partners to prevent wider damage
Recovery and Improvement
Treat every deepfake incident, whether successful or not, as a learning opportunity that helps you shore up your defenses before the next attempt occurs. Here’s how to turn an unpleasant experience into invaluable intelligence:
Post-Incident Analysis
- Conduct a thorough review without assigning blame. This should:
  - Focus on system failures, not individual mistakes
  - Identify the attack vector and examine how attackers acquired the source material
  - Determine how the attack was finally detected
- Assess the business impact accurately:
  - Calculate direct financial losses
  - Estimate reputational damage through customer surveys
  - Document operational disruption in hours lost
Strengthening Defenses
With your thorough documentation of the attack and post-incident analysis, you can start making improvements to your deepfake response plan:
- Update verification protocols based on what you learned: For example, if callback procedures failed, redesign them. If technical detection missed the fake, evaluate new tools.
- Revise training programs to address revealed weaknesses: Use the real-world incident as a teaching tool. Update simulation exercises to match recorded attack patterns.
- Harden vulnerable processes: Start by adding authentication steps to critical workflows and implementing time delays for high-value transactions. Introduce manual review requirements for unusual requests.
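One way to implement the time-delay idea is to place any transaction above a threshold into a hold that cannot be released until a cooling-off period has elapsed, giving callbacks and reviews time to catch up. The threshold and delay below are arbitrary illustrative values, not recommendations.

```python
from datetime import datetime, timedelta, timezone

HIGH_VALUE_THRESHOLD_USD = 50_000          # illustrative threshold
COOLING_OFF_PERIOD = timedelta(hours=4)    # illustrative delay before release

def earliest_release_time(amount_usd: float, requested_at: datetime) -> datetime:
    """Return when a transaction may execute; high-value transfers wait out a hold period."""
    if amount_usd >= HIGH_VALUE_THRESHOLD_USD:
        return requested_at + COOLING_OFF_PERIOD
    return requested_at  # routine transfers proceed immediately

now = datetime.now(timezone.utc)
print(earliest_release_time(250_000, now))  # held for four hours, leaving time for verification
print(earliest_release_time(1_200, now))    # released immediately
```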
Continuous Evolution
Your organization’s experience is valuable, but as detection improves and awareness grows, attackers shift tactics. The key to staying ahead is assuming yesterday's defense is today's vulnerability. So keep your deepfake incident response plan updated based on evolving realities. You can achieve this by:
- Tracking emerging techniques through threat intelligence
- Monitoring criminal dark web forums for new tools and techniques
- Studying public deepfake incidents to identify new patterns
- Partnering with external cybersecurity experts
Testing and Simulation
Your teams need plenty of practice to ensure they’re well-prepared for the new business reality of deepfakes. This can take the form of:
- Tabletop Exercises:
  - Simulate a deepfake crisis (test whether your team follows protocols when the "CEO" demands immediate action)
  - Run quarterly simulations with increasing sophistication
- Phishing Tests:
  - Use synthetic media in security awareness tests
  - Audit verification protocols for compliance
  - Measure how many employees verify before complying
- Measuring Metrics: Track improvement over time (a minimal calculation sketch follows this list):
  - Monitor detection latency: how long before someone questions the content
  - Track verification lag: time to complete callback protocols
  - Document protocol adherence rates
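To track the metrics named above, it can be enough to record a few timestamps per exercise or incident and summarize them over time. This is a minimal sketch with made-up sample data; detection latency and verification lag are computed exactly as defined in the list.

```python
from datetime import datetime
from statistics import mean

# Hypothetical records from two exercises: when the fake arrived, when someone
# first questioned it, and when the callback/verification protocol was completed.
incidents = [
    {"received": datetime(2025, 3, 1, 9, 0),  "questioned": datetime(2025, 3, 1, 9, 42),
     "verified": datetime(2025, 3, 1, 10, 5)},
    {"received": datetime(2025, 6, 2, 14, 0), "questioned": datetime(2025, 6, 2, 14, 9),
     "verified": datetime(2025, 6, 2, 14, 31)},
]

detection_latency = [(i["questioned"] - i["received"]).total_seconds() / 60 for i in incidents]
verification_lag = [(i["verified"] - i["questioned"]).total_seconds() / 60 for i in incidents]

print(f"Mean detection latency: {mean(detection_latency):.0f} minutes")
print(f"Mean verification lag:  {mean(verification_lag):.0f} minutes")
```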
Start Building Deepfake Resilience Now
An uncomfortable truth that many cybersecurity companies try to avoid is that perfect deepfake detection is impossible, and the lines between real and fake blur more each day. But your deepfake response plan can help you build resilience.
"Don't wait until a customer or client phones you up, saying, 'I just received a message from your CEO that suggested I do this, and I've done it',” Hoskin warns.
“Prepare for the risk ahead of time by getting the right tools and processes in place now so that you are protected when these risks start targeting your business."
ZeroFox is a digital risk protection platform that specializes in detecting and neutralizing sophisticated online threats across your external attack surface, from social networks and ecommerce marketplaces to the deep and dark web.
Where competitors fight a losing battle to identify deepfakes technologically, ZeroFox takes a different position, using semantic analysis to understand a perpetrator's intent and identify content that is designed to do harm. Instead of casting the widest possible net and flooding you with irrelevant alerts, this approach prioritizes assets actually related to your company, brand, and personnel to provide targeted protection against emerging scams and malicious campaigns.
Hoskin outlines ZeroFox's three-pronged strategy with regard to deepfakes: "The number one way we help you is with improved discovery across channels: we help you find the potential threat as quickly as possible.”
“Number two is verification. We provide you with the information and context you need to decide whether or not something is fake.”
“Disruption is number three. We help you get it taken down, working with a whole range of partners to get content taken down as quickly as possible and at scale."
While customers already benefit from ZeroFox's comprehensive digital risk protection, including real-time monitoring, impersonation takedowns, and brand safeguarding, the platform is continuing to develop next-generation defenses to stay ahead in this new reality where seeing is no longer believing. Watch this space for exciting news about advances in deepfake protection.
"Proactive protection is no longer a nice-to-have; it's a must-have,” Hoskin says.
"If you are still thinking about defending only what's inside your network perimeter and not aware of what's happening online, you are really only protecting half your business, because your brand is out there and the risk is out there right along with it, it’s as simple as that.”
Start learning how ZeroFox can improve your organization's deepfake incident response today. Because tomorrow, someone might start pretending to be you.