
The rise of deepfakes is blurring the line between reality and fiction, and telling the two apart is getting harder by the day.
For example, when deepfake porn images and videos of Taylor Swift flooded onto social media platforms in early 2024, they were quickly identified as AI-generated hoaxes.
But deepfakes are improving rapidly, evolving away from obvious flaws, like images with six fingers, to subtle content that can trick most people, especially if they are not paying close attention.
Fast-forward to the fall of 2025, and highly persuasive deepfakes are making waves across media, business, and politics.
Even if your business hasn’t been attacked yet, deepfakes are so easy and cheap to produce that they’re becoming impossible to avoid. So, how do you deal with these new alternative realities and keep your brand, staff, and customers safe?
Read on to learn what our experts have to say about deepfake detection in cybersecurity and how to protect yourself and your business.
Deepfakes are videos, images, audio, or text created by AI to convincingly simulate real people without their consent. Bad actors without much technical know-how or expense can make deepfakes that say or do virtually whatever they want. They use deepfakes for scams, social engineering attacks, blackmail attempts, and a whole host of other activities that can damage reputations, cause financial loss, or even compromise national security.
Anyone connected to your organization could be the subject of deepfake content or the target of a deepfake scam, from C-suite executives to customer service reps and even their friends and relatives.
Any organization generating a reasonable amount of revenue is a target for threat actors using deepfakes, explains ZeroFox product management director Thomas Hoskin.
“If you have a high-profile executive or a spokesperson for your company, it is very likely that their image is going to be targeted,” he says.
“As it becomes cheaper and cheaper to produce more sophisticated videos, we are only seeing the number of threats increase and attacks are becoming more indiscriminate.”
Companies all over the world are seeing an explosion of deepfake cybersecurity threats. There were thought to be only around 500,000 deepfakes in circulation in 2023, yet over the following year almost 50% of businesses were targeted by audio and video impersonations. By the end of 2025, the number of deepfakes in circulation is predicted to climb to 8 million.
So far, total related losses have reached $897 million, and that figure is mounting rapidly. After $360 million in damages across all of 2024, the first six months of 2025 alone saw $410 million in recorded losses, putting 2025 on track to more than double the previous year's total. The same period also saw 171% more deepfake incidents than the previous seven years combined.
To get a picture of where we’re headed, consider that Deloitte predicts that losses due to business email fraud alone could potentially surge to $11.5 billion by 2027 as deepfake AI tools help bad actors to target numerous victims simultaneously with minimal additional resources.
The speed at which deepfake threats are multiplying, and the projected losses, are staggering. But “deepfake” is an umbrella term covering a broad spectrum of technologies and synthetic media, most commonly generated using deep-learning neural networks. Organizations that understand the technical nature of each type of attack can better calibrate their security measures and train their teams to spot the telltale signs of manipulation. Let’s take a look at how deepfake cybersecurity threats work.
The sheer technological variety of deepfake attacks can seem overwhelming, but analysis of actual incidents reveals that most attacks follow predictable patterns. By recognizing these typical threats, organizations can more efficiently prepare their defenses. To date, nearly 99% of the $897 million lost to deepfakes has come via four types of fraud:
1. Famous fakes:
The most damaging category involves criminals posing as celebrities to lure victims into fake investment opportunities, accounting for over $400 million in losses.
2. Corporate impersonations:
Fraudsters mimic executives to authorize illegitimate fund transfers; this category ranks second at $217 million.
3. Biometric security spoofing:
Deepfakes used to get around biometric verification systems, for example to secure loans or steal data, have already generated $139 million in losses.
4. Romance-based scams:
Catfishers employing deepfakes to mislead targets into an online romantic relationship or to commit financial fraud have swindled $128 million out of victims.
But numbers alone can’t capture the full devastation caused by deepfake attacks. Behind every dollar lost is the human story of a retiree's life savings stolen, a company's reputation destroyed, or an executive's authority undermined. Read on to see how these attacks unfold in the real world.
As deepfakes are primarily attacks on human perception, trust, and judgment, we need to accept the fact that all of us are vulnerable to manipulation. Victims are no longer limited to the technologically naive or inattentive; increasingly, they include savvy professionals, established corporations, and even government officials who thought they knew better. Learning from the unfortunate experiences of others can help reinforce our own psychological defenses. Let’s explore the stories of how deepfakes are affecting real organizations and individuals.
A watershed moment for this particular risk involved videos of business magnate Elon Musk. As a high-profile financial and tech figure, Musk is particularly attractive to scammers looking to exploit trust and authority. These videos appear to be authentic investment advice from someone who is regularly in the public eye.
Hoskin explains, “Essentially, the pitch is: 'Come and invest in this platform, you're going to get guaranteed returns. Invest $5,000 this month and get $10,000 next month'.”
“It would be very easy, in my opinion, for somebody to look at one of those videos, not realize the video is fake, and fall victim to the scam.”
Recently, a large campaign used AI-generated footage to show Musk promoting a so-called “Quantum AI” investment platform, which lured people into fake transactions under the guise of guaranteed returns. Victims were often pressured by call-center operators, then asked for additional funds as “fees” or “taxes” to access their supposedly growing profits.
While no global total has been published, investigations reveal that victims have lost tens of thousands of dollars (A$80,000 in one Australian instance) and that the scam network’s ad spend ran into tens of thousands of pounds.
Deepfake celebrities are one thing, but what if someone deepfakes your C-suite?
Global engineering firm Arup fell victim to a highly elaborate deepfake attack that saw millions of dollars lost in one fell swoop. First, an employee received a transaction request that seemed to come from the company’s CFO. Next, cybercriminals infiltrated an internal video conference, using a cloned voice and images to impersonate the CFO. The deepfake CFO was made even more believable because it was backed up by scammers impersonating other company employees on the same call.
Trusting the impostors were who they appeared to be, the employee authorized transfers totaling HK$200 million ($25 million). Once the scam was discovered, the company reported the fraud to the police, but no arrests have yet been made. The fallout eventually led to the resignation of Arup’s East Asia chair, Andy Lee.
High-level executives are also commonly impersonated to defraud customers. For example, someone in a Meta ad claims to be an executive at a major bank and directs customers to invest in a fraudulent WhatsApp investment group. What makes these external attacks particularly insidious is that the actual fraud often occurs entirely between the customer and the attacker; the company may never directly interact with the threat actor, yet it still faces devastating consequences in terms of reputational damage, financial liability, and regulatory penalties.
Internal infiltration also involves bad actors using deepfakes to gain a foothold within organizations. For example, there have been several high-profile cases of interview and employment fraud, including North Korean hackers and coders successfully applying for and obtaining positions at major companies like Google and Meta. Once hired, they work those jobs while simultaneously stealing sensitive company information or installing malware.
Hoskin warns these types of deepfake breaches are going to become more common, putting a greater number of people at risk.
“The technology has been getting better and better, very rapidly, and we’re seeing threat actors target businesses at a wider and wider scale,” he says.
“In the next year or so, it’s pretty much going to get to a state where I think the majority of people won't be able to tell whether something is malicious or fraudulent without some kind of detection technology to help them.”
As damaging as these incidents are, direct financial harm to businesses and individuals is just one aspect of the emerging deepfake cybersecurity threat. In fact, the remaining two-thirds of cases involve manipulation that stretches from private spaces to the world stage.
Deepfakes are often highly targeted to manipulate, blackmail, or exploit a single victim.
Marilyn Crawford fell prey to an AI voice-cloning scam when fraudsters impersonated her grandson, claiming he needed $9,000 for bail. The AI-generated voice was convincing enough that Crawford agreed to transfer the money; the scammers even sent a taxi to take her to the bank. Fortunately, an alert bank employee stopped her from completing the transaction and losing the money.
However, the authorities are not always so quick to intervene when deepfake cybersecurity threats are identified.
When Elliston Berry, a 14-year-old Texas freshman, discovered that deepfake nude images of her were being passed around her school, she dealt with intense distress and humiliation. It was bad enough trying to convince classmates that the images weren't real, but then her family hit a brick wall when they discovered schools, social media platforms, and even law enforcement were unprepared to address the situation.
“It took eight and a half months to get these images off Snapchat. That doesn’t wipe the photos off people’s devices,” she later told a U.S. Senate Committee.
“Every day, I will live in fear that these images will resurface, or someone could easily re-create.”
The production of non-consensual explicit material like this represents almost a quarter of all deepfake incidents. But in this case, Elliston and her mother fought back and went public with their story, meeting with lawmakers including Senator Ted Cruz and First Lady Melania Trump. The family’s pressure contributed to the introduction of the national Take It Down Act aimed at blocking deepfake images. While the creator of the fake nudes was eventually identified and charged, the emotional and reputational damage was already done. What’s more, many jurisdictions around the world still lack similar legislation to give victims justice.
At the other end of the deepfake spectrum, political manipulation using synthetic media makes up over one-fifth of cases. Bad actors design viral deepfake content to publicly shame and discredit individuals, divide communities, manipulate elections, move stock markets, or even foment tensions between countries.
In May 2025, the FBI released an alert warning of sophisticated scammers imitating high-ranking officials with text and deepfake voice messaging schemes. These campaigns appeared designed to trick other government leaders into revealing confidential information or transferring money.
One such target seems to have been Secretary of State Marco Rubio. In July 2025, unknown fraudsters were discovered using his identity and various deepfake techniques in attempts to communicate with senior U.S. and foreign government lawmakers.
Another notable case involved an unauthorized person gaining access to White House Chief of Staff Susie Wiles' phone to contact various senators, governors, and business leaders while masquerading as her, triggering investigations by both the White House and FBI.
This type of deepfake deception is now a global phenomenon. In Canada, cybersecurity and anti-fraud organizations warned that criminals were using AI technology to create deepfake government leaders for phone and text scams aimed at harvesting private data, stealing funds, or planting malicious software. Meanwhile, in Europe, Russian intelligence operatives have allegedly been posing as Ukrainian security agencies to try to trick Ukrainian citizens into carrying out sabotage activities.
So far, all our examples illustrate the most common fear that comes with deepfakes — being tricked into believing something that isn't real. However, as Hoskin explains, there’s another edge to the problem of deepfake cybersecurity threats:
“I happened to see a video of the CEO of a bank saying, 'Cryptocurrency is going to be a big new thing, you should start investing in it now',” he says.
“As I watched it I was thinking, 'That's an obvious deepfake, can't people tell?' But, actually, that one turned out to be real!”
This "reverse gullibility" shows deepfakes are a corrosive force in more ways than one. As they weaken trust in all digital media, people may soon automatically ignore even authentic communications.
The U.S. President himself now posts so many deepfake videos that even his own staff are unsure which depict real events. This has led to a situation where authentic presidential addresses broadcast from the Oval Office are widely dismissed as AI-generated deepfakes, prompting numerous rebuttals from the mainstream media.
As the chaos caused by deepfakes grows, many organizations will naturally turn to their existing security infrastructure for protection. Unfortunately, the security tools organizations have relied on for decades were never designed to combat threats that exploit human perception more than technical vulnerabilities.
Traditional security tools are built to scan bits and bytes and catch technical dangers — such as malicious code, suspicious network activity, or system intrusions. These threats leave detectable fingerprints like unusual signatures or error logs.
But there’s nothing technically “wrong” with deepfake images, videos, or voice messages for security software to detect; they’re just normal media files sent through regular channels. Your firewall sees legitimate traffic, and your EDR solution sees a clean file, or perhaps nothing at all, because the attack is happening outside your network: going viral on social media, scamming your customers on ecommerce platforms, or being traded on the dark web.
Manual methods, such as visual inspection or reverse-image searches, might be fine for checking a handful of suspected deepfakes, but this approach can’t handle today’s volume of potential fake content.
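Even automating the simplest manual technique only goes so far. The sketch below, which assumes a hypothetical store of hashes from previously confirmed fakes, scales the reverse-image-search idea with perceptual hashing, but it only catches near-duplicates; a freshly generated deepfake passes straight through.

```python
# Minimal sketch: a perceptual-hash lookup automates the reverse-image
# check, but only flags near-duplicates of fakes you have already seen.
# KNOWN_FAKE_HASHES is a hypothetical store of confirmed-fake hashes.
from PIL import Image
import imagehash  # pip install ImageHash

KNOWN_FAKE_HASHES: set[imagehash.ImageHash] = set()

def matches_known_fake(path: str, max_distance: int = 6) -> bool:
    """True if the image is within a small Hamming distance of a known fake."""
    h = imagehash.phash(Image.open(path))
    return any(h - known <= max_distance for known in KNOWN_FAKE_HASHES)
```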
Given the scale, speed, and complexity of the problem, it’s natural for companies to wonder whether AI can be used to identify these AI-generated deepfake cybersecurity threats.
ZeroFox AI product manager Nico Alvear agrees that this is a plausible use of the technology:
“AI can detect AI with certain accuracy, whether something is human or not human,” Alvear says.
This is because all of today’s deepfakes share the same flaw: while the model can generate an image or video that looks believable at first glance, the AI doesn’t understand how the physical world works.
“In many deepfake videos, everything is super crisp and well focused,” Alvear explains.
“Things that shouldn't be lit are lit, and shadows are not casting in the correct directions because current AI doesn't understand that yet.”
Some companies are hoping to use AI tools to look for these signatures of artificial creation and automate deepfake detection in cybersecurity.
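For intuition, here is a toy version of one such signature check; published research has found that GAN upsampling layers often leave unusual high-frequency energy in generated images. This is a rough illustrative sketch, not how any production detector (ZeroFox’s included) works.

```python
# Toy frequency-domain heuristic: GAN upsampling has been shown to leave
# characteristic high-frequency artifacts. Illustrative only.
import numpy as np
from PIL import Image

def azimuthal_power_spectrum(path: str, bins: int = 64) -> np.ndarray:
    """Azimuthally averaged log power spectrum, low to high frequency."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    y, x = np.indices((h, w))
    radius = np.hypot(x - w / 2, y - h / 2)
    bin_idx = (radius / radius.max() * (bins - 1)).astype(int)
    power = np.bincount(bin_idx.ravel(), weights=spectrum.ravel(), minlength=bins)
    counts = np.bincount(bin_idx.ravel(), minlength=bins)
    return np.log1p(power / np.maximum(counts, 1))

def high_freq_score(path: str) -> float:
    """Crude score: relative energy in the top quarter of frequencies."""
    profile = azimuthal_power_spectrum(path)
    return float(profile[-len(profile) // 4:].mean() / profile.mean())

# A real detector would compare scores against distributions learned from
# known-real and known-synthetic images, not rely on a hard-coded cutoff.
```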
However, anyone using these techniques will find it increasingly difficult to outpace the rapid advancement of the technology.
This is because many deepfakes are created by generative adversarial networks (GANs), which continuously improve and refine their outputs.
A GAN is a machine learning system built around two AI models that evolve by competing against each other. It’s a struggle between a digital forger, called a generator, and an AI detective, called a discriminator. The generator’s goal is to create fake images, audio, or other content that looks convincing. When it starts out, it’s a complete beginner, producing noise that looks like garbage. Meanwhile, the discriminator gets fed both authentic content and the generator's creations and decides which is real and which is fake.
These two networks train simultaneously in a competitive loop, the generator creating fake samples and trying to fool the discriminator, while the discriminator evaluates everything and learns to spot the phonies. When the discriminator catches a fake, the generator learns from that failure and adjusts its approach to become more convincing. This back-and-forth continues through thousands or millions of rounds until the generator becomes so skilled that the discriminator can barely tell the genuine data apart from the highly realistic synthetic content.
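To make that loop concrete, here is a minimal, illustrative GAN in PyTorch. The generator learns to forge samples from a simple one-dimensional target distribution while the discriminator learns to call out the fakes; real deepfake models follow the same adversarial loop at vastly larger scale.

```python
# Minimal GAN sketch: a generator (forger) learns to mimic samples from
# N(4, 1.25) while a discriminator (detective) learns to spot its fakes.
# Illustrative only; image/video deepfake GANs are vastly larger.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(5000):
    real = 4 + 1.25 * torch.randn(64, 1)   # "authentic" data samples
    fake = G(torch.randn(64, 8))           # the generator's forgeries

    # Discriminator update: label real as 1, fake as 0.
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: learn from failures until fakes get called real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    print(f"generated mean: {G(torch.randn(1000, 8)).mean():.2f} (target 4.0)")
```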
“This is an arms race: as soon as you build an AI detector, it will be used to train better generators,” Alvear warns.
“So, even if you have a deepfake detector today, it will be used to train the next generation of deepfake generators, rendering the detector useless.”
Thanks to this feedback loop, it’s likely deepfake AI will soon learn how to render real-time 3D objects and make light bounce realistically, meaning AI tools looking for those telltale signs will no longer be useful.
“In my opinion, the war of generating AI detectors is already lost. So, we shouldn’t actually worry if something is a deepfake or not,” says Alvear.
“Generative AI itself is not the problem; the problem is when this tool or any media content is being used with malicious intent. That's why it makes more sense to focus on what the content is about.”
So, if you can’t rely on AI for deepfake detection in cybersecurity, how do you best protect yourself?
"Speed is going to be critical here. A lot of deepfake cybersecurity threats are designed to go viral and spread exponentially, the sooner you can find them, the faster you can take them down, and the more you can minimize the damage,” Hoskin explains.
“As soon as something is viral, there is not much that can be done to stop it,” agrees Alvear.
This is why the two biggest mistakes organizations are making right now are, first, hoping disaster never strikes and, second, failing to prepare for the worst.
“Don't wait until a customer or client phones you up, asking for help, saying, 'I just received a message from your CEO that suggested I do this, and I've done it',” Hoskin warns.
“You need to prepare for the risk ahead of time and already have the right tools and processes in place for detecting deepfake attacks as soon as they surface."
Hoskin and Alvear recommend creating what they call a playbook to address specific scenarios, particularly those involving impersonation of executives or authority figures.
The playbook should set out guidelines about how you are going to verify content and what actions you will take if you decide it is a risk.
"Having a playbook about what you do in each scenario, who should verify it and how you can verify it, is really important to ensure you are protected from fake requests," Hoskin says.
"This includes training so that any employees likely to receive those requests know how they should respond."
Once you have your playbook, how can you detect deepfake threats? Here are some of the key warning signs that help with deepfake detection in cybersecurity:

- Lighting that doesn’t match the scene, shadows cast in the wrong direction, or footage that looks unnaturally crisp and uniformly in focus
- Mouth movements that drift out of sync with the audio
- Voices with odd prosody, cadence, or timbre
- Unusual or urgent requests, especially for money transfers, credentials, or actions outside normal procedures
Even with these tells, perhaps the most powerful defense remains decidedly low-tech: human intuition. ZeroFox’s experts emphasize the importance of encouraging a culture of healthy skepticism:
"Whenever something looks odd, or you have a hunch, don’t feel wrong for asking, 'Should I really be trusting this person, clicking this link, or going onto this website?' — especially if it is something you usually don't do on a regular basis,” Alvear advises.
If something sets off alarm bells, don’t be afraid to get colleagues and security teams involved, and let the wisdom of the crowd help you decide if the content is good or bad.
Deepfakes fool eyes and ears, so detection has to look beyond pixels. The most reliable approach blends AI with behavioral analytics across social platforms, domains, apps and APIs, and criminal marketplaces. ZeroFox analyzes visual, audio, text, and network patterns together, then applies analyst review for nuanced calls. The goal is not to flag every deepfake. It is to surface the ones that can move money, move markets, or damage trust.
ZeroFox’s platform delivers defense against deepfake-driven threats by pairing large-scale collection with multimodal analysis and human expertise. Our AI inspects visual cues (face geometry, mouth-to-audio sync, lighting), audio signatures (prosody, cadence, timbre), and text semantics (claims, intent, calls-to-action) alongside network and behavioral patterns such as posting cadence, device and ASN clusters, shared wallets, domains, and API misuse. Analysts then validate edge cases and context. The result is fewer false alarms and faster focus on what can actually harm your brand, executives, or customers.
ZeroFox prioritizes high-risk content first; an executive “wire transfer” video beats a celebrity meme every time.
ZeroFox analyzes potentially harmful content across multiple formats—written posts, video, and audio—to identify genuine threats to your organization. Rather than simply flagging every piece of suspicious content, the platform uses a contextual approach to intelligence to separate actionable risks from false alarms. This means ZeroFox experts examine the intent behind each threat to determine which materials could actually harm your reputation, leadership, or workforce, enabling security teams to focus on those that pose real danger. This triage model concentrates effort on synthetic content that is likely to trigger financial loss, regulatory exposure, or reputational damage, not every viral fake on the internet.
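As a purely illustrative sketch of that triage idea (the fields, weights, and scores below are invented for this example, not ZeroFox’s actual model), per-modality signals can be folded into a single priority score:

```python
# Hypothetical triage sketch: combine per-modality signals into one
# priority score so the riskiest content surfaces first. Weights and
# fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class ContentSignals:
    synthetic_score: float  # 0-1: how likely the media is AI-generated
    identity_match: float   # 0-1: face/voice/logo match to a protected asset
    harmful_intent: float   # 0-1: semantic intent (wire request, credential lure)
    reach: float            # 0-1: normalized audience/virality estimate

def triage_score(s: ContentSignals) -> float:
    # Intent and identity dominate: a crude fake demanding a wire transfer
    # outranks a flawless celebrity meme with no call to action.
    return (0.4 * s.harmful_intent + 0.3 * s.identity_match
            + 0.2 * s.reach + 0.1 * s.synthetic_score)

queue = sorted(
    [("exec 'wire transfer' video", ContentSignals(0.7, 0.9, 0.95, 0.4)),
     ("celebrity meme", ContentSignals(0.99, 0.1, 0.05, 0.8))],
    key=lambda item: triage_score(item[1]), reverse=True)
for name, signals in queue:
    print(f"{triage_score(signals):.2f}  {name}")
```

Note that the meme scores lower despite being “more synthetic”: what matters is whether the content can move money or damage trust.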
“ZeroFox is one of the only companies that can give you full visibility across the open, deep, and dark web, understand what's relevant to your business, and then raise from that content the things that are a genuine risk to your business,” Hoskin says.
“We give you alerts rapidly that are relevant to you so you can take action, cut through the noise, and deal with the things that are actually presenting a risk to your business. That reduces the number of customers harmed.”
Once you confirm a deepfake of your CEO is circulating on social media or a fraudulent ad is mimicking your brand to mislead customers, you need rapid remediation—and that's where takedowns come in. Because detection already links visual, audio, text, and network indicators to an actor and their infrastructure, ZeroFox can move from identification to validated takedown with fewer hops.
A takedown is the formal process of removing sites, profiles, and content that violate a network provider's terms of service. It’s one of the most effective weapons organizations have against deepfake-based fraud, impersonations, and brand abuse. But attacks spread quickly across platforms and countries, involving hundreds of hosting providers, social media networks, and domain registrars. Handling this task manually is close to impossible.
To solve this problem, the ZeroFox takedown solution combines automation, human expertise, and a global disruption network that identifies and removes threats at machine speed on a global scale. The system handles a comprehensive range of threat scenarios across platforms like X (Twitter), Meta, LinkedIn, YouTube, Telegram, and TikTok.
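The exact steps vary by provider, but as a purely hypothetical sketch (not ZeroFox’s actual pipeline), an automated takedown workflow can be modeled as tracking each violating item through a small set of states, with stale cases escalated to human analysts:

```python
# Hypothetical takedown-workflow sketch; states, fields, and timings are
# invented for illustration. Real takedowns go through each provider's
# own abuse-reporting channels and often need human follow-up.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum, auto

class Status(Enum):
    DETECTED = auto()
    REPORTED = auto()    # abuse report filed with the host or platform
    REMOVED = auto()     # provider confirmed removal
    ESCALATED = auto()   # aged out; needs analyst or legal follow-up

@dataclass
class TakedownCase:
    url: str
    platform: str        # e.g. "x", "meta", "youtube"
    status: Status = Status.DETECTED
    evidence: list[str] = field(default_factory=list)  # screenshots, hashes
    opened: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def next_action(case: TakedownCase, hours_open: float) -> Status:
    # Automation files the report and polls for removal; stale cases are
    # escalated to a human instead of silently waiting forever.
    if case.status is Status.DETECTED:
        return Status.REPORTED
    if case.status is Status.REPORTED and hours_open > 72:
        return Status.ESCALATED
    return case.status
```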
Thanks to this combination of automation and human expertise, ZeroFox boasts a proven 98% success rate for executive/VIP, brand, and domain takedowns, executing 1 million+ takedowns annually.
But despite this outstanding performance, deepfake cybersecurity threats are rapidly evolving, so ZeroFox is busy preparing even more powerful solutions to future-proof the platform.
Even as cybersecurity companies focus on inventing new AI-powered methods to detect whether content is fake, threat actors keep working to make it harder to find synthetic signatures in their creations.
But unlike competitors who try to hit every deepfake nail with an AI-powered hammer, ZeroFox understands that deepfake detection in cybersecurity needs to balance the speed and scale enabled by technology with the skilled judgment of human professionals who can recognize subtle nuances and broad contexts.
“Ultimately, the combined expertise of humans empowered with AI is the most appropriate way to deal with deepfakes. If you rely solely on one, it’s never going to beat having both together,” says Hoskin.
ZeroFox’s approach is grounded in ethical, explainable AI. Every detection decision can be traced, validated, and reviewed by analysts to ensure accuracy, transparency, and accountability. This safeguards customers while maintaining trust in how intelligence is applied.

This blended approach is not the only difference in how ZeroFox tackles deepfakes.
“At ZeroFox we are threat hunters, not AI content hunters. We’re not actually going to focus on whether the media is deepfake,” Alvear explains.
“The more important question is: ‘Does it contain a harmful message that could negatively impact our client?’ And therefore, do they need to know about it?” says Hoskin.
Achieving this means understanding whether a piece of content recommends an action that could damage a business, such as moving stock prices, harvesting credentials, or stealing financial assets.
ZeroFox is currently using this approach to develop a scalable and targeted collection and detection solution focused on specific deepfake threats like financial scams and deepfake nudity, among others.
“To solve the problem we’re going out and finding content, whether it is a deepfake or not, then checking if that content is about our customer,” Alvear says.
“We do this using methodologies like facial recognition, logo detection, and voice clone matching, as well as analyzing what the content is about semantically.”
“Then we put the data together to build a case that says, 'This is about you, and it is something harmful that you should look at.'”
Using this new method, ZeroFox can find and flag content abusing a customer’s brand to fool users, even if the brand’s name is not mentioned. The system might make the connection because a logo, app icon, or photo of their CEO is used alongside a message saying something malicious like 'Hey, DM me if you want to double your money’.
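A hypothetical sketch of that decision logic might look like the following; every field, threshold, and label is an invented stand-in, but it captures the two questions being asked: is this content about our customer, and is its message harmful?

```python
# Illustrative "threat hunting, not AI-content hunting" logic: attribute
# the content to a protected customer first (face, logo, or voice match),
# then alert only if the message itself is harmful. All names here are
# hypothetical stand-ins, not ZeroFox's API.
from dataclasses import dataclass

HARMFUL_INTENTS = {"investment_lure", "credential_harvest", "wire_request"}

@dataclass
class MediaAnalysis:
    face_match: float        # similarity to a protected executive's face
    logo_match: float        # similarity to the customer's brand marks
    voice_match: float       # similarity to a protected voice profile
    intent_labels: set[str]  # semantic labels, e.g. {"investment_lure"}

def build_case(a: MediaAnalysis, threshold: float = 0.8) -> str | None:
    about_customer = max(a.face_match, a.logo_match, a.voice_match) >= threshold
    harmful = a.intent_labels & HARMFUL_INTENTS
    if about_customer and harmful:
        return f"This is about you, and it is harmful: {sorted(harmful)}"
    return None  # benign mention or unrelated content: no alert

# Example: a video never names the brand, but the CEO's face appears
# alongside a "DM me to double your money" pitch.
print(build_case(MediaAnalysis(0.93, 0.10, 0.0, {"investment_lure"})))
```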
Watch this space for more news on ZeroFox’s pioneering approach to deepfake detection in cybersecurity.
Beyond identifying and taking down deepfakes, if we can no longer fully trust our eyes and ears, we must begin a fundamental rethinking of business security and authentication practices.
Organizations already use two-factor (2FA) and multifactor authentication (MFA) for sensitive transactions. These procedures will soon be necessary for all sorts of business interactions, even for video calls. While other potential solutions such as code words, callback procedures, independent confirmation channels, and time-delayed authorizations may seem frustrating, they’re necessary adjustments to a world where seeing and hearing are no longer believing.
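As a minimal sketch (the thresholds, delay, and checks are illustrative assumptions, not a prescribed policy), a time-delayed, out-of-band authorization rule for high-value transfers might look like this:

```python
# Illustrative authorization rule: a transfer requested over video, voice,
# or email is never executed immediately. Thresholds are assumptions.
from datetime import datetime, timedelta, timezone

HIGH_VALUE = 50_000            # USD threshold triggering extra controls
COOL_OFF = timedelta(hours=4)  # mandatory delay before funds can move

def may_execute(amount: float, requested_at: datetime,
                callback_verified: bool, second_approver: bool) -> bool:
    """High-value requests must survive a cool-off window, an out-of-band
    callback to a known-good number, and a second human approver."""
    if amount < HIGH_VALUE:
        return callback_verified
    aged = datetime.now(timezone.utc) - requested_at >= COOL_OFF
    return aged and callback_verified and second_approver
```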
But no matter what internal solutions you might introduce, you still need protection from deepfake cybersecurity threats in the third-party spaces outside your control.
“If you are still thinking only about protecting what's inside your perimeter and not protecting yourself elsewhere online, you are really only securing half your business,” Hoskin warns.
“External cybersecurity is no longer a nice-to-have; it's a must-have, because your brand, your employees, and your customers are out there on the web and that’s where the risk is.”
Don’t wait for scammers to target your customers, staff, or C-suite with deepfakes. Contact us today or schedule a demo to discover how ZeroFox can help you monitor, detect, and neutralize threats across social media, the dark web, and all the third-party platforms where your brand is at risk.