Blog

Shadow AI: Continuous Threat Exposure in the AI Era

by Maddie Bullock
Shadow AI: Continuous Threat Exposure in the AI Era
9 minute read

First it was shadow IT: employees signing up for unapproved SaaS apps. Now it’s shadow AI: employees turning to tools like ChatGPT and Copilot to get work done faster, often with critical company data in tow. As Forbes points out in its coverage of shadow AI, this issue includes everything from pasting sensitive data into generative AI platforms to relying on unsanctioned coding assistants.

That hidden productivity boost comes at a price, and the risks are no longer theoretical. According to IBM’s Cost of a Data Breach Report 2025, 20% of organizations suffered a breach tied to shadow AI last year. Those incidents carried an added $670,000 in breach costs, often because they exposed customer PII and intellectual property across unmonitored environments.

While AI is novel, the concept of the threat is not. As Josh Mayfield, Senior Director of Product Marketing at ZeroFox points out, “We saw the same thing with the cloud 15 years ago. The desire to implement the technology outweighed the perceived risk. We later course corrected and started to improve cloud infrastructures and all the security tools flooded the market.”

He continues,  “With cloud security, it was top down. But the interesting thing about AI is it’s bottom up. It doesn’t require expertise. Back in the day when you had a cloud instance, it was a dev that was working with it. There was always a technical doorway that the technology came through. With AI, there’s thousands of doorways that can open into your environment.” 

What looks like innovation under the radar quickly becomes a liability. This is not simply an IT issue. Shadow AI poses a governance challenge that can ripple into compliance, reputation, and at the end of the day, shareholder value. The message is clear: ignoring shadow AI means ignoring a rapidly growing exposure vector. Organizations need visibility, validation, and continuous remediation, the core of a Continuous Threat Exposure Management (CTEM) strategy.

What is Shadow AI?

Shadow AI is the use of unapproved or unsanctioned AI tools inside an organization creating hidden cybersecurity risks that expand the attack surface. Just as shadow IT described employees spinning up cloud apps without IT’s knowledge, shadow AI is the same behavior applied to generative AI and machine learning. And employees are using AI at an increasing rate, with the US National Cybersecurity Alliance (NCA) reporting that 38% of employees share sensitive data with AI platforms without employer approval. 

Examples include:

  • Pasting sensitive customer data into ChatGPT or Gemini to draft emails or analyze trends
  • Developers using unvetted coding assistants to generate production code
  • Marketers uploading customer lists into AI-driven content platforms
  • Business units relying on AI analytics tools without security review

With the push to utilize AI systems at many organizations, using an AI tool may look like a productivity win for employees. In reality, unsanctioned AI tools create blind spots for IT and security teams. According to IBM’s 2025 report, 63% of breached organizations lack AI governance policies, and 97% of AI-related breaches happened in systems without proper access controls. 

The result? Shadow AI isn’t just making individual shortcuts, it’s creating invisible attack surfaces across enterprises. Think about unmonitored data leaving the perimeter, AI supply chain vulnerabilities through plug-ins and APIs, and sensitive information feeding back into external models with no oversight. In other words, an expanded attack surface your adversaries are eager to exploit.

Why Shadow AI is a Cybersecurity Risk

Employees likely brush off shadow AI as a harmless shortcut. But it’s a blind spot with serious consequences. As Josh notes, “AI can’t really do anything meaningful if you don’t give it something to work with, which is all of your sensitive data.” 

So, people don’t feel they’re sharing sensitive information because they aren’t uploading corporate documents. But in the information economy, what’s most useful to a learning language model (LLM) is your inputs.

Josh explains, “You gave your ideas over, and intellectual property is going to be the most valuable resource in the coming decade. Proprietary information that you have is going to be the one thing that an AI model does not have. We’re already giving it away through these shadow use cases. So someone may even be following organizational policy to the letter, but just the interaction itself is a learning mechanism that we’re handing over to the AI.”

Security Risks with Shadow AI

When employees use unapproved AI tools, they introduce a range of shadow AI cybersecurity risks that create a perfect storm for enterprises:

  • Data exfiltration: Sensitive customer information, financial records, or proprietary code can slip outside the perimeter the moment it’s pasted into an AI tool. And because many AI models memorize their training data, threat actors can sometimes coax that information back out later.
  • Compliance violations: Regulations don’t care if exposure was accidental. Unsanctioned AI use can instantly put your organization out of compliance, triggering fines and reputational fallout.
  • Expanded attack surface: AI plug-ins, APIs, and SaaS-based models open new entry points that attackers can exploit, especially when IT doesn’t even know they’re in use. Some attackers even hide malicious instructions inside ordinary documents (a tactic called prompt injection), which can trick AI systems into leaking data or rewriting outputs.
  • Weaponized AI: The same tools employees use to boost productivity are also available to adversaries, who are leveraging generative AI to automate phishing, impersonation, and fraud at scale.
  • Poisoned knowledge: If attackers insert forged or malicious content into the sources your AI systems rely on, it can spread bias, misinformation, or dangerous outputs at scale.
  • Embeddings reversal: Even when sensitive data is converted into embeddings, attackers may be able to reverse-engineer those mathematical representations and reconstruct the original information—like piecing a shredded document back together from confetti.
  • Backdoored models: Fine-tuned or third-party AI models can carry hidden triggers that only activate under specific conditions, causing them to leak data, sabotage outputs, or spread disinformation, essentially acting as sleeper agents inside your systems.

The result is a growing category of “unknown unknowns”. AI is like a water supply: poison just a few sources of knowledge, and the contamination can spread everywhere. These are exposures your security team can’t monitor, measure, or remediate with traditional tools. 

Josh reminds us, “With AI, it’s a separate entity crunching our data and our information, our ideas. Making sure there’s security around that interaction is the most important thing.” Without visibility, organizations are effectively handing adversaries a new attack surface to exploit.

Case Study: Samsung’s Shadow AI Wake-Up Call

In April 2023, Samsung quietly lifted a ban on generative AI tools only to trigger a security crisis shortly after. Amid engineers' attempts to modernize development workflows, they uploaded proprietary code into ChatGPT, inadvertently exposing internal IP to external servers. The company’s alarm bells sounded when multiple instances of source code leakage surfaced, prompting swift action.

The next day, Samsung issued an internal memo banning employee use of AI tools like ChatGPT, Google Bard, and Bing-based generative services on company devices and networks. The memo referenced the accidental code leak and emphasized the difficulty of retracting or controlling data once it leaves internal systems—even urging that failure to comply could result in disciplinary action.

This incident was more than a PR blip. It spotlighted the growing blind spot created by shadow AI. Despite its promise to boost productivity, Samsung’s case shows how unchecked use of AI without governance can bypass every control in place. Hours of unmonitored interaction with external AI models can expose sensitive IP, breach compliance policies, and erode enterprise trust. It’s a lesson in why organizations need structured oversight, to continuously identify, prioritize, validate, and remediate AI-related risks before they escalate.

Get the external cybersecurity interactive guide.

How CTEM Addresses Shadow AI

Traditional security tools weren’t built to spot the hidden use of unapproved AI. That’s where Continuous Threat Exposure Management (CTEM) comes in. CTEM is designed for exactly this problem: the exposures you don’t know about until it’s too late. By continuously identifying, validating, and remediating risks, CTEM closes the gap between shadow AI activity and security oversight.

Here’s how the CTEM cycle applies to shadow AI:

  • Identify: Surface unapproved AI usage and data leaving the perimeter, from sensitive prompts in generative AI tools to risky plug-ins and integrations.
  • Prioritize: Assess which exposures matter most—like compromised PII, intellectual property, or API vulnerabilities—and rank them against business impact.
  • Validate: Test how attackers could exploit shadow AI inputs or outputs, ensuring you’re focusing on real threats, not noise.
  • Remediate: Take swift action, from disabling unapproved tools to removing leaked data and neutralizing impersonation or phishing campaigns fueled by AI.
  • Monitor continuously: Because shadow AI isn’t a one-time problem. New tools, plug-ins, and use cases emerge daily, and CTEM ensures you’re always a step ahead. This is especially critical as many organizations experiment with AI agents that can take direct actions in systems. With too much power and too little oversight, these agents create systemic risk if hijacked. CTEM helps surface and control those pathways before they spiral.

With ZeroFox, CTEM is powered by a unified platform that combines Digital Risk Protection, Threat Intelligence, and External Attack Surface Management. That means not just visibility, but the ability to act: analyst-vetted alerts, proactive takedowns, and real-world disruption of adversary infrastructure.

Get a  ZeroFox  platform demo.

How to Combat Shadow AI at Your Organization

Shadow AI is a fast-moving risk that’s already costing enterprises money, data, and trust. Employees may see unsanctioned AI tools as shortcuts or necessary tools. But without oversight, they quickly become liabilities, exposing sensitive data and creating openings adversaries can exploit. However, there are concrete steps you can take today to reduce the risk:

  • Establish clear AI governance policies: Define what tools are approved, how data can be used, and where the red lines are.
  • Educate employees: Make sure teams understand the risks of pasting sensitive data into public AI models.
  • Monitor for unapproved AI usage: Gain visibility into which tools are being used across your network and supply chain.
  • Secure the AI supply chain: Vet plug-ins, APIs, and third-party AI services before they’re adopted.
  • Adopt a CTEM framework: Continuously identify, validate, and remediate hidden AI exposures before adversaries exploit them.

However, as Josh puts plainly, “Make sure your data security is locked down. Certain things just can’t be shared—with another person or an AI. Make sense of all the different ways the data can be used or culled.”

Then, once your data policies are in place, “monitor for triggers, because at some point they will be violated. It could be that it’s not even the end user who’s doing it, but the AI is piping data somewhere else.”

The key message? Organizations that act now with AI governance—including shadow AI cybersecurity strategies—will be the ones that innovate securely, protect customer trust, and keep adversaries on the defense. 

With a helpful analogy, Josh shares some final thoughts: “History offers a useful comparison [to shadow AI]. In the early days of electricity, improvised wiring in buildings sparked fires until standards and grids emerged.” Shadow AI is at the same turning point. Governance and visibility are the safeguards that prevent sparks and unlock AI’s real promise.

Ready to shine a light on your organization’s expanding attack surface? See how ZeroFox delivers.

Maddie Bullock

Content Marketing Manager

Maddie is a dynamic content marketing manager and copywriter with 10+ years of communications experience in diverse mediums and fields, including tenure at the US Postal Service and Amazon Ads. She's passionate about using fundamental communications theory to effectively empower audiences through educational cybersecurity content.

Tags: Cyber Trends

See ZeroFox in action