SEO Poisoning: How Threat Actors Are Tricking AI Models like ChatGPT, Gemini, and Copilot
by Maddie Bullock

You trust your search results. And you probably trust your AI assistant, too. But what happens when both are being manipulated?
SEO poisoning—a long-standing black hat tactic where cybercriminals game search engine results to trick users into clicking malicious links—isn’t new. What is new is how threat actors are now using this same strategy to poison AI-powered tools like Gemini and Copilot. Effectively, they’re training large language models (LLMs) to resurface fake information as if it’s credible.
This is not just a clever scam targeting users. It’s a growing threat to organizations and brands, especially as people increasingly turn to LLMs for fast answers to high-stakes questions like “How do I contact customer support for [Your Brand]?”
In our recent ZeroFox Intelligence Flash Report, our expert analysts break down how this new wave of SEO poisoning works, how LLMs are being exploited, and what your security team can do to stay ahead of the threat.
Check out the report, SEO Poisoning Is Abusing LLMs, for in-depth analysis. Or, read on to learn more about the basics of SEO poisoning and why LLMs are the newest target.
What is SEO Poisoning?
SEO poisoning is a technique used by cybercriminals to manipulate search engine results and trick users into clicking on malicious links. By stuffing content with high-ranking keywords, publishing on spammy sites, or hijacking legitimate domains, threat actors make their dangerous pages appear at the top of search engines like Google and Bing.
Once a user clicks, they’re often led to:
- Phishing sites that steal credentials or PII (personally identifiable information)
- Pages that download malware
- Fake customer support portals that push scams or fraudulent charges
This tactic is far from theoretical. According to CISA’s StopRansomware Guide, SEO poisoning is a known threat vector that “leads unsuspecting users to phishing sites, malware downloads, and other cyber threats” simply by appearing trustworthy in search results.
Historically, these campaigns targeted trending topics (think tax season, concert ticket sales, or breaking news) to drive traffic. But in today’s AI-powered world, threat actors have found a new way to scale their impact: by targeting the very tools people use to summarize the web.
The AI Twist: How LLMs Are Being Abused
Search engines aren’t the only ones being gamed. LLMs like ChatGPT, Copilot, and Gemini are also being tricked into amplifying malicious content, thanks to the very tactics that make SEO poisoning so effective.
LLMs rely on massive datasets scraped from across the internet. To determine what’s trustworthy, many models weigh where information comes from just as much as what it says. That’s where threat actors are getting crafty.
Here’s how the new SEO poisoning playbook works:
- Fake contact information is packaged as a helpful Q&A or support guide.
- It’s uploaded to trusted domains like .edu and .gov sites, often as PDFs, which are easy to index but harder to monitor.
- The same content is reposted across forums like Goodreads or dumped in long URL lists on Pastebin to boost visibility.
- From there, it gets picked up by LLMs and cited as legitimate in AI-generated responses.
As the ZeroFox Intelligence team reports: “Due to the quantity of these posts, the legitimacy of the sites where they are being hosted, and the multiple environments they are being shared across, LLMs are interpreting them as ‘legitimate’ phone numbers and contact mechanisms.”
The result? A simple AI prompt like “How do I contact [Brand] customer service?” may return a fake phone number. It’s generated with confidence, but backed by poisoned sources.
Because LLMs are still learning to distinguish between reputation and manipulation, this tactic is working. It’s an evolution of traditional SEO poisoning, but now with the scale and trust of AI behind it.
Real-World Examples of SEO Poisoning: Fake Numbers, Real Consequences
This isn’t a theoretical threat or a scare tactic against using AI models. It’s happening now, and it’s working.
In our recent investigation, ZeroFox analysts searched for a contact number related to a major travel brand. One of the top search results led to a PDF file hosted on a University of Michigan share drive. The file contained falsified support details, including a fake phone number. To the average user (and even to an LLM) that looks like a credible source.
But the deception didn’t stop there.
ZeroFox uncovered dozens of similar PDFs across multiple legitimate university and government domains, including:
- intelligencestudies.utexas.edu
- mycehd.tamu.edu
- gamingcontrolboard.pa.gov
- forms.business.columbia.edu
These files were often formatted as questions, like: “How do I speak to someone at [Brand]?”
Then, they were injected into .edu and .gov upload environments, reposted across social forums like Goodreads, and added to long Pastebin URL lists designed to be indexed by search engines…and scraped by AI models.
One number appeared across multiple sources and was ultimately resurfaced by Gemini and Google as a legitimate contact method.
As ZeroFox Intelligence puts it, “By creating pages as questions, injecting them as PDFs into a mix of .gov and .edu websites, and using ‘crowd sourced’ forums like Goodreads…this campaign tricks the LLM into believing it is credible data.”
The consequence? Users call fake hotlines, provide personal information, and may suffer financial loss. All while the real brands and organizations take the reputational hit.
Why This Is a Big Deal: The Risk to Users and Brands
Like all social engineering methods, SEO poisoning has always relied on trust: trust in search engines, trust in top-ranking results, and now, trust in AI. That’s exactly what makes this new wave so concerning.
By poisoning both search indexes and the data that fuels large language models, threat actors are effectively rewriting what’s “true” online, creating AI misinformation. That has serious implications for:
- Everyday users who are increasingly turning to tools like ChatGPT or Copilot to ask high-intent questions like “What’s the number for [Company’s] product support?” When AI serves up fake data, users both get scammed and lose confidence in the tools themselves.
- Legitimate brands whose reputations are hijacked to lend credibility to fake support sites and phone numbers. Even when the scam is someone else’s doing, the customer frustration (and blame) lands squarely on the real company.
- Trusted institutions like .edu and .gov domains, which are being exploited as credibility boosters for malicious campaigns, diminishing their authority and reliability in both human and AI eyes.
The bottom line? This is more than a cybersecurity or LLM developer issue. It’s a trust crisis unfolding in real time.
When AI systems begin surfacing poisoned content as fact, the ripple effect extends far beyond individual scams. It erodes user confidence, undermines brand integrity, and chips away at the public’s ability to trust the digital tools they rely on every day.
What Your Security Team Can Do Now
As SEO poisoning tactics evolve, security teams need to shift from reactive cleanup to proactive monitoring, especially as AI tools begin to echo and amplify malicious content.
Here’s where to start:
Monitor for SEO Poisoning Indicators
Keep an eye out for suspicious .pages[.]dev domains and for long-tail keyword URLs in Pastebin lists, such as:
- how-do-I-contact-BRAND.pages.dev
- ways-to-connect-BRAND-customer-service-via-phone.pages.dev
Track uploads to public-facing platforms, especially .edu, .gov, and .help domains with open upload capabilities. A sudden spike in PDF or document uploads may signal abuse. Additionally, you can leverage a Digital Risk Protection solution to detect suspicious brand mentions, URLs, and impersonation attempts early.
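For teams that want to operationalize this, the sketch below shows one way to triage a feed of newly observed URLs (for example, URLs pulled from Pastebin lists) for the combination of brand name, support-related lure keywords, and risky hosting patterns described above. It’s a minimal illustration rather than a ZeroFox tool; the brand name, keyword list, and suspicious_urls.txt input file are hypothetical placeholders.

```python
import re
from pathlib import Path

# Hypothetical placeholders: substitute your own brand name and URL feed.
BRAND = "examplebrand"
URL_FEED = Path("suspicious_urls.txt")  # e.g., URLs pulled from Pastebin lists

# Long-tail "customer support" keywords commonly used in SEO poisoning lures.
LURE_KEYWORDS = ("contact", "customer-service", "support", "phone", "speak-to")

# Hosting patterns worth a closer look: disposable .pages.dev sites and
# open-upload environments on otherwise trusted .edu/.gov domains.
SUSPECT_SUFFIXES = (".pages.dev", ".edu", ".gov")


def is_suspicious(url: str) -> bool:
    """Flag URLs that combine the brand name, a support lure keyword,
    and a hosting pattern associated with SEO poisoning campaigns."""
    url = url.lower()
    host = re.sub(r"^https?://", "", url).split("/")[0]
    mentions_brand = BRAND in url
    has_lure = any(keyword in url for keyword in LURE_KEYWORDS)
    risky_host = host.endswith(SUSPECT_SUFFIXES) or url.endswith(".pdf")
    return mentions_brand and has_lure and risky_host


if __name__ == "__main__":
    for line in URL_FEED.read_text().splitlines():
        url = line.strip()
        if url and is_suspicious(url):
            print(f"[review] {url}")
```

A pass like this won’t catch everything, but it can narrow a large URL list down to the handful of items worth escalating to a takedown or Digital Risk Protection workflow.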
Audit LLM Mentions of Your Brand
Test popular large language models (like Copilot, Gemini, and ChatGPT) using customer-service-related prompts. If incorrect numbers or contact paths are returned, you may already be caught in a poisoned loop. A brand monitoring tool can help you proactively spot incorrect content that can lead to AI-generated misinformation.
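One way to script that audit (a minimal sketch, not a ZeroFox product feature) is to query an LLM API with your support-related prompts and flag any phone numbers that don’t match your published contact list. The example below assumes the OpenAI Python SDK with an API key configured in the environment; the model name, brand, and official number are hypothetical placeholders.

```python
import re

from openai import OpenAI  # assumes the OpenAI Python SDK and OPENAI_API_KEY are set up

# Hypothetical placeholders: your brand and the numbers you actually publish.
BRAND = "Example Travel Co"
OFFICIAL_NUMBERS = {"+18005551234"}

PROMPTS = [
    f"How do I contact {BRAND} customer service?",
    f"What's the phone number for {BRAND} support?",
]


def normalize(number: str) -> str:
    """Strip formatting so 1-800-555-1234 and +1 (800) 555-1234 compare equal.
    Assumes North American numbers for simplicity."""
    digits = re.sub(r"\D", "", number)
    return "+1" + digits[-10:] if len(digits) >= 10 else digits


client = OpenAI()
for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content or ""
    # Pull anything that looks like a phone number out of the reply.
    candidates = re.findall(r"\+?\d[\d\-\.\(\) ]{7,}\d", answer)
    for raw in candidates:
        if normalize(raw) not in OFFICIAL_NUMBERS:
            print(f"[possible poisoning] prompt={prompt!r} number={raw.strip()}")
```

If a number outside your official list keeps coming back, treat it as a sign that poisoned sources may already be feeding the model’s answers, and move on to the escalation steps below.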
Collaborate Internally and Externally
Loop in IT admins, brand teams, and third-party hosting partners when you spot poisoned content, especially if that content lives on infrastructure you don’t directly control. The faster you escalate, the easier it is to contain reputational fallout.
Get Help from Experts
ZeroFox combines threat intelligence, takedown services, and AI-driven detection to expose and disrupt threats like SEO poisoning campaigns before they spread. Our platform surfaces malicious content across search engines, the deep and dark web, forums, and public data repositories, while our in-house analysts validate and act fast to protect your brand, executives, and customers.
Interested in learning more? Book a demo with one of our experts to see how ZeroFox’s unified platform and expert intelligence services can help protect your brand from evolving AI-enabled threats.
Maddie Bullock
Content Marketing Manager
Maddie is a dynamic content marketing manager and copywriter with 10+ years of communications experience in diverse mediums and fields, including tenure at the US Postal Service and Amazon Ads. She's passionate about using fundamental communications theory to effectively empower audiences through educational cybersecurity content.
Tags: Cyber Trends, Phishing, Threat Intelligence