ZeroFox Intelligence Flash Report - SEO Poisoning Abusing LLMs
by Alpha Team

Product Serial: F-2025-07-24a
TLP:CLEAR
In this Flash report, ZeroFox researchers investigate an escalation in Search Engine Optimization (SEO) poisoning campaigns that use novel tactics, techniques, and procedures (TTPs) to abuse artificial intelligence (AI) large language models (LLMs) to increase the credibility of malicious search results.
Standing Intelligence Requirements
For the most up-to-date list of ZeroFox’s Intelligence Requirements, please visit:
https://cloud.zerofox.com/intelligence/advisories/14956
Link to Download
View the full report here
Key Findings
- ZeroFox has identified an escalation in Search Engine Optimization (SEO) poisoning campaigns using novel tactics, techniques, and procedures (TTPs) to abuse artificial intelligence (AI) large language models (LLMs) to increase the credibility of search results.
- ZeroFox assesses that threat actors are successfully tricking LLMs into treating fraudulent contact numbers and support channels as credible by creating pages formatted as questions, injecting them as PDFs into legitimate sites, and reposting them on paste sites such as Pastebin. The threat actors are deliberately exploiting .gov and .edu domains because of their reputation for trustworthiness. The same content is also being mirrored as comments on crowd-sourced forums such as Goodreads and on blog-style sites such as the ZohoDesk knowledge base (see the illustrative sketch following these findings).
- These campaigns are likely to ultimately lead users to divulge personally identifiable information (PII) and suffer monetary losses, as well as cause reputational damage to the original brand.
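
The pattern described above, question-formatted pages carrying planted "support" phone numbers on high-reputation domains, lends itself to a simple content heuristic. The minimal Python sketch below is illustrative only and is not ZeroFox tooling; the regexes, function name, and sample text are assumptions chosen for demonstration, not indicators drawn from the report.

```python
"""
Illustrative heuristic (assumption, not ZeroFox tooling): flag text that
pairs a question-style "support" heading with an embedded phone number,
the pattern this report describes for content planted on trusted domains.
"""
import re

# Hypothetical patterns, for demonstration only.
QUESTION_HEADING = re.compile(
    r"(?im)^\s*(how\s+do\s+i|what\s+is)\b.*\b(contact|support|customer\s+service)\b.*\?$"
)
PHONE_NUMBER = re.compile(r"(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")


def looks_like_seo_poisoning(page_text: str, source_domain: str) -> bool:
    """Return True when a question-formatted support heading and a phone
    number co-occur in content served from a high-reputation domain
    (e.g., .gov or .edu), matching the TTP described above."""
    trusted = source_domain.endswith((".gov", ".edu"))
    has_question = bool(QUESTION_HEADING.search(page_text))
    has_number = bool(PHONE_NUMBER.search(page_text))
    return trusted and has_question and has_number


if __name__ == "__main__":
    sample = "How do I contact Example Airlines customer service?\nCall 1-800-555-0123 now."
    print(looks_like_seo_poisoning(sample, "docs.example.edu"))  # True
```

In practice, a heuristic like this would be one signal among many; defenders would still need to verify flagged numbers against the brand's official contact channels before acting.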
Tags: tlp:clear, threat actor, other