
Inside the Criminal Mind: How Social Engineering Attacks Really Work in the Age of AI

by Maddie Bullock

Until recently, social engineering relied on generic templates: bulk phishing emails riddled with typos and awkward grammar. Those were obvious mistakes, the kind that automatic filters, vigilant users, or basic skepticism could catch. But now, artificial intelligence has changed the game.

  • Spear-phishing at scale: With generative AI, attackers can craft hundreds or thousands of individually tailored phishing emails in minutes. Tone, title, role, even internal company jargon can be mimicked. This personalization dramatically increases click rates.
  • Hyper-realistic vishing and voicemail attacks: Deep-learning models can now clone voices and generate convincing voice prompts or voicemail messages. Threat actors have used this tactic in high-impact BEC and ransomware cases.
  • Persistent follow-up and engagement: Rather than sending a single email and moving on, AI can nurture a conversation with reminders, follow-ups, and renewed urgency, all without the fatigue a human threat actor would feel. Machines don’t need sleep.
  • Bypassing traditional filters: Because AI-generated messages are grammatically correct and contextually appropriate, they can evade spam filters configured to catch poor grammar or suspicious formatting.

Social engineering is no longer a craft of chance. Going into 2026, it has become industrialized, automated, and nearly impossible to stop without widespread awareness.

Social Engineering Breaches That Hit Hard: Real Cases, Real Losses

MGM & Caesars 2023: A casino empire brought down by social engineering

In September 2023, the cybercriminal group Scattered Spider exploited human trust—not a software flaw—to infiltrate two of America’s largest casino operators: MGM Resorts International and Caesars Entertainment. The attackers called a third-party IT vendor, posing as legitimate staff, and gained access to network credentials through vishing. From there, they moved laterally, deployed ransomware, and exfiltrated data from thousands of systems. The resulting disruption shut down casino floors, disabled room-key systems, and took down reservation infrastructure, costing MGM up to USD 84 million in lost revenue and emergency remediation costs.

This wasn’t a phishing email or a zero-day exploit. It was voice-based manipulation, using impersonation, social engineering, and human error as the entry point.

Healthcare fallout, 2024: Social engineering cripples patient services nationwide

In February 2024, Change Healthcare—a critical vendor for much of the U.S. healthcare system—was hit by a breach tied to social-engineering attack vectors. The ripple effects disrupted eligibility processes, patient claims, and drug authorizations, impacting hundreds of hospitals across the country. While follow-up reviews are ongoing, the incident underscored how social engineering against one third-party vendor can cascade into a national public-health crisis.

These high-impact breaches illustrate a clear truth: once attackers breach the human perimeter, the scope of damage—financial, operational, reputational—is limited only by imagination.

UK Retail Sector 2024–2025: Look-alike sites and multi-channel phishing campaigns

Between late 2024 and early 2025, major UK retailers faced coordinated social engineering attacks that blended spoofed delivery alerts, fake payment-verification pages, and cloned customer-service portals. In one Marks & Spencer incident, attackers used convincing look-alike domains to harvest customer and employee credentials, then attempted to pivot into internal systems. The ZeroFox Intelligence team also observed similar campaigns across the broader retail sector that combined phishing emails, SMS lures, and supplier impersonation. The result was a surge in account takeovers and customer-service fraud attempts that forced retailers to issue warnings, reset credentials at scale, and tighten verification processes across channels.

These operations capitalized on brand familiarity and seasonal shopping urgency. By manipulating trusted workflows and customer expectations, attackers turned routine interactions into high-impact fraud attempts that targeted both consumers and staff. 

Workday Breach 2025: Payroll access compromised through HR-themed phishing lures

In 2025, attackers targeted Workday customers by impersonating HR and payroll departments and directing victims to a fraudulent login page. The spoofed site captured employee credentials that were later used to attempt unauthorized changes to payroll routing details. The ZeroFox Intelligence team identified this campaign as part of a broader trend in which threat actors exploit routine HR communications to bypass technical controls and access sensitive financial systems.

The incident demonstrated how easily criminals can weaponize familiar business processes, particularly those tied to pay cycles and financial urgency. When the lure mirrors a familiar workflow, employees are more likely to comply, giving attackers a direct path to payroll data and financial manipulation.

Why Social Engineering Works (Even on Security-Savvy Professionals)

Considering the cases above, you may be asking yourself: how could people fall for these scams? Well, humans are wired to take mental shortcuts. Consider how impossible it would be to drive your car if you had to think through every muscle movement and identify every single thing you saw outside each window. This is why we fall back on heuristics, the mental rules of thumb that help us make decisions quickly.

Social engineers understand these cognitive shortcuts—especially prevalent when people are busy, tired, or under pressure—and exploit them deliberately. For example:

  • Cognitive load and fatigue: When employees are juggling deadlines, they are far less likely to scrutinize even slightly unusual emails.
  • Emotional priming: Fear, urgency, or flattery adds emotional weight that pushes victims toward action before reasoning can catch up.
  • Authority bias: We obey perceived authority or hierarchy, especially if the request is framed as urgent or confidential.
  • Social conformity: If we believe everyone else has done something, we follow suit, even without verifying.

Add to that the fact that AI-generated messages mimic tone, grammar, and even internal company context, and suddenly the “someone will notice” excuse doesn’t hold.

ZeroFox Recommendations for Defending Against Social Engineering Attacks

Whether you’re a security leader, an everyday employee, or part of a SOC team, these are the guardrails built for a world where persuasion itself is the attack vector.

  • Trust nothing at first glance. Always verify sender, source, and story. Take a moment before you click to check email addresses, domain spelling, and sender history (a short illustrative sketch of this check follows this list).
  • Interrogate emotion before action. If a message triggers fear, urgency, flattery, or greed, pause. Ask: What am I being persuaded to do, and why now?
  • Verify authority, question urgency. Just because it appears to come from leadership doesn’t mean it’s real. Use out-of-band verification like a known phone number, a separate chat thread, or direct in-person confirmation.
  • Automation helps, awareness wins. AI may power phishing, but awareness, context, and human intuition remain the strongest defense.
  • Disruption is the new deterrence. Share suspicious emails, report impersonations, and escalate early. The sooner a suspicious asset is flagged, the less damage it can cause.
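
To make the "check the domain spelling" advice concrete, here is a minimal Python sketch that flags inbound sender domains which are close to, but not exactly, a trusted domain. The TRUSTED_DOMAINS set, the 0.85 similarity threshold, and the sample addresses are assumptions for illustration only; real mail-security tooling layers in many more signals (SPF, DKIM, DMARC, sender history) before reaching a verdict.

```python
# Minimal sketch: flag sender domains that closely resemble, but do not match,
# a trusted domain. Allowlist, threshold, and sample addresses are assumptions.
from difflib import SequenceMatcher
from email.utils import parseaddr

TRUSTED_DOMAINS = {"example.com", "example-payroll.com"}  # hypothetical internal domains

def check_sender(from_header: str) -> str:
    """Return a rough verdict for the From: header of an inbound message."""
    _display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""

    if domain in TRUSTED_DOMAINS:
        return "trusted domain"

    # Close-but-not-exact matches (examp1e.com, exarnple.com) are classic look-alikes.
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, domain, trusted).ratio() >= 0.85:
            return f"suspicious: '{domain}' resembles trusted domain '{trusted}'"

    return "external domain: verify out of band before acting"

if __name__ == "__main__":
    for header in (
        "Payroll Team <hr@example.com>",
        "Payroll Team <hr@examp1e.com>",       # digit '1' swapped in for 'l'
        "IT Support <helpdesk@exarnple.com>",  # 'rn' masquerading as 'm'
    ):
        print(header, "->", check_sender(header))
```

Even a check this crude catches the two spoofed addresses in the example, which is exactly the pause-and-verify habit the list above describes.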

How Security Teams Can Build a Stronger Human Firewall

  • Phishing-resistant MFA is necessary but not sufficient. Use adaptive authentication, but combine it with behavioral analysis and anomaly detection—especially for high-privilege accounts (a minimal sketch of this idea follows this list).
  • Simulations are useful, but contextual training is critical. Generic phishing tests don’t replicate the emotional tricks used in real attacks. Train around scenarios that involve urgency, seemingly legitimate requests, and impersonation.
  • Adopt external attack surface monitoring. Impersonation often begins far outside your network, on social media, with third-party vendors, or on forum pages. Monitoring those vectors is mission-critical.
  • Establish a “verification-first” culture. Normalize double-checks for payments, link clicks, and unusual requests. Reward hesitation and celebrate skepticism.
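
As a sketch of how adaptive authentication can pair with anomaly detection, the Python below keeps a per-user baseline of login context and flags privileged logins from an unseen country/device combination. The event fields, the alerting rule, and the response are illustrative assumptions, not a description of any particular product; real deployments layer in impossible-travel checks, device posture, and risk scoring.

```python
# Minimal sketch: baseline each user's login context and flag anything new
# for high-privilege accounts. Fields and rules are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class LoginEvent:
    user: str
    country: str
    device_id: str
    privileged: bool

class LoginBaseline:
    def __init__(self) -> None:
        # user -> set of (country, device_id) pairs previously observed
        self._seen: dict[str, set[tuple[str, str]]] = defaultdict(set)

    def assess(self, event: LoginEvent) -> str:
        context = (event.country, event.device_id)
        first_time = context not in self._seen[event.user]
        self._seen[event.user].add(context)

        if first_time and event.privileged:
            # A real system would trigger step-up verification, not just report.
            return "ALERT: new country/device for a privileged account"
        if first_time:
            return "notice: new login context, keep monitoring"
        return "ok: matches baseline"

if __name__ == "__main__":
    baseline = LoginBaseline()
    for event in (
        LoginEvent("admin.jane", "US", "laptop-123", privileged=True),
        LoginEvent("admin.jane", "US", "laptop-123", privileged=True),
        LoginEvent("admin.jane", "RO", "unknown-device", privileged=True),
    ):
        print(event.user, event.country, "->", baseline.assess(event))
```

Note that the very first login also trips the rule; in practice you would warm up the baseline from historical logs before turning on alerts.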

Why Social Engineering Awareness Matters More Than Ever

According to the latest report from the Anti-Phishing Working Group (APWG), Q2 2025 alone saw over 1.13 million reported phishing attacks worldwide. That’s a 13% increase from the previous quarter.

Meanwhile, public-sector and vendor-targeting social engineering operations continue to escalate, often with ripple effects across entire industries. Worse yet, these are not isolated incidents. They are symptoms of an evolving ecosystem, one where trust is traded like currency and patience is weaponized.

If you think your organization is too large, too secure, or too sophisticated to be targeted, think again. The most dangerous false assumption in cybersecurity is believing you are not interesting enough to attack. Social engineers do not hunt for difficulty. They hunt for opportunity.

Want the full playbook?

Download the Detective’s Field Guide to Social Engineering to learn how ZeroFox uncovers patterns, dismantles impersonations, and helps organizations turn detection into disruption.

Maddie Bullock

Content Marketing Manager

Maddie is a dynamic content marketing manager and copywriter with 10+ years of communications experience in diverse mediums and fields, including tenure at the US Postal Service and Amazon Ads. She's passionate about using fundamental communications theory to effectively empower audiences through educational cybersecurity content.

Tags: Cyber Trends, Phishing
