
The Long Con Explained: How Romance Scams and AI-Enabled Social Engineering Steal Millions

by Maddie Bullock
8 minute read

How emotional manipulation became the most profitable scam on the internet and why AI is accelerating the crime wave

Long-con romance scams work by building emotional trust over weeks or months, then using AI-generated messages, personas, and deepfake media to steal money or sensitive information. 

Let’s see how it works in practice.

Late one evening, a civilian contractor received a friendly message from a “U.S. Army captain” thanking her for supporting a veterans’ fundraiser she shared online. He was polite and thoughtful. He sent photos from a dusty base overseas. The messages moved to WhatsApp, followed by late-night conversations, then video clips from “secure comms” where the connection always seemed too unstable for a live call.

Over months, the relationship deepened. He remembered birthdays. He talked about returning home. Eventually, he needed help transferring funds before his redeployment. The link he sent looked official and even included a two-factor prompt.

By morning, her bank account was drained. The year-long relationship was fiction. Her losses, however, were very real.

This is the long con. It is the slow-burn variant of social engineering that does not rely on urgency or fear but on patience, familiarity, and curated emotional connection. And with generative AI powering scripts, photos, voices, and round-the-clock engagement, the long con is scaling faster than most organizations expect.

What Makes the Long Con Different from Other Social Engineering Attacks

Most phishing campaigns rely on quick reactions while the long con relies on a relationship.

Short-game scams try to trigger immediate action. Long-game scams patiently build trust and then wait for the victim to lower their guard. They unfold across weeks, months, or even years. The attacker’s goal is not to get a click today. It is to cultivate emotional credibility until verification feels unnecessary and skepticism fades entirely.

Data from the FBI’s Internet Crime Complaint Center (IC3) shows that romance scams are consistently among the highest-loss cybercrimes, with victims losing more than one billion dollars per year worldwide. This figure is widely believed to be underreported, particularly in public-sector and military communities where victims may feel embarrassed or reluctant to disclose what happened.

The ZeroFox Intelligence team has documented a rise in long-term, relationship-driven scams that target individuals connected to military and public-sector communities.

The long con works because it targets trust, not technology. And AI has transformed that trust into a scalable asset for criminals.

Short Con vs Long Con: Two Different Psychological Traps

Not all social engineering attacks look alike. Short-term attacks rely on shock and speed. Long-term attacks rely on connection and credibility.

Short Con vs Long Con Comparison

Short Con | Long Con
Built on urgency and pressure | Built on patience and emotional investment
Crafted to spark a fast reaction | Designed to build attachment over weeks or months
Often ends in a single click | Ends in high-value theft or deep compromise
Relies on cognitive overload | Relies on emotional bonding
Usually short-term contact | Long-term sustained engagement
Manipulates stress | Manipulates vulnerability

The two attacks require different defenses. The short con demands skepticism. The long con demands awareness of emotional manipulation and attention to identity authenticity.

Real-World Long Cons That Made Headlines

Long-con social engineering does not always make front-page news. Victims are often isolated and ashamed, and organizations may not classify these incidents as cyberattacks. Still, several cases reveal how dangerous, costly, and well-organized these scams have become.

CryptoRom Networks Targeting Professionals Worldwide

CryptoRom is one of the most documented romance-investment hybrid scams in recent years. It often begins with a casual message on a dating app. The persona is warm and financially savvy. Over time, the scammer introduces the victim to an exclusive crypto trading platform. Profits appear real. The dashboard updates are convincing. But behind the scenes, every aspect of the investment is fabricated.

Victims around the world have collectively lost hundreds of millions of dollars. Many believed they were building both a relationship and a financial future. CryptoRom demonstrates how emotional manipulation and financial fraud blend into a single long-term scheme. It also illustrates how global criminal networks coordinate scripts, personas, and fake platforms at scale.

Military Romance Scam Rings Identified by the U.S. Army CID

For over a decade, CID has reported thousands of complaints annually from people who thought they were speaking with deployed U.S. service members. Scammers claim they cannot access video calls for security reasons, cannot reach their bank accounts due to restrictions, or need help paying customs fees that do not exist.

Attackers use stolen or AI-generated photos to impersonate uniformed individuals, polished scripts that mirror deployment realities, and carefully crafted emotional narratives. These scams erode trust in military institutions and exploit the public’s instinct to support service members.

The OneCoin “Crypto Queen” Fraud Built on Personal Charisma

While not a classic romance scam, OneCoin’s global fraud relied heavily on emotional connection and charismatic influence. Victims believed in the persona of Ruja Ignatova and the empowering narrative she built around a nonexistent cryptocurrency. Behind the scenes, OneCoin was supported by a simple private database rather than a legitimate blockchain.

Experts estimate the victims of this scam could have lost as much as $16 billion worldwide. This case shows that long-con psychology works at individual and mass scales. Trust, once earned, becomes leverage.

Inside the Emotional Manipulation Playbook

Every long con follows a predictable psychological arc.

  1. Fast intimacy: The scammer expresses an instant, deep connection to the victim. It feels special and rare, which makes skepticism fade.
  2. Manufactured vulnerability: They share personal struggles and secrets that make the victim feel trusted and valued.
  3. Isolation: They discourage communication outside of chosen channels. They claim logistical or personal constraints to limit verification.
  4. The ask: The request for money or personal information is framed as temporary or compassionate. It feels like an extension of the relationship, not a financial crime.

AI enhances every step. It writes emotional scripts, predicts victim responses, and steers conversations with precision. And the manipulation works because it aligns with human nature. People want to help. Criminals twist that instinct into a profit engine.

AI as the Emotional Engine of the Long Con

Above all, the long con thrives on consistency. Scammers must maintain emotional continuity, remember details, and respond fast enough to sustain the illusion. Before generative AI, these constraints limited how many victims criminals could manage. AI removed those limits.

  • AI writes convincing emotional scripts. Threat actors scrape social profiles, dating app bios, and LinkedIn histories, then feed them into models that produce warm, customized communication.
  • AI creates entire identities, not just messages. Persona kits now include AI-generated images, fake military IDs, uniform photos, and short “video messages” that mimic bad connections to avoid live proof.
  • AI sustains dozens of conversations at once. Chatbots maintain rapport, escalate intimacy, defuse skepticism, and follow pacing scripts that tell attackers exactly when to introduce a request for money.
  • AI eliminates classic red flags. Bad grammar, inconsistent tone, and poor spelling once acted as natural deterrents. Those days are over.

In short, AI both speeds up long-con operations and perfects them. It gives criminals unlimited emotional stamina and the ability to scale intimacy as easily as sending a newsletter.

Why the Long Con Succeeds

Victims often wonder how they missed the signs. The truth is that the psychological mechanisms behind the long con are universal. They affect people of all backgrounds and education levels.

  • Cognitive load reduces scrutiny: People who are overwhelmed or fatigued are more susceptible.
  • Emotional priming speeds compliance: Romance scammers create conditions that suppress analytical thinking.
  • Authority bias makes military personas believable: Uniforms and rank symbols short-circuit rational evaluation.
  • Social conformity reinforces legitimacy: Victims see familiar stories and assume other people have experienced similar things.

The long con succeeds precisely because it feels real. It does not resemble a cyberattack. It resembles a relationship.

What To Do If Someone You Know May Be in a Long-Term Scam

Long-con victims rarely realize they are victims until the relationship breaks. Friends, coworkers, and family members often see warning signs first. Approaching the conversation requires tact, empathy, and facts.

Start with concern, not accusation

The Federal Trade Commission recommends beginning with open questions rather than confrontation. Ask if they have ever video-chatted with the person, whether the individual has asked for money, or if anything about the story seems inconsistent. These questions invite reflection rather than resistance.

Look for common red flags

Law enforcement and military agencies list several indicators:

  • Refusal to join live video calls
  • Claims of deployment or restricted access
  • Requests for secrecy
  • Requests for money tied to customs, leave, or emergencies
  • Profile photos that appear across multiple scam accounts

Document everything

FBI guidance encourages preserving messages, screenshots, transaction details, and profile links. Documentation helps victims recognize inconsistencies and supports reporting.

Report impersonations or fraud

Depending on the scam, victims should consider reporting to the FBI’s Internet Crime Complaint Center (IC3), the Federal Trade Commission, the U.S. Army CID for military impersonations, or the platform where the contact began.

Encourage a temporary communication pause

Suggest that they avoid sending money or personal information until identity can be verified through a trusted channel.

Offer support without shame

Victims often feel embarrassed. Remind them that they were manipulated by sophisticated emotional tactics, not naive decision-making.

Helping someone disentangle from a long-con scam can be emotional for all parties. Stay objective: gather facts, examine patterns, and guide potential victims toward clarity at their own pace.

How ZeroFox Detects and Disrupts Long Cons Targeting Organizations and Executives

ZeroFox approaches the long con like a detective following a pattern:

1. Persona footprint tracking

Our analysts map reused profile photos, usernames, email patterns, and linguistic markers across social platforms and dark web forums.
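To make the idea concrete, here is a minimal, hypothetical sketch of one ingredient of persona footprint tracking: flagging accounts that appear to reuse the same profile photo or a near-identical username. The `average_hash` perceptual hash, the thresholds, and the account fields are illustrative assumptions for this example, not ZeroFox’s actual detection logic.

```python
# Illustrative sketch only: cluster candidate accounts by reused profile
# photos (perceptual hash) and similar usernames. Thresholds and account
# fields are hypothetical, not a real ZeroFox implementation.
from difflib import SequenceMatcher


def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale grid (values 0-255).

    Each bit is 1 if the pixel is brighter than the image's mean,
    so re-encoded or lightly edited copies of a photo hash similarly.
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits


def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def likely_same_persona(acct_a, acct_b, photo_threshold=8, name_threshold=0.8):
    """Flag two accounts as a probable reused persona when their photo
    hashes are near-identical or their usernames are highly similar."""
    photo_match = hamming(acct_a["photo_hash"], acct_b["photo_hash"]) <= photo_threshold
    name_match = (
        SequenceMatcher(None, acct_a["username"], acct_b["username"]).ratio()
        >= name_threshold
    )
    return photo_match or name_match
```

In practice, an analyst pipeline would combine many weak signals like these (photos, handles, email patterns, linguistic markers) rather than relying on any single match.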

2. Impersonation detection at scale

AI-powered detection identifies look-alike accounts impersonating service members, government officials, or defense contractors.

3. Synthetic media forensics

ZeroFox flags AI-generated images, staged uniform photos, and deepfake videos that often accompany romance-scam personas.

4. Takedown orchestration

Once identified, ZeroFox works with platforms, registrars, and hosting providers to remove fraudulent accounts and domains quickly and efficiently.

5. Community-level threat intelligence

Because these scams target military and public-sector groups, ZeroFox distributes relevant intelligence to agencies and organizations that rely on early warning.

Defending against the long con requires visibility into the open web, dark web, and social platforms plus the ability to dismantle impersonations before they root themselves in a victim’s life.

Want the full investigation?

Download The Detective’s Field General Guide to Social Engineering to see how ZeroFox uncovers patterns, dismantles impersonations, and turns detection into disruption.

Maddie Bullock

Content Marketing Manager

Maddie is a dynamic content marketing manager and copywriter with 10+ years of communications experience in diverse mediums and fields, including tenure at the US Postal Service and Amazon Ads. She's passionate about using fundamental communications theory to effectively empower audiences through educational cybersecurity content.

Tags: Cyber Trends, Digital Risk Protection
