Social Engineering · 12 min read

Understanding Social Engineering Fraud: The Psychology Behind the Scam

Social engineering fraud doesn't start with malware. It starts with a feeling. Urgency, trust, curiosity. Here's how attackers weaponize human psychology, and what the research actually says about why it works.

In early 2022, an engineer at Sky Mavis, the studio behind Axie Infinity, received a LinkedIn message about a job opportunity. The recruiter seemed legitimate. The company seemed real. The engineer went through multiple rounds of interviews, and when the final offer arrived as a PDF, he opened it. That document contained spyware.

Within weeks, North Korea's Lazarus Group had used that foothold to compromise four of Sky Mavis's validator nodes on the Ronin Bridge, plus a fifth through a stale permission that had never been revoked. On March 23, 2022, they drained 173,600 ETH and 25.5 million USDC. The total: roughly $625 million. Nobody noticed for six days.

No smart contract was vulnerable. No private key was brute-forced. The largest crypto hack in history at that point started with a fake job listing and a PDF. The attacker didn't need a zero-day. They needed one engineer to feel flattered by a recruiter's attention.

That's social engineering fraud. And it's the most underestimated attack vector in cybersecurity.

What Social Engineering Fraud Actually Is (and Isn't)

Social engineering fraud is the act of manipulating people, not systems, into performing actions or handing over information that benefits an attacker. It's not hacking in the Hollywood sense. There's no green terminal scrolling through binary. It's a conversation. A well-crafted email. A phone call that feels just slightly off.

Robert Cialdini, the psychologist whose 1984 book Influence essentially wrote the playbook that modern social engineers follow (knowingly or not), identified six principles that govern human compliance: reciprocity, commitment and consistency, social proof, authority, liking, and scarcity. Every successful social engineering scam exploits at least two of these simultaneously.

The FBI's Internet Crime Complaint Center reported that business email compromise, one specific flavor of social engineering, accounted for $2.9 billion in losses in 2023 alone. That's more than ransomware. More than any other cybercrime category. And those are just the cases people reported.

The Kill Chain: How Social Engineering Scams Actually Work

Attackers don't improvise. They follow a structured methodology that can be broken into distinct phases, what security practitioners call a "kill chain." Understanding this chain is the first step to disrupting it.

Phase 1: Reconnaissance

Before an attacker ever contacts you, they've already spent hours (sometimes weeks) studying you. This is the OSINT (Open Source Intelligence) phase. They're on LinkedIn, X, GitHub, Telegram groups, on-chain explorers, and company websites. They're building a dossier.

What they're looking for: organizational hierarchy (who reports to whom), communication patterns (does your CEO send Telegram messages at 11 PM?), personal interests (did you just tweet about a conference?), and technical details (what tools does your team use?).

In crypto, on-chain data makes this phase devastatingly effective. An attacker can see your wallet balances, transaction history, governance votes, and protocol interactions. They know what you hold, where you hold it, and when you move it.
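To make the on-chain half concrete, here's a minimal sketch of passive reconnaissance, assuming the public Etherscan API. The API key and target address are hypothetical placeholders; a real attacker runs this across every chain you touch and joins it with off-chain identity data.

```python
import requests

# Minimal sketch of passive on-chain reconnaissance via the public
# Etherscan API. The API key and target address are hypothetical placeholders.
ETHERSCAN_API = "https://api.etherscan.io/api"
API_KEY = "YOUR_API_KEY"
TARGET = "0x0000000000000000000000000000000000000000"

def recent_transactions(address: str, limit: int = 25) -> list[dict]:
    """Fetch the target's most recent normal transactions, newest first."""
    resp = requests.get(ETHERSCAN_API, params={
        "module": "account",
        "action": "txlist",
        "address": address,
        "sort": "desc",
        "page": 1,
        "offset": limit,
        "apikey": API_KEY,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json().get("result", [])

# Crude behavioral profile: who the target interacts with, and how often.
counterparties: dict[str, int] = {}
for tx in recent_transactions(TARGET):
    other = tx["to"] if tx["from"].lower() == TARGET.lower() else tx["from"]
    counterparties[other] = counterparties.get(other, 0) + 1

print("Most frequent counterparties (protocols, exchanges, or peers):")
for addr, count in sorted(counterparties.items(), key=lambda kv: -kv[1])[:5]:
    print(f"  {addr}: {count} interactions")
```

Everything that script reads is public by design. The safe assumption is that this profile already sits in an attacker's dossier.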

Phase 2: Pretexting

This is where the story gets written. The attacker constructs a believable context, a pretext, for making contact. It might be a fake job recruiter, a protocol team member reaching out about an "integration," a journalist requesting a comment, or a fellow developer asking about a shared repository.

The key insight from social psychology research is that pretexting works because of what Daniel Kahneman calls "System 1" thinking: fast, automatic, intuitive processing. We don't critically evaluate every incoming message. We pattern-match. If the pretext matches a known pattern (boss sends urgent request, recruiter reaches out on LinkedIn), we default to compliance.

Phase 3: Engagement and Exploitation

Once rapport is established, the attacker makes their move. This could mean getting you to click a link that installs a credential harvester, convincing you to share a screen during a "debugging session," extracting just enough information to bypass your 2FA, or persuading you to sign a transaction on a spoofed interface.
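That last case is worth making concrete. An ERC-20 approve call always starts with the same four-byte selector (0x095ea7b3, the first bytes of keccak256 of the function signature), so a spoofed "verification" transaction can be unmasked by reading the raw calldata before signing. A minimal sketch, using hypothetical calldata:

```python
# Minimal sketch of inspecting raw calldata before signing. The selector for
# ERC-20 approve(address,uint256) is the first 4 bytes of
# keccak256("approve(address,uint256)") = 0x095ea7b3.
APPROVE_SELECTOR = "095ea7b3"
MAX_UINT256 = 2**256 - 1

def inspect_calldata(calldata: str) -> str:
    data = calldata.lower().removeprefix("0x")
    if not data.startswith(APPROVE_SELECTOR):
        return "Not an approve() call; check the selector against known ABIs."
    # ABI encoding: two 32-byte words follow the 4-byte selector.
    spender = "0x" + data[8:72][-40:]   # last 20 bytes of word 1
    amount = int(data[72:136], 16)      # word 2
    if amount == MAX_UINT256:
        return f"DANGER: unlimited allowance granted to {spender}"
    return f"approve({spender}, {amount})"

# Hypothetical calldata a spoofed "verify your wallet" prompt might produce:
calldata = (
    "0x095ea7b3"
    "000000000000000000000000deadbeefdeadbeefdeadbeefdeadbeefdeadbeef"
    "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"
)
print(inspect_calldata(calldata))
# -> DANGER: unlimited allowance granted to 0xdeadbeef...
```

Many wallets now surface a version of this decoding in the confirmation prompt. The point is that the trap is legible, if something looks before the signature happens.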

The attack itself is often shockingly brief. According to research by Proofpoint, the median time from initial contact to credential theft in targeted phishing campaigns is under three minutes. Attackers deliberately compress the decision window. The less time you have to think, the more likely you are to rely on emotional shortcuts.

Phase 4: Extraction and Erasure

Funds are drained, data is exfiltrated, and the attacker disappears. In crypto, this is instant and irreversible. The stolen assets get routed through mixers, cross-chain bridges, and freshly deployed smart contracts designed to obscure the trail. By the time you realize what happened, the money is three hops deep into an obfuscation pipeline. There's no chargeback. There's no bank fraud department to call.

The Psychology No One Talks About

Most social engineering content on the internet will tell you to "stay vigilant" and "verify before you trust." That advice isn't wrong. It's just incomplete. Because it ignores the uncomfortable truth: vigilance is a finite cognitive resource.

Cognitive Load and Decision Fatigue

Research from the Ponemon Institute found that employees who process more than 50 emails per day are 4.3x more likely to click a phishing link than those who process fewer than 20. It's not that they're less intelligent. It's that their critical thinking capacity is depleted. Social engineers know this. That's why phishing campaigns are overwhelmingly launched between 9 and 11 AM or between 2 and 4 PM, the windows of maximum workplace cognitive load.

Authority Bias

Stanley Milgram's obedience experiments from the 1960s demonstrated that roughly 65% of participants would administer what they believed to be dangerous electric shocks simply because an authority figure instructed them to. The same principle applies when your "CEO" Slacks you at 7 PM asking for a fund transfer. You don't push back on the boss, especially when the request comes wrapped in urgency.

The Reciprocity Trap

In DeFi and crypto communities, this one is particularly lethal. An attacker spends weeks being helpful in a Discord server: answering questions, sharing resources, building reputation. Then they DM you with a "heads-up about a vulnerability" and a link. You click it because they've already deposited social capital. You feel you owe them trust. This is Cialdini's reciprocity principle weaponized. You're more likely to comply with a request from someone who has already done something for you, even if that "something" cost them nothing.

Dopamine and the Gamification of Deception

Here's something that doesn't show up in most security literature: social engineering runs on the same dopamine loops that make games addictive. If you've ever studied game design or gamification, you know that the most effective engagement mechanics are built on variable reward schedules, loss aversion, and time pressure. Slot machines use them. Mobile games use them. And so do social engineers.

Think about how an airdrop scam works. You get an unexpected token in your wallet (surprise reward, dopamine spike). The token has a name that implies value. You go to a website to "claim" it (goal-directed behavior, anticipation loop). The site asks you to approve a contract (the trap). The whole sequence mirrors a video game loot drop: random reward, time-limited window, clear call-to-action. Your brain processes it the same way it processes a rare item drop in a dungeon crawler. The excitement overrides the skepticism.
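The on-chain half of that loop is detectable before the dopamine takes over. Every ERC-20 transfer emits a Transfer event with a fixed topic hash, so a watcher can flag tokens arriving from contracts you never chose to touch. A minimal sketch, assuming web3.py; the RPC endpoint and wallet address are hypothetical placeholders:

```python
from web3 import Web3

# Minimal sketch of flagging unsolicited airdrops. The RPC endpoint and
# wallet address are hypothetical placeholders.
w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))
MY_ADDRESS = "0x0000000000000000000000000000000000000000"

# topic0 is keccak256("Transfer(address,address,uint256)"), fixed by the
# ERC-20 standard; topic2 is the indexed recipient, left-padded to 32 bytes.
TRANSFER_TOPIC = "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"
me_padded = "0x" + MY_ADDRESS.lower().removeprefix("0x").rjust(64, "0")

logs = w3.eth.get_logs({
    "fromBlock": w3.eth.block_number - 7200,  # roughly the last day on Ethereum
    "toBlock": "latest",
    "topics": [TRANSFER_TOPIC, None, me_padded],
})

for log in logs:
    token = log["address"]                       # the token contract itself
    sender = "0x" + log["topics"][1].hex()[-40:]  # indexed `from` address
    # A token you never chose to interact with is a claim-site lure until
    # proven otherwise. Don't visit any URL embedded in its name or metadata.
    print(f"Incoming token transfer: token={token} from={sender}")
```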

Loss aversion plays a huge role too. "Your account will be locked in 24 hours" triggers the same psychological mechanism that makes limited-time events in games so effective: the fear of missing out on something you already feel entitled to. Game designers call it FOMO by design. Social engineers just call it Tuesday.

Insider Threats: The Attacker You Already Trust

Not all social engineering comes from outside. Insider threats, meaning employees, contractors, or community members who abuse their legitimate access, are among the most difficult to detect and the most damaging when they strike. According to the Verizon Data Breach Investigations Report, insider threats account for approximately 20% of security incidents, and their median damage is significantly higher than that of external attacks because the attacker already knows the layout.

In crypto organizations, where small teams handle large treasuries and operational security often relies on personal trust rather than formal controls, insider threats are uniquely dangerous. A developer with commit access, a community moderator with admin privileges on a Discord, or a team member with partial multisig authority. Each of these represents a potential social engineering vector that bypasses every perimeter defense.

Real Examples of Social Engineering Scams in Crypto

These aren't hypotheticals. They're patterns we've observed and studied.

The Fake Audit Partner

An attacker impersonates a well-known smart contract auditing firm. They contact a protocol team claiming to offer a discounted follow-up audit. The "onboarding form" they send is a credential harvester. Because the email domain uses a near-identical lookalike (think audltfirm.com instead of auditfirm.com), and the team had recently worked with the real firm, it sails past the gut check.
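This particular lure is also mechanically detectable: audltfirm.com sits one character substitution away from auditfirm.com, and a simple edit-distance check against the domains you already do business with catches most such typosquats. A minimal sketch (the trusted list is a hypothetical example):

```python
# Minimal sketch of lookalike-domain detection via edit distance. Domains
# within 1-2 edits of a trusted domain, but not identical to it, are classic
# typosquats. The trusted list here is a hypothetical example.
TRUSTED_DOMAINS = {"auditfirm.com", "descry.example"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def is_lookalike(domain: str) -> bool:
    domain = domain.lower()
    if domain in TRUSTED_DOMAINS:
        return False
    return any(edit_distance(domain, t) <= 2 for t in TRUSTED_DOMAINS)

print(is_lookalike("audltfirm.com"))   # True: one substitution from auditfirm.com
print(is_lookalike("auditfirm.com"))   # False: exact trusted match
```

Pure edit distance won't catch Unicode homoglyph attacks (a Cyrillic "а" standing in for a Latin "a"); those need a confusables table on top. But it closes off the cheapest variant.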

The Governance Proposal Trap

Attackers submit a legitimate-looking governance proposal with an embedded link to a "discussion forum." The link redirects to a page that requests a wallet signature, ostensibly to "verify your voting weight," but actually approves a token allowance to the attacker's contract. Because it appears within the familiar context of governance participation, even experienced DeFi users have been caught.
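What the victim is left holding is a standing allowance, and standing allowances can be audited after the fact. A minimal sketch, assuming web3.py v6; the RPC endpoint and all three addresses are hypothetical placeholders:

```python
from web3 import Web3

# Minimal sketch of auditing an ERC-20 allowance after a suspicious
# signature. The RPC endpoint and all addresses are hypothetical placeholders.
w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))

# Just the one ABI entry we need: the standard ERC-20 allowance() view.
ERC20_ABI = [{
    "name": "allowance",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "owner", "type": "address"},
               {"name": "spender", "type": "address"}],
    "outputs": [{"name": "", "type": "uint256"}],
}]

token = w3.eth.contract(
    address=Web3.to_checksum_address("0x1111111111111111111111111111111111111111"),
    abi=ERC20_ABI,
)
owner = Web3.to_checksum_address("0x2222222222222222222222222222222222222222")
spender = Web3.to_checksum_address("0x3333333333333333333333333333333333333333")

granted = token.functions.allowance(owner, spender).call()
if granted > 0:
    # Revoking means the owner sends approve(spender, 0) from their wallet.
    print(f"Standing allowance of {granted} to {spender}: revoke it.")
else:
    print("No standing allowance for this spender.")
```

Until that allowance reads zero, the attacker's contract can drain the approved token at leisure.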

The Long-Con Community Member

An attacker joins a project's Discord three months before the attack. They contribute to discussions, help newcomers, and build genuine social capital. Once trusted, they DM core team members with a "security advisory" containing a trojanized PDF. This is patient, deliberate social engineering, and it works precisely because the attacker invested real time in building credibility.

Why "Just Be Careful" Doesn't Work

The standard advice (use 2FA, don't click suspicious links, verify identities) is table stakes. You should absolutely do all of it. But framing social engineering defense as a personal responsibility problem is like telling people to dodge bullets. The attacks are designed by professionals to defeat human judgment.

What actually moves the needle is systemic thinking:

  • Map your attack surface before the attacker does. If you don't know what information about you or your team is publicly accessible, you can't defend against its weaponization. Run an OSINT assessment on yourself.
  • Design processes that assume compromise. No single person should be able to approve a fund transfer, deploy a contract, or change access controls unilaterally. If your security model relies on one person making the right call under pressure, it will fail.
  • Train with realistic simulations, not slide decks. People don't learn to recognize social engineering by reading about it. They learn by experiencing it in a controlled environment where the stakes aren't real but the adrenaline is.
  • Monitor your social graph continuously. New breach data is published weekly. People change jobs, post new content, join new communities. Your exposure surface shifts constantly. A point-in-time assessment is a snapshot. You need a live feed; one way to automate the breach-data half is sketched below.
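On that last point, the breach-data half of a live feed is straightforward to automate. Here's a minimal sketch against the Have I Been Pwned v3 API, assuming its documented endpoint and headers; it requires a paid API key, and the key and monitored addresses are hypothetical placeholders:

```python
import requests

# Minimal sketch of a recurring breach-exposure check against the
# Have I Been Pwned v3 API (assumed endpoint; requires a paid API key).
# The key and the monitored addresses are hypothetical placeholders.
HIBP_URL = "https://haveibeenpwned.com/api/v3/breachedaccount/{account}"
HEADERS = {
    "hibp-api-key": "YOUR_API_KEY",
    "user-agent": "exposure-check",  # HIBP rejects requests without a UA
}
TEAM_EMAILS = ["alice@example.org", "bob@example.org"]

for email in TEAM_EMAILS:
    resp = requests.get(
        HIBP_URL.format(account=email),
        headers=HEADERS,
        params={"truncateResponse": "false"},
        timeout=10,
    )
    if resp.status_code == 404:   # 404 means: no known breaches
        print(f"{email}: no known breaches")
        continue
    resp.raise_for_status()
    for breach in resp.json():
        print(f"{email}: exposed in {breach['Name']} ({breach['BreachDate']})")
```

Run it on a schedule. A hit means rotating credentials before the breach gets folded into someone's pretext.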

The Uncomfortable Truth

We spend millions on smart contract audits, bug bounties, and cryptographic security. And then a $625 million theft happens because an engineer opened a PDF from a LinkedIn recruiter.

Social engineering fraud exploits the one vulnerability that can't be patched: human trust. That doesn't mean it can't be defended against. It means the defense has to be as sophisticated as the attack. Grounded in psychology, informed by real threat intelligence, and operationalized as continuous practice rather than a one-time checklist.

The organizations that survive this aren't the ones that never get targeted. They're the ones that made the attacker's playbook obsolete before it was ever run.


Descry Research

Threat intelligence and adversary research from the Descry team.

Descry simulates the attacker targeting your organization. Our AI agents map your social graph, rehearse kill-chains, and deliver the adversary playbook before a real threat actor does.