
Scams aren’t just fraud — they’re engineered to exploit human nature

Rabihah Butler  Manager for Enterprise content for Risk, Fraud & Government / Thomson Reuters Institute

· 7 minute read


Scams aren't just getting smarter; they're getting personal, exploiting the one vulnerability we can't patch: human psychology. With more than 1-in-5 adults already losing money to scams, the question isn't whether you're smart enough to spot them, but whether our systems are designed to protect us when emotions override logic.

Key insights:

      • Traditional fraud breaks systems; scams break people — Scams directed against individuals weaponize trust, urgency, and emotion and hit victims when they’re stressed or distracted.

      • More than 1-in-5 adults have lost money to scams, and that number is climbing — Criminals now wield deepfakes, voice cloning, and AI to make their pitches eerily convincing — and the curve is still bending in their favor.

      • By the time someone reaches the payment screen, manipulation has already won — Real protection means flagging suspicious outreach early, verifying identities in real-time, and building friction into high-risk transactions, all before emotions override logic.


One of the most fundamental distinctions in financial security is this: Every scam is a fraud, but not all fraud is a scam. With International Fraud Awareness Week upon us, it's worth pausing to note what makes scams different, and why that difference matters more than ever in 2025.

Traditional fraud typically exploits weak systems, such as stolen credentials, manipulated data, or technical vulnerabilities. Scams, on the other hand, exploit something far more powerful and harder to patch: human nature itself. Scams can weaponize trust, urgency, and emotion, and when those psychological levers are pulled at just the right moment, even savvy people can find themselves wiring money to someone they'll never see again.

The threat is only growing

The numbers tell a sobering story. More than 1-in-5 adults (22%) report losing money to scams, according to the Global Anti-Scam Alliance. And Ayelet Biger-Levin, founder of RangersAI and creator of ScamRanger, a technology designed to stop scams before they happen, doesn't mince words about the growing threat: "From a numbers perspective, scams are on the rise," she says. "They're going to continue to rise because criminals are becoming more sophisticated, leveraging the latest technology advancements including large language models (LLMs) and AI agents to scale operations."

Indeed, her definition cuts straight to what makes scams unique. “A scam is social engineering to convince an individual to either disclose personal information or transfer money directly to a criminal,” she explains, adding that it’s not a system breach; rather, it’s a conversation that goes wrong — often in ways the victim doesn’t realize until it’s too late.

And the trajectory isn't encouraging. Biger-Levin expects the number of adults being victimized to keep climbing over the next 12 to 18 months. "In the US, I expect it to rise," she notes. "Criminals are rapidly leveraging tools that make scams more believable, such as deepfakes and voice cloning, which are used for impersonation to increase both scale and success."

And while we haven’t reached the tipping point yet, the curve isn’t bending in our favor.

Scams adapt to every new channel we create

Here’s the uncomfortable truth: Scams aren’t a glitch in the system; rather, they’re a feature of human society that adapts with every new communication channel we build. Romance scams, investment lures, fake shopping sites, cryptocurrency schemes — these aren’t amateur operations anymore. They’re often run by organized networks, sometimes operating out of compounds in Southeast Asia, and they’re supercharged by technology that makes deception easier and more convincing than ever.

Deepfakes can put your CEO’s face on a video call. Voice cloning can mimic a family member in distress. Increasingly, agentic AI can personalize phishing at scale, crafting messages that feel eerily tailored to your life. Educating people about ways to keep from becoming victims helps, absolutely. However, when a persuasive story lands at exactly the wrong moment — when you’re stressed, distracted, or emotionally vulnerable — logic often takes a back seat.

And if those fighting fraud are waiting until a victim reaches the payment screen to intervene, they’re already too late.

Meeting manipulation where it starts

To make real progress, we need to meet manipulation at first contact — the moment persuasion begins. That means pairing human-centered design with protective technology across the entire scam lifecycle.

What does that look like in practice? It means flagging risky outreach before it reaches an inbox, verifying websites and identities in real time and in context, and slowing down high-risk payments with friction that feels helpful, not punitive. And critically, it means sharing signals and liability across the ecosystem — among banks, telcos, social platforms, and regulators — so they can all work from the same playbook.
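To make the "friction on high-risk payments" idea concrete, here is a minimal, purely illustrative sketch in Python. Every field name, threshold, and rule below is a hypothetical assumption for illustration, not an actual product or standard; real systems combine many more signals (device data, behavioral patterns, network intelligence) than a few hand-written rules.

```python
# Illustrative sketch only: a toy rule-based check that decides whether a
# payment should get extra "friction" (a pause plus a confirmation prompt)
# before it is allowed to clear. All thresholds and fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Payment:
    amount: float
    payee_is_new: bool                     # first transfer to this recipient?
    minutes_since_unsolicited_contact: int # time since an unexpected call/email
    channel: str                           # e.g. "wire", "crypto", "card"

def risk_signals(p: Payment) -> list[str]:
    """Collect human-readable reasons this payment resembles a scam pattern."""
    signals = []
    if p.payee_is_new and p.amount > 1_000:
        signals.append("large transfer to a first-time payee")
    if p.minutes_since_unsolicited_contact < 30:
        signals.append("payment rushed shortly after unsolicited contact")
    if p.channel in ("wire", "crypto"):
        signals.append("hard-to-reverse payment channel")
    return signals

def needs_friction(p: Payment) -> bool:
    """Two or more signals: slow the payment down and ask the user to confirm."""
    return len(risk_signals(p)) >= 2
```

The design choice worth noting is that `risk_signals` returns plain-language reasons rather than an opaque score, which is what lets the resulting friction feel helpful rather than punitive: the prompt can tell the user *why* the system is pausing.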

The constant in all of this is human psychology. The variable is how well our systems anticipate it.

Biger-Levin says she is optimistic about enforcement improving over time. "I do predict that long-term, these scam compounds are going to be taken down," she says, adding that she's also realistic about what comes next. "Criminals are not going to stop there and, by using advanced technology, will continue to attack individuals. The one common denominator, though, is human psychology, and that is something we can tackle and protect with the right consumer empowerment in place."

That’s the core challenge. Regulators or financial services compliance agents can shut down a scam operation, but they can’t patch human emotion. Technology solutions must be designed around how people actually think and behave under pressure — not how we wish they would. That means building systems that recognize when someone is being groomed, when urgency is being manufactured, and when trust is being weaponized.

The old advice still holds… because it reflects how we think

There's a reason the classic warnings never go out of style. The old saying, "If something seems too good to be true, it probably is," isn't outdated wisdom; it reflects how scams work: by promising outsized returns, instant solutions, or emotional rewards that bypass our rational filters.

Gut checks still matter, Biger-Levin reminds us, adding that this doesn't mean we can rely on individuals to shoulder the entire burden of vigilance, especially when criminals are using industrial-grade tools to manipulate them.

Scams will always evolve. So, the question isn’t whether they’ll disappear — they won’t. The question is whether we’re willing to build systems smart enough to protect the humans inside them.

That means reducing exposure at the source, disrupting grooming tactics before they gain momentum, and making the "this doesn't feel right" moment easier to spot and safer to act on. It also means treating scam prevention not as a user education problem, but as a systems design problem.

We can bend the curve, but only if we stop treating scams as individual failures and start treating them as the systemic, technology-enabled threats they’ve become. The tools already exist; however, the challenge is coordination, accountability, and a willingness to bake protection into every layer of the digital experience.

The denominator isn't changing: human psychology remains constant. What we can change is how well our systems anticipate it, and how much harder we make it for criminals to exploit it.

Staying ahead of the scammers

To stay ahead of these scammers, organizations and consumers should take practical steps to prevent and minimize risks. For example, they should stay up to date on the latest scam tactics by keeping an eye on consumer protection updates. These can help you spot red flags, such as urgent demands or unusual payment requests, that may signal a scam.

Also, when you receive unsolicited calls or emails, take a moment to verify their authenticity. Instead of responding right away, contact the organization directly using official contact information. Legitimate companies typically won’t ask for sensitive information like passwords or account details out of the blue.

Finally, boost your digital security by using strong, unique passwords and enabling two-factor authentication. Be cautious when clicking links and avoid those that seem suspicious. Scammers often rely on high-pressure tactics to prompt rushed decisions; so by taking a step back and evaluating the situation carefully, you often can avoid falling prey to their schemes.


