Hackers and cybercriminals no longer look like characters in science fiction movies like The Matrix. They don't hide behind hoodies, green text or zero-day exploits. Instead, they increasingly look like you or me, or like your trusted bank, your logistics partner, or a credit card or financial services representative.
Looking back on 2025, the most consequential cyber incidents of the year were not defined by technical breakthroughs. They didn’t hinge on attackers breaching enterprise perimeters or outsmarting advanced detection systems. Instead, they succeeded by doing something far more subtle: exploiting trust.
From a stolen shipment of $400,000 worth of lobsters destined for Costco, to Coinbase absorbing a $400 million hit after attackers leveraged stolen customer data for social engineering, to the watershed social engineering hack of British retail giant Marks & Spencer (M&S), the cybercrime pattern this year was unmissable. These incidents were not failures of technology; they were failures of assumptions about identity, legitimacy and how "safe" normal business processes really are.
The year's biggest corporate scams were triggered by emails, messages, calls and profiles that simply looked right enough to succeed. The result was a series of incidents that exposed blind spots enterprises can no longer ignore in 2026. After all, whenever businesses give fraudsters an inch, they typically end up taking a mile.
Read also: Third-Party Risk and AI Gave Cyberattacks the Upper Hand in 2025
The Shift From Technical Compromise to Trust Exploitation
For years, the security industry has framed phishing as a technical problem: malicious links, weaponized attachments, spoofed domains. Controls followed suit—email gateways, URL rewriting, attachment sandboxing. Those tools still matter.
But the most damaging scams of the past year didn’t rely on malware delivery or credential harvesting; they relied on persuasion. What made these scams so effective was not perfection, but plausibility.
The countless behaviorally engineered phishing emails, messages, calls and profiles deployed by fraudsters in 2025 weren't flawless, but they didn't need to be. They only needed to look right enough to pass human scrutiny: right sender, right tone, right timing, right request.
Crucially, many of the past year’s most damaging cyber events originated from legitimate systems using legitimate data.
In both the Coinbase and the lobster heist cases, attackers didn't invent trust from scratch. They leveraged data stolen from the company itself, and trucks matching the expected logistics carrier, to launch highly credible social engineering attacks against customers, both online and in real life.
In the case of the Marks & Spencer attack, which forced the retailer to suspend online orders, contactless payments and click-and-collect services for weeks, the attackers tricked service desk personnel into resetting credentials, taking advantage of soft spots in human processes and procedural controls.
The PYMNTS Intelligence report “Vendors and Vulnerabilities: The Cyberattack Squeeze on Mid-Market Firms” found that vendors and supply chains are in many cases the most vulnerable—and targeted—underbelly of mid-market defenses, with 38% of invoice fraud cases and 43% of phishing attacks stemming from compromised vendors.
And with AI accelerating the realism, scale and personalization of these social engineering interactions, attackers are increasingly able to execute and orchestrate fraud attacks at scale with a speed and precision even The Matrix couldn’t have predicted.
See more: Cargo Theft Goes Digital as Cybercrime Invades the Supply Chain
How AI Has Changed the Fraud Equation
AI didn’t invent social engineering, but it has helped to industrialize it. Attackers have become adept at using AI to research targets, mimic writing styles, localize language, adjust tone and maintain consistent personas across email, SMS, voice and even video. The result is a class of scams that can feel less like attacks and more like routine business.
The enterprise attack surface now includes psychology, process design and organizational incentives. It can also include personal-life exposure, such as social media profiles, that attackers mine to probe business vulnerabilities. PYMNTS Intelligence recently found that 21% of Gen Z consumers reported falling for scams initiated through social media platforms.
A PYMNTS Intelligence report, “From Spark to Strategy: How Product Leaders Are Using Gen AI to Gain a Competitive Edge,” finds that more than 3 in 4 product officers (77%) are using generative artificial intelligence for cybersecurity.
Ultimately, the future of cybersecurity will not be decided solely by better detection algorithms or stronger encryption. It will be decided by how well organizations learn to defend the invisible layer where technology ends and trust begins.