In a chilling warning that’s catching the attention of finance leaders worldwide, OpenAI CEO Sam Altman recently said what many in cybersecurity have feared for months: a global fraud crisis is looming, and AI is fueling it.
Speaking at the Federal Reserve earlier this summer, Altman highlighted the dangers of using outdated authentication methods, such as voice recognition, in an era where AI can perfectly clone someone’s voice.
“AI has fully defeated most of the ways that people authenticate currently, other than passwords,” Altman said. “I am very nervous that we have a significant, impending fraud crisis.”
He’s not wrong. And at Trustpair, we’re helping companies build effective defenses against this very threat.
Artificial Intelligence Fraud Is Already Here: From Phishing to Deepfake CEO Fraud
The rise of generative AI has created powerful tools for productivity, but also for deception. And finance departments are increasingly in the crosshairs. According to Trustpair’s 2025 US Fraud Study:
- 90% of finance professionals have been targeted by cyber fraud in the past year.
- 47% of targeted companies lost $10 million on average.
- Gen-AI deepfake usage in fraud cases is up 118% year-over-year.
This isn’t theoretical. It’s already happening:
- A finance worker from Arup was tricked by a deepfake CFO on a video call, transferring $25 million.
- AI-generated emails and text messages now mimic executive language so well that they often pass undetected.
- Fraudsters are registering fake vendors with fabricated documents and fraudulent bank account numbers.
It’s getting harder to tell what – or who – is real, for businesses and individuals alike.
The Limits of AI in Fighting AI Fraud
Ironically, many companies are looking to AI to solve their fraud challenges. But that approach can backfire. While AI-powered tools can detect patterns, they often struggle with:
- False positives that overload treasury teams.
- Evasive fraud tactics that adapt faster than models can train.
- Lack of context, especially in edge cases where human judgment is essential.
At Trustpair, we don’t use AI in our core evaluation engine, the one that determines whether or not you should transfer money to a given bank account. Fraud prevention demands deterministic, auditable results. When you’re validating whether a vendor’s bank account truly belongs to them, “probably” isn’t good enough.
Get our full take on the AI Hype in Treasury in our latest white paper!
Trustpair’s Take: Altman’s Right, But There’s a Way Forward
We fully agree with Sam Altman: AI fraud is real, and companies are unprepared. But there is a clear path forward for those who want to stop fraudulent activity and avoid financial loss and damaged reputation.
The most effective way to prevent financial losses from deepfake scams, phishing, and fake vendor attacks? Continuous vendor account validation.
That means:
- Monitoring changes to vendor bank data in real time.
- Validating third parties before payments go out.
- Automatically flagging anomalies and blocking unverified accounts.
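To make the logic concrete, here is a minimal sketch of what a deterministic pre-payment check can look like. This is purely illustrative: the data structures and function names are hypothetical and do not represent Trustpair’s actual engine or API.

```python
# Illustrative sketch of a deterministic pre-payment control.
# All names here are hypothetical, not Trustpair's actual implementation.

from dataclasses import dataclass

@dataclass
class VendorRecord:
    vendor_id: str
    iban: str        # bank account on file, previously validated
    verified: bool   # result of third-party account-ownership validation

def should_release_payment(record: VendorRecord, payment_iban: str) -> tuple[bool, str]:
    """Deterministic rule: block unless the payment account matches the verified record."""
    if not record.verified:
        return False, "vendor account not verified"
    if payment_iban != record.iban:
        return False, "bank account changed since last verification"
    return True, "ok"

# Example: a fraudster swapped in a new IBAN on an invoice.
vendor = VendorRecord("V-001", "DE89370400440532013000", verified=True)
ok, reason = should_release_payment(vendor, "GB29NWBK60161331926819")
print(ok, reason)  # False, because the account no longer matches the verified record
```

The point of the sketch is the design choice: the decision is a yes/no rule over verified data, not a probabilistic score, so every blocked payment has an auditable reason attached.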
Even if a fraudster manages to impersonate someone and push an urgent transfer request through your internal process, or an employee clicks a malicious link and discloses sensitive information, Trustpair steps in to detect that the bank account is not legitimate and blocks the payment before it’s too late.
No guesswork. No “AI magic.” Just reliable data and secure controls embedded where it matters most: your ERP, TMS, or procurement platform.
How Treasury and Finance Leaders Can Safeguard Against Financial Fraud
As Altman noted, it’s not just voice calls anymore: video, email, and even real-time interactions can be indistinguishable from reality. AI is now so advanced that knowing whether you’re talking to a human is no longer a given.
The Arup example is a striking one: the scammers created an entire fake video call using recordings of executives’ voices and images found on social media. Only one of the call attendees was a genuine employee, and when he was asked directly during the call to send an urgent transfer for a “special investment project,” he didn’t think twice before transferring funds to the criminals.
While policymakers begin to react, the private sector needs to act now.
To get ready:
- Stop relying on static vendor data: it becomes obsolete faster than ever.
- Audit your payment process for blind spots in validation and approvals.
- Implement continuous controls that verify vendors and accounts before every payment.
- Don’t over-rely on AI tools without real-time, deterministic verification in place.
The future of fraud may be powered by AI, but the defense starts with strong fundamentals: clean data, secure processes, and human oversight.
In Summary: The Smart Way to Fight Generative AI Fraud
As OpenAI ramps up its presence in Washington and the world prepares for tighter AI regulation, companies don’t have the luxury of waiting. The risks are already here. Fraudsters are faster, smarter, and more sophisticated thanks to AI. But that doesn’t mean they’re unstoppable. At Trustpair, we’re helping companies take a proactive approach: by continuously monitoring vendor data and blocking suspicious payments before the damage is done.
- No AI black box.
- No false confidence.
- Just trusted third-party controls that protect your bottom line.
Request a demo to learn more about how to protect against AI fraud.