OpenAI CEO Sam Altman Warns of “AI Fraud Crisis” – Here’s How Companies Can Fight Back

In a chilling warning that’s catching the attention of finance leaders worldwide, OpenAI CEO Sam Altman recently said what many in cybersecurity have feared for months: a global fraud crisis is looming, and AI is fueling it.

Speaking at the Federal Reserve earlier this summer, Altman highlighted the dangers of using outdated authentication methods, such as voice recognition, in an era where AI can perfectly clone someone’s voice.

“AI has fully defeated most of the ways that people authenticate currently, other than passwords,” Altman said. “I am very nervous that we have a significant, impending fraud crisis.”

He’s not wrong. And at Trustpair, we’re helping companies build effective defenses against this very threat.

Artificial Intelligence Fraud Is Already Here: From Phishing to Deepfake CEO Fraud

The rise of generative AI has created powerful tools for productivity, but also for deception. And finance departments are increasingly in the crosshairs. According to Trustpair’s 2025 US Fraud Study:

  • 90% of finance professionals have been targeted by cyber fraud in the past year.
  • 47% of targeted companies lost an average of $10 million.
  • Gen-AI deepfake usage in fraud cases is up 118% year-over-year.

This isn’t theoretical. It’s already happening.

It’s getting harder to tell what – or who – is real, for businesses and individuals alike.

The Limits of AI in Fighting AI Fraud

Ironically, many companies are looking to AI to solve their fraud challenges. But that approach can backfire. While AI-powered tools can detect patterns, they often struggle with:

  • False positives that overload treasury teams.
  • Evasive fraud tactics that adapt faster than models can train.
  • Lack of context, especially in edge cases where human judgment is essential.

At Trustpair, we don’t use AI in our core evaluation engine – the engine that determines whether or not you should transfer money to a given bank account. Fraud prevention demands deterministic, auditable results. When you’re validating whether a vendor’s bank account truly belongs to them, “probably” isn’t good enough.

Get our full take on the AI Hype in Treasury in our latest white paper!

Trustpair’s Take: Altman’s Right, But There’s a Way Forward

We fully agree with Sam Altman: AI fraud is real, and companies are unprepared. But there is a clear path forward for those who want to stop fraudulent activity and avoid financial loss and reputational damage.

The most effective way to prevent financial losses from deepfake scams, phishing, and fake vendor attacks? Continuous vendor account validation.

That means:

  • Monitoring changes to vendor bank data in real-time.
  • Validating third parties before payments go out.
  • Automatically flagging anomalies and blocking unverified accounts.
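The controls above boil down to a simple deterministic gate applied before every payment. The sketch below is purely illustrative – the data structure and function names are assumptions for the sake of the example, not Trustpair’s actual API:

```python
# Illustrative sketch of a deterministic pre-payment check.
# VendorRecord and should_block_payment are hypothetical names,
# not part of any real product API.

from dataclasses import dataclass


@dataclass
class VendorRecord:
    name: str
    iban: str
    verified: bool               # has this account passed third-party validation?
    iban_changed_recently: bool  # bank details updated since last verification?


def should_block_payment(vendor: VendorRecord) -> bool:
    """Deterministic rule: block any payment to an unverified account,
    or to an account whose bank details changed since the last check."""
    if not vendor.verified:
        return True
    if vendor.iban_changed_recently:
        return True
    return False


# Example: a vendor whose IBAN was silently changed gets flagged,
# even though the vendor itself was previously verified.
vendor = VendorRecord("Acme Ltd", "FR7630006000011234567890189",
                      verified=True, iban_changed_recently=True)
print(should_block_payment(vendor))  # True
```

Note that the rule is a plain boolean check, not a probabilistic score: the same inputs always produce the same decision, which is what makes the control auditable.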

Even if a fraudster manages to impersonate someone and push an urgent transfer request through your internal process, or if employees click on malicious links and disclose sensitive information, Trustpair detects that the destination bank account is not legitimate and blocks the payment before it’s too late.

No guesswork. No “AI magic.” Just reliable data and secure controls embedded where it matters most: your ERP, TMS, or procurement platform.

How Treasury and Finance Leaders Can Safeguard Against Financial Fraud

As Altman noted, it’s not just voice calls anymore: it’s video, email, and even real-time interactions that are indistinguishable from reality. AI technology is now so advanced that knowing whether you’re talking to a human is no longer a given.

The Arup example is a striking one: the scammers managed to create an entire fake video call, using recordings – voice, image – of executives found on social media. Only one of the call attendees was a genuine employee; when he was asked directly during the call to send an urgent transfer for a “special investment project”, he didn’t think twice and transferred the funds to the criminals.

While policymakers begin to react, the private sector needs to act now.

To get ready:

  • Stop relying on static vendor data: it becomes obsolete faster than ever.
  • Audit your payment process for blind spots in validation and approvals.
  • Implement continuous controls that verify vendors and accounts before every payment.
  • Don’t over-rely on AI tools without real-time, deterministic verification in place.

The future of fraud may be powered by AI, but the defense starts with strong fundamentals: clean data, secure processes, and human oversight.

In Summary: The Smart Way to Fight Generative AI Fraud

As OpenAI ramps up its presence in Washington and the world prepares for tighter AI regulation, companies don’t have the luxury of waiting. The risks are already here. Fraudsters are faster, smarter, and more sophisticated thanks to AI. But that doesn’t mean they’re unstoppable. At Trustpair, we’re helping companies take a proactive approach: by continuously monitoring vendor data and blocking suspicious payments before the damage is done.

  • No AI black box.
  • No false confidence.
  • Just trusted third-party controls that protect your bottom line.

Request a demo to learn more about how to protect against AI fraud. 


Frequently asked questions

How common is AI fraud?

AI fraud is rapidly growing, with thousands of individuals and companies falling victim each year. As artificial intelligence tools become more sophisticated, bad actors use them to create highly believable schemes, making it harder for traditional systems – let alone human-only processes – to identify fraud patterns.

  • According to Trustpair’s 2025 US Fraud Study, 90% of finance professionals reported being targeted by cyber fraud at least once in 2024, including deepfake impersonations and AI-generated phishing emails.
  • Sift’s 2024 activity report indicated that 72% of consumers noticed more AI spam or scams in the past year.

Are scammers using AI to commit fraud?

Yes, AI is being used by scammers to commit fraud – and increasingly so. Cybercriminals are leveraging generative AI, language models, and deepfake tools to impersonate individuals, craft realistic phishing emails, and even fake voices or videos for identity theft and fraud schemes.

A 2025 report by Trustpair found that 90% of US companies were targeted by cyber fraud at least once in 2024, with a 118% year-over-year increase in AI-powered fraud. These AI applications enable bad actors to automate scams at scale, identify vulnerabilities, and mimic human behavior using AI-generated text and natural language processing.

As AI systems become more accessible, so does the potential misuse of this powerful technology, prompting leaders like Sam Altman, CEO of OpenAI, to warn of a looming “AI fraud crisis.” To counter this growing threat, companies must combine smart defenses like real-time account validation with strong internal controls.


Download our latest Ebook to uncover how AI is reshaping fraud—and how to fight back