Deepfake fraud happens when scammers use AI to create fake videos or voices that impersonate trusted people. It’s now one of the biggest threats facing businesses, with many executives worried about the consequences of being impersonated by AI. Deepfakes are now so realistic that in 2024, a deepfaked senior executive on a video call tricked the finance team at engineering firm Arup out of £20 million.
Across the world, the rise of synthetic impersonation is reshaping corporate risk. Fraudsters are posing as government officials, recognisable vendors and even colleagues to dupe firms out of their hard-earned funds.
Businesses must now respond by identifying and closing their vulnerability and knowledge gaps. Read on to learn which security measures to implement, and how validating vendor data with Trustpair can defend your business against deepfakes.
Deepfake Scam: Key Takeaways
- Deepfake fraud uses AI for synthetic impersonation via voice and video.
- Criminals exploit trust to commit vendor fraud, CEO fraud, and business email compromise.
- Businesses can defend themselves with stronger authentication, real-time vendor data validation, and fraud detection tools.
How does deepfake fraud work in six steps?
Deepfake fraud doesn’t always work exactly the same way, but in general, here are the steps that criminals take to make video deepfakes:
- Scammers determine their target: figuring out who to impersonate, typically a trusted individual with seniority, like a Chief Financial Officer, or even a family member
- Data is collected from the target: as many pictures, voice recordings and even video calls as possible – those with public social media profiles are more likely to be targeted
- Media of the target and the scammer are fed together into a deepfake artificial intelligence program
- An encoding algorithm compares images of the two faces and compresses them into a shared set of common features
- A second, decoding algorithm then reconstructs the faces, training the platform to overlay the target’s features onto the scammer’s face (see the sketch after this list). This means a scammer can chat in real time using synthetic video while disguised as the target, making the scam more believable and adaptable depending on the victim’s response
- To make voice deepfakes for phone calls, or to overlay audio onto videos, cyberattackers either splice together snippets of the target’s recorded voice using cut-and-paste techniques, or synthesise new audio with generative AI trained on the target’s speech patterns – known as a voice clone
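To make the shared-features idea concrete, here is a toy, hypothetical sketch of the encoder–decoder structure described above, written in Python with PyTorch. Everything in it (the FaceSwapper class, the image sizes, the random stand-in data) is invented for illustration; real deepfake tools use far more sophisticated convolutional models, face alignment and post-processing.

```python
# Toy sketch of the shared-encoder / dual-decoder idea behind face swapping.
# Hypothetical and heavily simplified for illustration. Requires PyTorch.
import torch
import torch.nn as nn

LATENT = 64        # size of the shared "common features" space
IMG = 32 * 32 * 3  # toy flattened image size

class FaceSwapper(nn.Module):
    def __init__(self):
        super().__init__()
        # One encoder compresses BOTH faces into shared common features...
        self.encoder = nn.Sequential(
            nn.Linear(IMG, 256), nn.ReLU(), nn.Linear(256, LATENT))
        # ...and one decoder per identity learns to rebuild that person's face.
        self.decoders = nn.ModuleDict({
            "scammer": nn.Sequential(
                nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG)),
            "target": nn.Sequential(
                nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG)),
        })

    def reconstruct(self, x, who):
        return self.decoders[who](self.encoder(x))

model = FaceSwapper()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training: each decoder learns to rebuild its own person from shared features.
for step in range(100):
    scammer_imgs = torch.rand(8, IMG)  # stand-ins for collected photos/frames
    target_imgs = torch.rand(8, IMG)
    loss = (loss_fn(model.reconstruct(scammer_imgs, "scammer"), scammer_imgs)
            + loss_fn(model.reconstruct(target_imgs, "target"), target_imgs))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode a live frame of the scammer, then decode it with the
# TARGET's decoder, overlaying the target's features onto the scammer's
# pose and expression.
fake_frame = model.reconstruct(torch.rand(1, IMG), "target")
```

Because one encoder serves both identities, whatever the scammer does on camera is re-rendered with the target’s face – which is why the fake can react live to the victim.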
What fraud types are enhanced by AI deepfakes?
We’re seeing deepfake threats cropping up in specific patterns, almost exclusively where impersonation tactics already played a role in cyberattacks.
Scammers committing vishing – voice phishing, where the caller pretends to be someone else – have been able to enhance the realism of their attacks through deepfakes. In such scams they gain access to systems or sensitive information, sometimes going undiscovered for months.
Typically, deepfake vishing has been successful in attempts at:
- Vendor fraud: scammers impersonate known suppliers, asking to reroute invoice payments to a fraudulent account or to gain access to your confidential systems
- CEO fraud: attackers impersonate senior executives and pressure employees with access privileges, like finance workers, into making payments or granting systems access.
- Business email compromise: accompanying phishing emails with pressure tactics by impersonating third parties like the government or financial institutions. This can effectively dupe workers into typing their credentials into a fake website for harvesting.
Whenever fraudsters use social engineering tactics rather than brute-force attacks, deepfake videos and voices can make these scams more realistic, and therefore more effective.
Deepfake video call scams: a growing threat
The threat of deepfake technology is bigger than ever. According to Entrust’s Identity Fraud Report, this type of attack occurred once every five minutes in 2024.
In the age of misinformation, it’s difficult for businesses to verify the validity of what they’re seeing. But the window of realisation for this type of scam is much smaller than for others, because it plays out in real time, like on a video conference. Victims rarely get the chance to pause and sense-check whether the experience is real. In many cases, suspicions aren’t aroused at all. Without real-time verification systems, deepfake scams are more likely than others to succeed.
Similarly, suppliers and consumers may find it difficult to distinguish deepfakes from real videos, which can significantly damage a company’s reputation. Deepfakes can be constructed specifically to cause reputational damage, and the videos are often intended to go viral, creating noise that drowns out the legitimate truth. Even when organisations put out notices explaining the deepfakes, it can be hard to cut through this noise.
All in all, it means that the growing threat of deepfake scams can’t be ignored.
Real-world example: LastPass
Ironically, a company built on password security was able to thwart a targeted deepfake attack in 2024. The attackers, impersonating the company’s CEO, reached out to a LastPass employee using a combination of calls, texts and voicemails on WhatsApp.
Fortunately, the employee’s suspicions were raised because:
- the attacks were outside of usual business hours
- the perpetrators used urgency tactics in an attempt to manipulate
- the phone number didn’t match the saved contact of the individual
This meant that the employee raised the alarm, reporting the messages for investigation without further incident and protecting the company’s financial accounts, operational systems and other sensitive information.
Real-world example: Martin Lewis
The Martin Lewis Money Show is a British institution, known for Martin’s honest financial advice. But consumers weren’t happy after a deepfake scammer impersonated the man himself. In 2023, fraudsters used Martin’s image and voice to create a video duping consumers into ‘investing’ in a Ponzi-style bitcoin scheme.
Unfortunately, the scammers took advantage of Martin’s reputation as a trustworthy source of money advice for the masses. They also carefully timed the advert to run close to Christmas, taking advantage of the most desperate in society. And the bitcoin payments were effectively untraceable, meaning the funds were unrecoverable for investigators.
One unsuspecting member of the public lost over £75,000 to the fake endorsement, and the scam significantly affected Martin’s reputation. Fortunately, on this occasion, he had enough goodwill with the public to explain that the video was fraudulent – but not all companies do.
How to detect and prevent deepfake scams
Deepfakes are one of the most difficult types of fraud to detect, largely thanks to their realism. In many cases, victims’ suspicions aren’t raised enough for them to perform any form of verification before they comply with the fraudsters’ requests.
Here are three suggestions for detection and prevention:
- Authenticate your interactions
- Trust your gut
- Have a safety net
Authenticate your interactions
Verification is the only reliable way to confirm that the person on the other end of a phone or video call is who they say they are. Unfortunately, fraudsters are increasingly bypassing account security systems like two-factor authentication, so executives must remain alert and consider themselves the final keepers of the company vault.
When Ferrari was targeted by a deepfake attack in July 2024, the victim did something very clever. He asked the attacker, who was impersonating the Ferrari CEO, a question that only his real CEO would know the answer to: “What was the last book you recommended to me?” The scammer promptly hung up.
But while a random question worked for Ferrari, most companies can’t afford to be so reactive. Instead, build authentication into your processes and develop systems using the four eyes principle – requiring two people to sign off on sensitive actions – to tighten your organisation against fraudsters.
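As a rough illustration of the four eyes principle in a payment workflow, here is a minimal, hypothetical Python sketch. The PaymentRequest class and the employee IDs are invented for this example, not a real payment system.

```python
# Minimal sketch of the four eyes principle for payments. The PaymentRequest
# class and employee IDs are invented for illustration, not a real system.
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    beneficiary: str
    amount: float
    approvals: set = field(default_factory=set)

    def approve(self, employee_id: str) -> None:
        self.approvals.add(employee_id)

    def can_release(self) -> bool:
        # Two DISTINCT approvers are required before any money moves.
        return len(self.approvals) >= 2

request = PaymentRequest(beneficiary="Acme Supplies Ltd", amount=48_000.0)
request.approve("finance_clerk_01")
request.approve("finance_clerk_01")    # repeat approvals don't count
assert not request.can_release()       # one pressured employee can't pay alone
request.approve("finance_manager_07")  # a second, independent sign-off
assert request.can_release()
```

The point of the design is that a deepfaked ‘CEO’ pressuring a single employee still can’t move money alone – a second person always gets the chance to verify.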
Trust your gut
If your suspicions are raised during an interaction, listen to that feeling. Typically, victims can’t quite pinpoint what’s wrong, but they know that something is. Urgency, pressure and social engineering are all tactics used by perpetrators. So if you feel that something is not quite right, take the time to step away, think and verify.
Have a safety net
Fraudsters tend to exploit weak or outdated vendor data to carry out these scams – create a built-in safety net by closing these data gaps. Use Trustpair to validate vendor data in real time, so that fake or manipulated banking details never slip through the cracks. Plus, the software continuously monitors vendor records to confidently flag suspicious changes before payments are ever made.
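Conceptually, that safety net is a pre-payment check against an independently validated vendor record. The sketch below is a simplified, hypothetical Python illustration of the pattern – it is not Trustpair’s actual API, and the verify_vendor_record function and sample bank details are invented for the example.

```python
# Illustrative pre-payment safety net: compare the bank details on an incoming
# instruction against an independently validated vendor record. This is NOT
# Trustpair's actual API; verify_vendor_record and the sample data are invented.
KNOWN_VENDORS = {
    "Acme Supplies Ltd": {"iban": "GB29NWBK60161331926819"},
}

def verify_vendor_record(vendor: str, iban: str) -> bool:
    """Hypothetical check against a continuously monitored vendor record."""
    record = KNOWN_VENDORS.get(vendor)
    return record is not None and record["iban"] == iban

def release_payment(vendor: str, iban: str, amount: float) -> None:
    if not verify_vendor_record(vendor, iban):
        # Mismatched or unknown details: block the payment and escalate.
        raise ValueError(f"Blocked: bank details for {vendor!r} failed validation")
    print(f"Paying {amount:,.2f} to {vendor}")

release_payment("Acme Supplies Ltd", "GB29NWBK60161331926819", 48_000.0)
# A deepfaked 'supplier' asking to reroute funds to a new account is blocked:
# release_payment("Acme Supplies Ltd", "GB00FAKE00000000000000", 48_000.0)
```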
The future of fraud: are deepfakes unstoppable?
Deepfake technology may feel like an unstoppable force, but history shows that every new technology that fuels crime also sparks innovation in defence. While synthetic media will almost certainly grow more convincing and accessible, detection tools, regulatory frameworks and public awareness are evolving in parallel.
In the end, the fight against deepfakes is less about stopping technology and more about strengthening human judgment and digital infrastructure. The winners of this battle will be those who embrace vigilance, collaboration, and innovation, using platforms like Trustpair to build resilience.
In summary
Deepfake fraud works by using AI to disguise a scammer’s face or voice, impersonating someone known to the victim. Protect against it by authenticating your interactions (even with people you know) and trusting your suspicions. Use Trustpair to provide a digital safety net, validating and monitoring the data to raise any red flags.