We all like to think we can recognize AI-generated content. However, one finance team member in Hong Kong learned otherwise when he was asked to transfer $25.6 million by “his CFO” in the UK. Despite initial doubts, he dismissed his suspicions upon recognizing the voices of the CFO and other colleagues on the call. Unfortunately, those voices had been cloned using advanced AI technology, and the worker was deceived.
Protecting against AI voice scams requires a combination of training and technology, and solutions like Trustpair’s platform can help. Enhancing your identity verification and payment controls will help mitigate business risks, even in the face of advanced fraudsters. Contact us to learn more!
What are AI voice scams and how do they work?
AI voice scams, also known as voice phishing or vishing, involve impersonating a trusted person’s voice. Once the victim receives the call and recognizes the voice, they are more likely to comply with any request from the scammer.
In personal settings, AI voice scams have surfaced in various distressing scenarios. For example, families have reported receiving calls from individuals claiming to be their ‘kidnapped’ loved ones, desperately begging for ransom payments. In reality, the supposed ‘victim’ is perfectly safe. However, under the emotional pressure to ensure their loved one’s safety, family members often fail to verify the situation and proceed to pay the demanded fee.
Today, however, our focus shifts to B2B AI voice scams and their impact on businesses.
These scams typically begin with perpetrators obtaining recordings of senior employees, such as the CEO or CFO. Feeding just a few words into a specialized program enables the cyber attackers to mimic the person’s voice with frightening accuracy, then request payments or information from unsuspecting colleagues, suppliers, and buyers.
Scammers typically apply pressure tactics like urgency and exploit the trust that already exists between the victim and the impersonated individual.
The dangers of synthetic voice technology in fraud
In June 2024, the CEO of one of the world’s largest advertising firms was impersonated by cyber attackers who used his cloned voice over WhatsApp and Zoom. Although it appeared that Mark Read, CEO of WPP, was setting up meetings with agency leaders, it was actually scammers looking to solicit money.
How did they do it?
- The scammers recorded Mark’s real voice and input it into an AI program
- Using just a few words and sounds, the scammers could then type out conversations or speak into a microphone and mimic Mark’s voice
- They set up a WhatsApp profile with the CEO’s photo as a profile picture, and even CC’d in a spoofed email address of another high-ranking employee
- Upon setting up a Zoom meeting, the scammers used the voice cloning software to impersonate Mark off-camera
Thankfully, the agency leaders realized that something suspicious was going on, and the attack was not successful.
However, the threat doesn’t stop with voice AI scams. Deepfake video technology takes deception to the next level by seamlessly stitching a person’s face and voice into fabricated videos, making them appear disturbingly real. This can have devastating consequences, such as tarnishing the reputation of executives, manipulating stakeholders, or spreading disinformation. What makes this even more dangerous is the difficulty in verifying the authenticity of such content, leaving businesses and individuals highly vulnerable to fraud and reputational damage.
Spotting the red flags: how to identify AI-driven impersonation
Consistent and updated training for your employees will help them spot scams like AI voice impersonation. Since technology evolves at a fast pace, it’s important to include any emerging information, popular trends, or new techniques in these sessions.
Employee training should cover the signals that staff should watch out for to indicate a scam, including:
- Robotic noises: these could signal the type-to-talk platforms that many AI voice scammers rely on
- Unnatural pauses: which may indicate typing between sentences
- Pressure tactics: giving the victim no time to think
- Unfamiliar words or phrases: even if the voice sounds right, the impersonator is unlikely to know exactly how your colleague thinks and talks, so the phrasing may seem off
- Out-of-the-blue calls: unexpected contact from your bank, the government, or an IT support specialist should be viewed as suspicious
Empower employees to raise their suspicions if anything feels off, enabling you as a team to spot the deepfake.
Even with training and checklists, employees have a lot of daily tasks to juggle and may not instantly recognize an AI voice impersonation. Instead, it’s up to companies to build AI voice scam detection into their systems and processes.
Here are some ideas for spotting the red flags early within your business systems and processes:
- Identity verification: use an authentication system, such as two-factor authentication, to automatically validate that the person talking on the phone is indeed who you think it is
- Source verification: verify the phone number (or IP address if it’s a video call) is located where the person should be, and associated with the individual
- 2-step approvals: ensure that before any information is shared or money is sent, requests are approved by a second member of the team who has not been subjected to pressure tactics and may have a clearer mind
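To illustrate how a 2-step approval control might work in practice, here is a minimal sketch in Python. The `PaymentRequest` class and its field names are hypothetical, purely for illustration; the key idea is that a payment cannot be released until it has been approved by someone other than the employee who received the (possibly fraudulent) request.

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    """A hypothetical payment request awaiting dual approval."""
    amount: float
    beneficiary: str
    approvers: set = field(default_factory=set)

    def approve(self, employee_id: str) -> None:
        """Record an approval from the given employee."""
        self.approvers.add(employee_id)

    def is_releasable(self, requester_id: str) -> bool:
        # Require at least one approver who is NOT the employee
        # who received the original call, i.e. someone who has not
        # been subjected to the scammer's pressure tactics.
        return any(a != requester_id for a in self.approvers)

# Usage: alice takes the suspicious call; her own approval is not enough.
req = PaymentRequest(25000.0, "Supplier Ltd")
req.approve("alice")
print(req.is_releasable("alice"))  # False: no independent approver yet
req.approve("bob")
print(req.is_releasable("alice"))  # True: a second team member signed off
```

A real implementation would live inside your ERP or payment workflow and add audit logging, but the principle is the same: the release decision is never left to the one person under pressure.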
Key measures to combat AI voice scams
Combating AI voice scams starts with proactive prevention. By implementing robust, behind-the-scenes measures, businesses can protect themselves without overburdening employees with the responsibility of identifying suspicious activity.
For instance, Trustpair’s fraud prevention solution leverages instant verification to validate critical information, such as contact details, bank account numbers, and company credentials, before processing payments to suppliers. This approach not only mitigates the risk of fraud but also ensures seamless and secure financial operations, providing peace of mind for your entire organization.
When a supplier has been impersonated by voice technology scammers attempting to divert money to their own account, these checks will reveal the discrepancy. Trustpair has built-in automation to lock your account against payments to this supplier until the discrepancy is resolved, while all other routine payments can continue as normal.
Other key measures to combat AI voice scams include:
- Ask for information about a previous event that the real person would know
- Hang up and call back the person using the number in your contacts (as numbers can be spoofed, appearing legitimate when they are not)
- In the case of banking, use smart indicators like Starling Bank’s call status feature, which reveals in real time whether their agents are indeed on the phone with you
What businesses need to know about the future of AI voice fraud
The rapid advancements in artificial intelligence over the past few years have transformed how businesses operate, but they have also introduced significant security challenges. AI voice cloning technology is becoming increasingly sophisticated, enabling fraudsters to generate highly convincing synthetic voices and exploit vulnerabilities in financial processes. While the progress in AI recognition software offers hope, businesses must remain vigilant and proactive in identifying threats.
A recent report found that 72% of businesses encountered AI-generated identities attempting to onboard as customers or suppliers last year. This highlights the urgency for organizations to implement comprehensive security measures across all departments. From sensitive data protection to enhanced verification systems, fighting AI fraud requires a multi-layered approach to safeguard funds and prevent fraudulent activities. To stay ahead, businesses should invest in tools that detect and neutralize AI-driven threats in real time.
Protect your business from AI voice scams
Looking ahead, instant verification will be the cornerstone of protecting your organization against AI-driven fraud. Solutions like Trustpair’s fraud prevention platform provide a robust defense by validating critical information at every step of the payment process.
By combining source verification with automated account locking, businesses can ensure that their hard-earned funds remain out of fraudsters’ reach. Investing in these measures not only safeguards your financial assets but also reinforces trust and security across your operations.