Cyberwire Daily
Tips and Advice
How to tell if a voice call is AI or not

By Team-CWD | February 23, 2026

Can you believe your ears? Increasingly, the answer is no. Here’s what’s at stake for your business, and how to beat the deepfakers.

There was a time when we could believe everything we saw and heard. Unfortunately, those days are probably long gone. Generative AI (GenAI) has democratized the creation of deepfake audio and video, to the point where generating a fabricated clip is as easy as pushing a button or two. This is bad news for everyone, including businesses.

Deepfakes are helping scammers bypass Know Your Customer and account authentication checks. They can even enable malicious state actors to masquerade as job candidates. But arguably the biggest threat they pose is financial/wire transfer fraud and the hijacking of executive accounts.

Organizations underestimate the deepfake threat at their peril. The British government claims that as many as eight million synthetic clips were shared last year, up from just 500,000 in 2023. The real figure may be far higher.

How attacks work

As an experiment by ESET Global Security Advisor Jake Moore has shown, it has never been easier to launch a deepfake audio attack on a business. All it requires is a short audio clip of the person to be impersonated; GenAI does the rest. Here’s how an attack might proceed:

  1. An attacker selects the person they’re going to impersonate. It might be a CEO, a CFO or even a supplier.
  2. They find an audio sample online – which is quite easy for high-profile executives who regularly speak in public. It might come from a social media account, an earnings call, a video/TV interview or any number of other sources. A few seconds of footage should be enough.
  3. They select the person to call. This might require some desk research – usually scouring LinkedIn for IT helpdesk staff, or finance team members.
  4. They might call the individual directly, or send an email in advance – for example, a CEO requesting an urgent money transfer, a password/multi-factor authentication (MFA) reset request, or a supplier demanding payment for an overdue invoice.
  5. They call the pre-selected target, using GenAI-generated deepfake audio to impersonate the CEO/supplier. Depending on the tool, they may stick to pre-scripted speech, or use a more sophisticated “speech-to-speech” method where the attacker’s voice is translated in near real time to that of their victim.

Hearing is believing

This type of attack is getting cheaper, easier and more convincing. Some tools are even able to insert background noise, pauses and stammers to make the impersonated voice sound more believable. They’re getting much better at mimicking the rhythms, inflection and verbal tics unique to every speaker. And when an attack is launched over the phone, AI-related glitches may be harder for the listener to pick up.

Attackers may also use social engineering tactics, such as creating pressure on the listener to respond urgently to their request, in order to achieve their goals. Another classic is to urge the listener to keep the request confidential. Add to that the fact that they’re often impersonating a senior executive, and it’s easy to see why some victims are duped. Who would want to get into the CEO’s bad books?

That said, there are ways for you to spot a faker. Depending on how sophisticated the GenAI they’re using is, it may be possible to discern:

  • An unnatural rhythm to the speech of the speaker
  • An unnaturally flat emotional tone to the voice of the speaker
  • Unnatural breathing or even breath-free sentences
  • An unusually robotic sound (when they use less advanced tooling)
  • Background noise which is either strangely absent or too uniform
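Some of these tells can even be quantified. As a rough illustration (not a production detector – real tools analyze far richer acoustic features), a script could flag audio whose background energy is suspiciously uniform from frame to frame, since natural recordings show noticeable variation:

```python
import statistics

def frame_energies(samples, frame_size=160):
    """Split an amplitude sequence into frames and return each frame's mean energy."""
    return [
        sum(s * s for s in samples[i:i + frame_size]) / frame_size
        for i in range(0, len(samples) - frame_size + 1, frame_size)
    ]

def looks_too_uniform(samples, frame_size=160, cv_threshold=0.05):
    """Heuristic: flag audio whose energy barely varies between frames.

    Natural speech recordings show frame-to-frame variation in their noise
    floor; a near-constant energy profile can be one sign of synthesized or
    heavily processed audio. The threshold here is illustrative only.
    """
    energies = frame_energies(samples, frame_size)
    if len(energies) < 2:
        return False
    mean = statistics.mean(energies)
    if mean == 0:
        return True  # pure, sustained silence is itself suspicious on a live call
    cv = statistics.stdev(energies) / mean  # coefficient of variation
    return cv < cv_threshold
```

A single heuristic like this would produce false positives on its own; commercial detection tools combine many such signals, which is why the checklist above is framed as "may be possible to discern" rather than a guarantee.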

Time to fight back

The reason threat actors are putting more of their time into scams like these is simple: the potential rewards on offer. Cautionary tales are steadily accumulating. One of the biggest blunders came way back in 2020, when an employee at a firm in the UAE was tricked into believing that their director had phoned to request a $35m fund transfer for an M&A deal.

Given that deepfake technology has improved significantly in the six years since, it’s worth revisiting some key steps you can take to minimize the chances of a worst-case scenario.

It should start with employee training and awareness. These programs should be updated to include deepfake audio simulations to ensure staff know what to expect, what’s at stake and how to act. They should be taught to spot the tell-tale signs of social engineering and typical deepfake scenarios such as the ones described above. Red teaming exercises should be run to test how well employees are absorbing this information.

Next comes process. Consider the following:

  • Out-of-band verification of any phone-based requests – i.e., using corporate messaging accounts to check with the sender independently
  • Two individuals to sign off any large financial transfers or changes to supplier bank details
  • Pre-agreed passphrases or questions which executives must answer to prove they are who they say they are over the phone
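Controls like these are most reliable when enforced in software rather than left to memory. A minimal sketch of the two-person sign-off rule (hypothetical class and threshold, not a real payments API):

```python
class TransferRequest:
    """Hypothetical model of a payment request subject to dual approval."""

    DUAL_APPROVAL_THRESHOLD = 10_000  # transfers at or above this need two approvers

    def __init__(self, amount, beneficiary):
        self.amount = amount
        self.beneficiary = beneficiary
        self.approvers = set()  # a set, so the same person cannot count twice

    def approve(self, employee_id):
        """Record an approval from one employee."""
        self.approvers.add(employee_id)

    def can_execute(self):
        """Small transfers need one approver; large ones need two distinct people."""
        required = 2 if self.amount >= self.DUAL_APPROVAL_THRESHOLD else 1
        return len(self.approvers) >= required
```

Because `approvers` is a set, a scammer who pressures one employee into approving twice gets nowhere: the request only clears when a genuinely different person signs off, forcing a second chance to catch the fraud.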

Technology can also help. Detection tools exist that analyze various audio parameters for signs of a synthetic voice. Harder to implement, but another course of action, is to limit the opportunities for threat actors to obtain voice samples in the first place, by reducing executives’ public appearances.
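The pre-agreed passphrase control mentioned above also benefits from basic hygiene: the phrase should never be stored in plain text where an intruder could read it. A sketch using only Python’s standard library (hypothetical helper names; parameters are illustrative):

```python
import hashlib
import hmac
import os

def enroll(passphrase: str) -> tuple[bytes, bytes]:
    """Store only a salted hash of the executive's pre-agreed passphrase."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return salt, digest

def verify(spoken_phrase: str, salt: bytes, digest: bytes) -> bool:
    """Check a phrase given over the phone against the stored hash.

    hmac.compare_digest performs a constant-time comparison, so the check
    itself leaks nothing about how close a wrong guess was.
    """
    candidate = hashlib.pbkdf2_hmac("sha256", spoken_phrase.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

This way, even staff who run the check never see the phrase itself, and a compromised helpdesk database does not hand attackers the challenge answers.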

People, process and technology

However, the bottom line is that deepfakes are simple and cost little to produce. Given the potentially huge sums up for grabs for the fraudsters, it’s unlikely that we’ll see the end of voice cloning scams any time soon. A three-pronged approach based around people, process and technology is therefore the best option your organization has to mitigate the risk.

Once a plan has been approved, remember to regularly review it so that it stays fit for purpose, even as AI innovation advances. The new cyber-fraud landscape demands constant attention.


