It Looked Exactly Like the CEO
In early 2024, an employee at a Hong Kong-based multinational received a video call from their chief financial officer. The CFO looked normal, sounded normal, and gave clear instructions: transfer HK$200 million (around US$25 million) to several accounts as part of a confidential transaction.
The employee did it. Then they mentioned it to someone else at the company.
The CFO had no idea what they were talking about. The entire call — the CFO’s face, voice, and presence — had been fabricated using AI deepfake technology. The money was gone.
This is not a theoretical future risk. It is happening now, at scale, to businesses of all sizes.
What Is a Deepfake?
A deepfake is a video, audio, or image that has been manipulated using artificial intelligence to make a real person appear to say or do something they never said or did.
The word comes from “deep learning” (the AI technique behind it) combined with “fake.” Early deepfakes required days of model training and hours of high-quality source footage. Today, free tools available to anyone online can create a convincing deepfake video in minutes using a handful of photos scraped from social media.
What AI can now fake:
- A person’s face on video in real time
- Their voice, mimicking tone, accent, and speech patterns
- A phone or video call where they appear to speak live
- Written messages that match their communication style
The quality varies, but it’s improving faster than most people realise.
How Fraudsters Actually Pull This Off
Step 1: Choose the target
Scammers typically target someone in a position of financial authority — a CEO, CFO, or legal counsel — whose instructions an employee would follow without question. Public figures are ideal because there’s plenty of public video and audio of them available to train the AI.
Step 2: Collect the material
YouTube interviews, earnings call recordings, LinkedIn videos, and conference talks provide hours of natural speech. Some AI voice-cloning tools need as little as 30 seconds of audio to produce a convincing replica.
Step 3: Execute the attack
The fraudster calls an employee — usually in finance, HR, or IT — pretending to be the executive. They create urgency (“this is time-sensitive and confidential”), use authority (“don’t discuss this with anyone else yet”), and give specific, plausible-sounding instructions.
Common requests include:
- Transferring funds to a “new supplier account”
- Sharing employee credentials or system access
- Purchasing gift cards (a classic scam vector)
- Providing personal information for “verification”
Why People Fall For It
These attacks are engineered to bypass rational thinking. Three psychological mechanisms are at work:
Authority. When your CEO tells you to do something, the instinct is to comply. Questioning authority feels risky — especially when the voice and face match.
Urgency. Creating time pressure prevents the target from pausing to verify. “We need this done before the markets open” shuts down cautious thinking.
Secrecy. “Don’t mention this to anyone yet” removes the natural check of consulting a colleague.
These same tactics existed before AI. What AI has done is remove the friction from impersonation. You used to need an actor who looked and sounded like your CEO. Now you need an internet connection and a few photos.
Signs You’re Being Deepfaked
Even sophisticated deepfakes have tells, especially in real-time video:
- Unnatural blinking — AI-generated faces historically blinked rarely or in odd patterns (though this is improving)
- Inconsistent lighting — the fake face may be lit differently from the background
- Mouth not quite matching words — lip sync errors, especially on unusual sounds
- Background artifacts — flickering edges around the face, odd textures
- Voice slightly off — AI voices can sound subtly robotic, especially on unusual words or emotional inflection
But the most reliable defence is not spotting the deepfake — it’s having a process that doesn’t rely on video calls alone.
How to Protect Yourself and Your Organisation
For individuals
Never act on a phone or video request for money or credentials without a second verification. Call the person back on a number you already have for them — not one given in the suspicious call.
Be sceptical of urgency and secrecy. Legitimate executives generally don’t demand that you keep a transaction secret from your own colleagues.
Check for inconsistencies. Does the request match the person’s normal way of communicating? If your CEO normally emails you, why are they calling?
For businesses
Implement a verbal codeword system for financial transactions. A simple, pre-agreed word that employees use to verify unusual requests can stop an attack instantly.
Require dual authorisation for transfers above a threshold. No single person should be able to approve a large transfer based on a single communication.
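The dual-authorisation rule can be captured in a few lines of logic. This is a minimal sketch, not a real payments API — the function name, threshold, and approver names are all illustrative:

```python
# Minimal sketch of a dual-authorisation rule for outgoing transfers.
# The threshold value and names here are hypothetical examples.

THRESHOLD = 100_000  # transfers above this amount need two distinct approvers

def can_execute(amount: float, approvers: set[str]) -> bool:
    """Return True only if the transfer has enough distinct approvals."""
    required = 2 if amount > THRESHOLD else 1
    return len(approvers) >= required

# A single instruction "from the CEO" is never enough above the threshold:
assert can_execute(50_000, {"alice"})                     # routine: one approver
assert not can_execute(25_000_000, {"alice"})             # large: blocked alone
assert can_execute(25_000_000, {"alice", "bob"})          # large: two people
```

The point of the design is that no single communication channel — however convincing — can move a large sum on its own; a second human must independently approve.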
Train employees regularly. Not once at onboarding — regularly, with real examples. People need to feel empowered to say “I need to verify this” without fearing they’ve offended their CEO.
Use end-to-end verified communication channels. For sensitive internal communications, tools with identity verification are harder to spoof than a standard phone or video call.
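Why does cryptographic identity verification beat a voice or face check? Real tools typically use public-key signatures, but the idea can be illustrated with a simpler shared-secret scheme using Python's standard `hmac` module. The secret value below is a hypothetical placeholder; the essential property is that it was established out of band, so a deepfake of someone's voice or face cannot produce a valid tag:

```python
import hashlib
import hmac

# Illustrative only: a shared secret agreed in person (never over the call
# itself) lets the recipient check that a request came from a key holder.
SECRET = b"established-in-person-not-over-the-call"  # hypothetical key

def sign(message: bytes) -> str:
    """Produce an authentication tag for a message."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check a tag; compare_digest resists timing attacks."""
    return hmac.compare_digest(sign(message), tag)

request = b"transfer HK$200,000 to supplier 4711"
tag = sign(request)

assert verify(request, tag)                                # genuine request
assert not verify(b"transfer to a different account", tag)  # forged: rejected
```

A fraudster who can clone a CEO's face still cannot clone a key they never had — which is exactly the single point of trust this article argues against relying on.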
A Word on AI Voice Scams Targeting Families
Businesses aren’t the only targets. A fast-growing scam targets ordinary families.
The script: someone calls a parent or grandparent, plays a brief clip of a grandchild’s voice (scraped from social media), and says: “Gran, it’s me — I’m in trouble, I need money, please don’t tell Mum and Dad.” The AI generates the voice. The panic does the rest.
These calls are emotionally devastating and financially ruinous for the victims — often elderly people who have little recourse after money has been transferred.
If you or a family member receives an urgent call like this: hang up and call the person directly on their real number. If they’re truly in trouble, you’ll reach them. If they’re not, you’ve just saved yourself from a scam.
The Bigger Picture
Deepfakes are one part of a broader shift in how AI is being weaponised for fraud. The same technology improving AI assistants, image generators, and voice tools is being adapted by criminals at remarkable speed.
The answer isn’t to distrust technology entirely. It’s to build human processes that don’t rely on a single point of trust — because that point of trust can now be faked convincingly enough to fool almost anyone.
If your organisation needs help building those processes, or wants to assess its current exposure to social engineering attacks, our team is here to help.