By Martin Greenfield, CEO of Quod Orbis
As the CEO of a cyber security company, I knew it was only a matter of time before I, like so many before me, ended up with a target on my back.
However, the story I have to tell is not isolated to my industry. I’m sharing it to draw attention to a particular method that is growing in relevance and use, in the hope that it will help other CEOs fend off similar attacks – regardless of where they work.
The rise of artificial intelligence (AI) has provided the perfect breeding ground for deepfake spear phishing: a sophisticated new cyber-attack that all CEOs need to be aware of, and whose detrimental effects on their business they need to understand. Hopefully my experience will drive awareness and encourage organisations to implement robust measures to stop such an attack before it is too late.
Before I jump in: the reasons why an organisation may be targeted are multi-faceted, and one such motivation is a recent acquisition. This is why I (and Quod Orbis) were targeted. During this period, companies find themselves dealing with new businesses and new people. Ultimately, any change can leave you open to attack, as teams’ guards are down and new or unknown communications from external sources do not feel so unusual.
What is deepfake spear phishing?
Deepfake spear phishing is a targeted cyber-attack where scammers use AI-generated deepfake content—such as audio, video, or images—to impersonate an individual that the victim trusts, like a colleague or senior executive. The goal is to deceive the target into revealing sensitive information, transferring money, or granting access to systems. It combines the personal targeting of spear phishing with the convincing realism of deepfake technology, making it a more sophisticated and dangerous form of attack.
Deepfake spear phishing in practice
The attempted attack I experienced followed a particular format that aimed to build trust, using a multi-channel approach of AI voice notes, WhatsApp messages and credible sources to verify information. It was incredibly cunning and took advantage of Quod Orbis’ recent acquisition, impersonating the Chairman of the acquiring company and asking me to “assist” them with another acquisition.
Here’s how the exchanges went.
Step one: Building the credibility
I received a series of WhatsApp messages from the “Chairman” asking for my help. They asked how I was, about my availability that day, and for my assistance in an important matter for their company. The attacker clearly tried to build credibility by asking whether someone I had never heard of before had already contacted me about this.
I stated that I had not received any previous communications and did not recognise the name, so the Chairman asked me to email her, with his personal email address on copy. The address provided for this new individual suggested she was from a prestigious law firm.
The messages from the Chairman continued to this effect:
“We are in the process of buying an overseas company which will enhance our market position that I have approved and for which I need your help. The case in question is of utmost importance for the group and is strictly confidential. This operation is controlled by the Financial Markets Authority who imposes us to work in conjunction with one of our subsidiaries and that is the reason why I have chosen you. In order for this company to become part of the group, a down payment of the total amount of the acquisition is required.”
The icing on the cake
This was further substantiated by a convincing voice note from the “Chairman” themselves, lasting around eight seconds, explaining that they would detail the help requested once I had signed an NDA. Let me emphasise: these voice notes were convincing.
Red flags here:
- Being asked to copy in the Chairman’s private email address
- Being asked to contact someone I had not dealt with or heard of before
- Attempt to build credibility by referencing the FMA and the company’s need to work with a subsidiary
- Request accompanied by voice notes for credibility but short in content
Step two: Escalate the credibility
I was asked by the “Chairman” to email the “Lawyer” and sign an NDA – again, attempting to reassure me of the validity of this request.
So I duly exchanged emails with the Lawyer who explained that once the NDA was signed, I would be informed of my role in this acquisition. Now, let’s be clear, I performed extensive checks on this law firm beforehand and the company appeared legitimate – it had a comprehensive website and the Lawyer herself was listed as an employee at this particular firm.
The Lawyer confirmed that she would send me “the NDA to sign in a couple of minutes so I can share the third party’s details to proceed with securing the offer. Once the NDA is signed, I will send you an email with an overview of what is needed.”
Two hours passed and I heard nothing. When I chased the Lawyer, she sent over the NDA via mobile, which I signed.
Step three: Go for the killer question
Once the NDA was signed, I immediately received an email from the Chairman, along with another voice note, stating:
“Martin, now the NDA is signed let me summarize the situation for you. I need you to proceed with our first deposit payment today to secure and lock in our offer.
Again this must remain strictly confidential. This is a minority stake, which means we are buying shares in a company that will improve our position in the market.
I think you understand the importance and responsiveness required to successfully complete this operation.
We agreed to transfer between 10% to 15% of the total acquisition value.
10%: 550,000
15%: 825,000
I will schedule a return of funds next week after the public announcement.
Could you please provide me with the available amount so that Josephine (the alleged Lawyer) can share the third party’s banking details to proceed.”
The accompanying voice note, from the “Chairman” himself, lasted three seconds, thanking me for my help and professionalism.
And there it was: the killer question asking me to transfer a considerable amount to an unknown source.
Red flags here:
- Transferring money to an unknown source – particularly for a company that, from a financial standpoint, had no need to ask me to do this
Step four: BUSTED
It then became very obvious that this exchange wasn’t what it seemed – and that everything, right down to the detail of the WhatsApp messages and, most importantly, the voice notes, was the result of AI.
Think about it
It is incomprehensible that a company that had just acquired the business would then ask me, as the CEO, to transfer that amount of money for them to purchase a stake in another company.
Big red flag.
But this is how CEOs are tricked. The reality is that we’re busy people, focused on driving our business strategy. Criminals will take advantage of any situation – in my case, a recent acquisition and therefore a new owner.
Be on the lookout
So what are the red flags of a deepfake spear phishing attack? Not all of these happened to me; you could experience some of them, or a combination of all:
- Unusual or urgent requests: The message may come from a trusted source (like a boss or colleague) but contain unusual, urgent, or out-of-character requests, such as asking for immediate financial transfers or confidential information.
- Inconsistent communication patterns: There may be subtle differences in tone, language, or behaviour compared to usual interactions. For example, a person who typically uses formal language may suddenly become casual, or vice versa.
- Visual or audio artifacts: In deepfake videos, there may be slight inconsistencies, like unnatural facial movements, mismatched lip-syncing, or strange blinking patterns. In audio, the voice might sound slightly robotic or lack natural intonations.
- Email spoofing indicators: You may be asked to communicate via a personal email address, or one other than the sender’s usual address.
- Unfamiliar file attachments or links: The message may include unfamiliar file formats or links that look legitimate but redirect to malicious websites or download malware.
- Unusual timing: The message may be sent at an odd time for the sender, like late at night or during holidays, raising suspicion about its authenticity.
- Inconsistencies in video/audio calls: In real-time impersonations, a video or voice call might have a slight delay, glitches, or artifacts in the video or audio feed that seem unnatural.
- Requests for untraceable payments: Attackers may ask for payments through unconventional or untraceable methods, such as cryptocurrency, which can be a red flag.
- Mismatch in content and context: The content may not fully align with previous conversations or ongoing projects, signalling it may not be from the genuine source.
- Requesting confidential information without normal protocols: If someone is asking for confidential information without following established procedures, it could be a sign of a deepfake attack.
Here’s what you can do
Below are some of the protocols you can implement to verify that you are communicating with the right person, and not a cyber criminal.
- Establish a verification process for sensitive requests
- Why: Deepfake attacks often mimic trusted individuals to request financial transfers or sensitive information.
- How: Mandate that all unusual or sensitive requests (e.g., wire transfers, sharing confidential info) go through a secondary verification, like a phone call to confirm identity.
- Regular employee training on phishing
- Why: Human error is a common vulnerability. Well-trained employees can be the first line of defence.
- How: Conduct regular training on identifying phishing attacks, including deepfakes, and implement phishing simulations to test employee awareness.
- Adopt a Zero-Trust security model
- Why: Zero-trust security assumes that every user, device, and system must be verified before gaining access to company data, reducing attack surface.
- How: Enforce strong identity verification for every access request, segment your network, and restrict access based on necessity and roles.
- Use AI-powered phishing detection tools
- Why: AI can detect anomalies in communication that might be missed by traditional systems.
- How: Deploy AI-driven tools that scan emails, voice, and video communications for unusual patterns, behavioural anomalies, or deepfake characteristics.
- Take a look at our AI-powered CCM platform here
- Enforce strict communication policies
- Why: Deepfake phishing often exploits casual communication channels.
- How: Limit the use of unofficial or personal communication platforms (e.g., social media, private messaging apps) for business purposes and ensure encrypted communication for sensitive conversations.
- Strengthen email security protocols
- Why: Spear phishing often begins with malicious emails. Implementing email security measures helps filter out fake or malicious content.
- How: Deploy protocols like DMARC, DKIM, and SPF to authenticate emails and reduce the chances of receiving phishing emails.
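As a rough illustration of what these protocols look like in practice, SPF and DMARC are published as DNS TXT records on your domain, while DKIM publishes a signing key under a selector. The sketch below uses example.com as a placeholder domain, and the specific policy values (the mail host, the selector name, the `p=reject` policy, the reporting address) are illustrative assumptions to be tuned to your own mail setup:

```text
; SPF: declares which servers may send mail for example.com
example.com.                TXT "v=spf1 include:_spf.example-mailhost.com -all"

; DKIM: public key used to verify message signatures, under the selector "s1"
s1._domainkey.example.com.  TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."

; DMARC: reject mail failing SPF/DKIM alignment, and send aggregate reports
_dmarc.example.com.         TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

You can check whether a domain already publishes these records with a standard DNS lookup tool, for example `dig TXT _dmarc.example.com`.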
- Set up incident response protocols
- Why: A quick and effective response to a suspected attack can limit damage.
- How: Develop a formal incident response plan, including steps for identifying, containing, and mitigating deepfake spear phishing attacks, and ensure that employees know how to report suspicious activities.
Now is the time for vigilance. We all need to acknowledge that AI is being leveraged by cyber criminals to target organisations, with CEOs usually the intended victim. And trust me, it’s convincing… really convincing.
To hear more about my story, you can watch my recent feature on BBC Click with BBC journalist Joe Tidy, which you can access here.