What is easily the most ambitious deepfake scam yet has taken place in Hong Kong, where attackers were able to convince an employee of an unnamed company to transfer HK$200 million (about $25 million) via a fake video conference populated by simulations of the CFO and other personnel. Fraud involving deepfake audio is increasingly common, but this is the first known scheme of this sort to incorporate fake representations of multiple people.
The employees simulated in the heist all had a public presence that the attackers made use of, and the imitations were apparently accurate enough to make the deepfake scam work. The targeted employee reportedly suspected fraud at first, but nevertheless ended up making a total of 15 bank transfers.
Deepfake scam establishes feasibility of fake video in business compromise schemes
The deepfake scam began with a phishing email from someone purporting to be the company’s chief financial officer (CFO). The employee was initially suspicious of fraud as the message sought to arrange a “secret transaction,” but the deepfake scam team was apparently able to close the deal by pressuring them into a video group chat.
That chat was entirely controlled by the attackers and used deepfake representations of the CFO and other company employees who have a public presence, at least some of whom were personally familiar to the targeted employee. The fraud was not uncovered until a week later, when the employee checked in with the company’s head office about the transactions.
Baron Chan, senior superintendent of the Hong Kong police’s cyber security division, said that it does not appear the deepfake scam made use of real-time video for most of the participants. Chan told the media that he believed videos of the participants had been downloaded from other sources, and the attackers used deepfake audio (trained on their prior public appearances) to simulate their voices in real time.
In making a public statement on the incident, Hong Kong police noted that deepfake scam activity is on the rise in their jurisdiction. The police reported at least 20 cases of fraud in the past year in which someone used deepfake video in an attempt to trick a facial recognition system.
No arrests have yet been made in relation to the deepfake scam, and the police say they are continuing to investigate. The company that was targeted has not been publicly identified, but public broadcaster RTHK said that it was a “multinational firm.”
Nick France, Chief Technology Officer at Sectigo, urges cybersecurity teams to take note of this incident: “With deepfake technology, we can no longer trust what we see and hear remotely. Perfectly-written phishing emails, audio messages with the correct tone, and now even fully fake video can be created easily and used to socially engineer into companies and steal money or valuable data and intellectual property. Employees may still assume today that live audio or video cannot be faked, and act on requests they are given seemingly by colleagues or leaders without question – as we have seen in this recent case. Security teams should see this as another threat to their organisations and update their practices and training accordingly. Following best practices for cyber security – adhere to the principles of ‘least privilege’, so employees only have access to the accounts and systems they need to perform their roles. Confirm payments and access to critical data with additional confirmations – even if you know the face on the screen.”
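France’s advice to confirm payments with additional confirmations, even when the face on the screen is familiar, can be reduced to a simple control: a transfer request is not actionable until it has been verified on a channel other than the one it arrived on. The following is a minimal, illustrative Python sketch of that idea; the class, threshold, and channel names are assumptions made for the example, not a description of any particular product or the company involved in this incident.

```python
from dataclasses import dataclass, field

# Hypothetical out-of-band confirmation gate for payment requests.
# All names and thresholds here are illustrative assumptions.

@dataclass
class TransferRequest:
    requester: str            # who asked for the payment (e.g. "CFO" on a video call)
    amount: float             # transfer amount in the company's base currency
    request_channel: str      # channel the request arrived on, e.g. "video_call"
    confirmations: set = field(default_factory=set)  # channels that confirmed it

HIGH_VALUE_THRESHOLD = 10_000  # hypothetical cut-off above which extra checks apply

def record_confirmation(req: TransferRequest, channel: str) -> None:
    """Record that the request was independently confirmed on another channel."""
    req.confirmations.add(channel)

def may_execute(req: TransferRequest) -> bool:
    """Allow a high-value transfer only if it was confirmed outside the original channel."""
    if req.amount < HIGH_VALUE_THRESHOLD:
        return True
    out_of_band = req.confirmations - {req.request_channel}
    return len(out_of_band) >= 1

req = TransferRequest(requester="CFO", amount=200_000, request_channel="video_call")
print(may_execute(req))                      # False: the video call alone is not enough
record_confirmation(req, "phone_callback")   # e.g. a call back to a known office number
print(may_execute(req))                      # True: confirmed on an independent channel
```

The point of the design is that the verification step deliberately ignores how convincing the original request looked or sounded; it only asks whether an independent channel agreed.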
Dr Ilia Kolochenko, CEO and Chief Architect at ImmuniWeb, thinks that Zoom and similar platforms that attackers will make use of in this way will have to play a part in detection and prevention of deepfake scams: “I don’t think that enacting additional laws to regulate deep fakes will be a solution, moreover, in most countries use of deep fakes for illicit purposes is already a criminally punishable offense under the existing laws. What we really need is to add AI-content detection mechanisms to all major social networks and platforms where users can share content, as well as integrating detection of AI-generated content to spam filters, so all non-human content will be visibly marked as such.”
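Kolochenko’s proposal amounts to placing an AI-content classifier in the same position a spam filter occupies today, with flagged content labeled rather than blocked. The sketch below illustrates only the shape of that idea; the detection function is a placeholder stub, not a real API, and the threshold is an assumption.

```python
# Illustrative sketch of the "visibly mark AI-generated content" idea described above.
# detect_ai_generated() stands in for whatever model a platform might deploy.

AI_SCORE_THRESHOLD = 0.8  # hypothetical confidence above which content gets a label

def detect_ai_generated(content: str) -> float:
    """Placeholder for a platform's AI-content classifier; returns a 0..1 score."""
    # A real platform would call its own model here; this stub only flags a
    # marker string so that the example runs end to end.
    return 0.9 if "[synthetic]" in content else 0.1

def deliver(content: str) -> dict:
    """Deliver the content with a visible label instead of silently dropping it."""
    score = detect_ai_generated(content)
    return {
        "content": content,
        "label": "AI-generated" if score >= AI_SCORE_THRESHOLD else None,
    }

print(deliver("Quarterly update from the CFO"))
print(deliver("[synthetic] Quarterly update from the CFO"))
```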
AI-enhanced fraud becoming more effective by the day
AI is already being applied to enhance all sorts of cyber crime and fraud. Scam messages and emails were the first area to see major enhancement, with attackers using tools like ChatGPT to polish text that would otherwise contain errors that tip off recipients. Some attackers are also training AI to write malicious code, which both lowers the technical barrier to entry and makes it easier to generate custom malware that can slip by automated defenses.
As the Hong Kong deepfake scam illustrates, the biggest advancement thus far has been in fake audio. Tools now exist that can create a convincing replica of someone’s voice from just a few sentences of speech, something easily gathered if the person is a public figure or company representative. Microsoft’s VALL-E tool can replicate a voice from as little as three seconds of audio, though not necessarily with as sophisticated a result. This has led to a minor explosion of phone call scams in which criminals pretend to be a family member asking for money for some sort of emergency.
Most of this has developed since the start of 2023. The tools for deepfake scams have been available since 2017, but for years many security analysts dismissed the threat as too readily detectable and too minor for most organizations to worry about. That position is changing due to cases such as this, which demonstrate the huge amounts of money that can be obtained and how far attackers can go with relatively simple techniques.
Facial recognition systems are proving particularly vulnerable to deepfake scams, to the point that some are questioning the future viability of biometric facial identification. A very recent analysis by Gartner projects that a wide range of biometric verification systems will be rendered unreliable by 2026 due to AI-powered deepfake attacks, and that 30% of organizations will abandon these methods due to inefficiency.
Kevin Vreeland, General Manager of North America at Veridas, offered some thoughts on how organizations can prepare for this new reality going forward: “With the evolution of artificial intelligence and increased identity-based security threats, companies must implement updated and improved methods of verification and authentication. These measures should focus on detecting the liveness and proof-of-life of their employees. Currently, there are companies developing biometric solutions focused on how to face the new forms of fraud, through a robust biometric engine and aligned to quality and security certifications, such as NIST and iBeta. It’s also important that companies educate their employees about the dangers of deepfakes similar to other types of scams. Deepfakes usually contain inconsistencies when there is movement. For example, an ear might have certain irregularities, or the iris doesn’t show the natural reflection of light.”
Patrick Harr, CEO at SlashNext, notes that it is inevitable that physical security and cybersecurity will have to merge to tackle this particular issue: “This was an elaborate crime. There are ways to apply cybersecurity protection to thwart these types of phishing on collaboration tools like Teams, Slack and Zoom. However, it needs to be combined with physical security protocols and training, because these types of crimes are morphing and technology is lagging behind. There should be multiple approval levels before money is transferred, even when the CFO is requesting the transfer. Companies can require all corporate video communications happen on approved collaboration channels that are secure and employees should be trained to question unusual behavior like requests to use new bank accounts or requests that seem out of the usual process.”
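Harr’s point about multiple approval levels is essentially an N-of-M control: no single person, however senior they appear on screen, can release funds alone. The following is a minimal sketch of that rule under assumed names; the roles, the two-approver requirement, and the function itself are illustrative assumptions rather than anyone’s actual workflow.

```python
# Illustrative multi-level approval rule for outgoing transfers.
# Roles, the required approver count, and all names are assumptions for this example.

REQUIRED_APPROVERS = 2
APPROVER_ROLES = {"finance_controller", "treasury", "cfo"}

def can_release(transfer_id: str, approvals: dict) -> bool:
    """Release funds only when enough distinct, authorised people have approved.

    approvals maps approver name -> role.
    """
    qualified = {name for name, role in approvals.items() if role in APPROVER_ROLES}
    return len(qualified) >= REQUIRED_APPROVERS

# A request apparently made by the CFO on a video call is not enough by itself:
print(can_release("TX-001", {"cfo_on_call": "cfo"}))                       # False
print(can_release("TX-001", {"cfo_on_call": "cfo", "a.lee": "treasury"}))  # True
```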