A blog post from LastPass Labs warns of an attempted voice phishing attack on an employee that made use of an audio deepfake of company CEO Karim Toubba. The attacker peppered the LastPass employee with a series of calls, text messages and at least one voicemail message, though the employee recognized it as a scam attempt and no damage was done.
Though the attempt was unsuccessful, the incident highlights the increasingly frequent and sophisticated use of AI tools in social engineering attacks. In this particular case, the attackers failed not necessarily for lack of technical prowess in creating a deepfake, but because they immediately raised red flags by attempting to contact the employee via WhatsApp and during unusual hours.
Audio deepfakes of public figures increasingly easy to make, can be extremely convincing
The LastPass employee was targeted with an assortment of voice calls and messages that employed deepfake audio, but did not respond to them due to the unusual circumstances. As is true at most companies, internal communications are not handled via WhatsApp. Additionally, the voice phishing messages came in at odd times well outside of business hours.
LastPass did not comment on the skill level or apparent authenticity of the deepfakes, but crafting convincing voice phishing messages is well within the ability of any would-be cybercriminal. Tools that simplify the process are readily available, some requiring only relatively small samples of a target’s voice that can easily be obtained from public appearances. This has already enabled some major heists, the biggest of which was a $25 million theft from a Hong Kong company early this year. In that incident the attackers appeared to use recycled video of a Zoom call, combined with newly created deepfake audio, to dupe an employee.
The problem has become acute enough that the U.S. Department of Health and Human Services (HHS) issued a public warning about it earlier this month, noting that it is particularly common for hackers to call company IT desks while pretending to be from the finance department. The attackers will sometimes use audio deepfakes of an employee to add authenticity to the approach, claiming that a company smartphone is no longer working and asking IT to approve a new device for MFA verification.
The LastPass incident highlights some of the common hallmarks of voice phishing attempts: not just odd channels of communication and odd hours, but an unusual sense of pressure, with a flurry of rapid messages and demands to grant permissions or issue payments immediately, without a chance to review the request. Employee security awareness training should incorporate this new wrinkle.
Toby Lewis, Global Head of Threat Analysis at Darktrace, expands on how the incorporation of AI has created a paradigm shift in scams of this type: “The prevalence of AI today represents new and additional risks. The ability to use artificial intelligence to generate more convincing emails is a growing concern, giving attackers the power to increase the effectiveness of their targeting campaigns. Arguably, however, the more considerable risk is the use of generative AI to produce deepfake audio, imagery, and video, which can be released at scale to manipulate and influence the electorate’s thinking. While the use of AI for deepfake generation is now very real, the risk of image and media manipulation is not new, with ‘photoshop’ existing as a verb since the 1990s. The challenge now is that AI can be used to lower the skill barrier to entry and speed up production to a higher quality. Defense against AI deepfakes is largely about maintaining a cynical view of material you see, especially online, or spread via social media.”
Voice phishing only expected to get worse
As AI tools improve, so will the ability of criminals to create deepfakes. Unfortunately, human capability to detect voice phishing attempts does not appear likely to keep pace with technological development, at least if recent studies are to be believed. Research conducted by an international team in late 2023 suggests that people can be expected to spot audio deepfakes only about 73% of the time, and that is with today’s technology.
News of a breach at LastPass is always a hair-raising event for those who have stuck with the company through its recent security incidents. The biggest of these was a late 2022 breach that involved the theft of password vaults impacting about 25 million users, followed by news in September 2023 that crypto holders were being targeted for password cracking and that at least $35 million in crypto thefts over the prior year could be tied back to the breach. LastPass implemented a 12-character minimum requirement for master passwords in 2018, but “grandfathered” in previously existing accounts and did not require them to update their current passwords. Older users may have also missed out on the addition of security “iterations” that improve the defenses of newer users. The problem is particularly acute because LastPass has been in business since 2008 and a significant portion of its user base has been with it for a decade or more.
Nick France, Chief Technology Officer at Sectigo, notes that audio deepfakes are not just a voice phishing threat, but are also making business email compromise (already a serious thorn in the side of many organizations) an even tougher attack category to deal with: “When AI-generated deepfake attacks are well targeted, they can be quite effective, not least because AI and deep-fake technology are greatly improved today. The nature of the attacks hit on social pressure to act, which, combined with the current post-pandemic work situation, takes advantage of people not being in close proximity up to five days a week like they were before. These attacks are an evolution of BEC. However, rather than trying to get someone to click a fake link, they add personal and direct pressure outside of email (voice, SMS, video) to get the employee to do something they shouldn’t or wouldn’t normally do. With deepfake technology, we can no longer trust what we see and hear remotely. Perfectly written phishing emails, audio messages with the correct tone, and now, as we see in this example with LastPass, even fully faked videos can be created easily and used to socially engineer into an organization. Even with all of the stories available today of people being scammed, employees may still believe that live audio or video cannot be faked and act on requests they are given seemingly by colleagues or leaders without question – as we have seen in this recent case. Security teams should see this as another threat to their organizations and update their practices and training accordingly. Following best practices for cybersecurity and adhering to the principles of least privilege will ensure employees only have access to the accounts and systems they need to perform their roles.”
Krishna Vishnubhotla, Vice President of Product Strategy at Zimperium, believes that the next big evolution of voice phishing and deepfakes will be pairing the attacks with malware that automatically accesses victims’ phone contacts: “The danger of audio and video deep fakes will escalate when malware on our phones accesses our contact lists to send misleading SMS messages, making it appear as though they are from a ‘known contact’ on your device. This will apply to messages and voicemails, too. When they get this right, all the visual cues on the phone will work against you to avoid any doubt. We already see malware accessing contacts from devices to spread further, so this is not that far-fetched if you think about it. This sophisticated form of impersonation would exploit the trust we place in recognized numbers, whether the communication occurs through SMS, WhatsApp, or other messaging platforms, significantly amplifying the threat’s credibility and potential impact. Mobile devices are particularly vulnerable to phishing attacks due to the limited built-in security measures, which struggle to combat the increasingly sophisticated nature of these persistent threats. It’s crucial for organizations to recognize the severity of mobile-based threats and integrate mobile security into their current endpoint solution.”