Every CISO worth their salt is closely watching an emerging cybersecurity threat with the potential to significantly impact every organization: deepfake technology. In what’s considered one of the first successful uses of deepfake technology by cyber criminals, it was reported in early 2024 that a finance employee at a multinational company was scammed out of $25 million by a deepfake impersonation of a corporate executive during a video conference. It’s a harrowing story, and the perfect opportunity to discuss what deepfakes are, how they work, how they’re created, and whether current IT security countermeasures are up to the task of combating this growing threat.
What is a Deepfake?
A deepfake is synthetic media in which a person in an existing image or video is replaced with someone else’s likeness using advanced artificial intelligence (AI) and machine learning techniques. These convincing fake videos and audio recordings have become increasingly sophisticated, making it difficult to distinguish between what’s real and what’s fabricated.
How Does a Deepfake Work?
Deepfake technology leverages AI algorithms, specifically deep learning methods, to analyze and learn the characteristics of a person’s face and voice. It then superimposes these learned characteristics onto another individual in video or audio recordings, creating a realistic, but entirely fabricated, representation.
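To make the mechanics concrete, below is a highly simplified sketch (in PyTorch) of the classic face-swap architecture: a single shared encoder learns a common facial representation, while each identity gets its own decoder, and swapping decoders at inference time is what produces the face swap. All layer sizes and names here are illustrative assumptions; real deepfake tools add face alignment, adversarial training, and much larger networks.

```python
import torch
import torch.nn as nn

# Sketch of the classic face-swap idea: one SHARED encoder learns a common
# "face representation," while each identity gets its OWN decoder.
# Layer sizes below are illustrative placeholders, not tuned values.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16x16 -> 8x8
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()         # shared between both identities
decoder_source = Decoder()  # trained only on the source actor's face
decoder_target = Decoder()  # trained only on the target's (victim's) face

# At inference: encode a frame of the SOURCE actor, then decode it with the
# TARGET's decoder -- the output is the target's face wearing the source's
# pose and expression.
source_frame = torch.rand(1, 3, 64, 64)  # stand-in for a real video frame
fake_frame = decoder_target(encoder(source_frame))
```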
How to Create a Deepfake
Creating a deepfake involves feeding vast amounts of video and audio data into these AI models, allowing them to understand and mimic the target’s facial movements, voice, and expressions accurately. This process requires significant computational power and sophisticated software, but as technology advances, it’s becoming more accessible to a broader audience, including cyber criminals.
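Continuing the sketch above, the training loop below hints at why so much data and compute are needed: batch after batch of faces for each identity is reconstructed through the shared encoder and that identity’s own decoder, over many thousands of steps, until the encoder learns identity-independent structure like pose, lighting, and expression. The `load_face_batch` helper is hypothetical, standing in for a real pipeline that decodes video, then detects, crops, and aligns faces.

```python
import torch
import torch.nn.functional as F

# Training-loop sketch, continuing the model definitions above. Both face
# sets train the same encoder, forcing it to learn what faces have in
# common; each decoder specializes in reproducing one identity.

params = (list(encoder.parameters())
          + list(decoder_source.parameters())
          + list(decoder_target.parameters()))
optimizer = torch.optim.Adam(params, lr=5e-5)

def load_face_batch(identity):
    # Hypothetical placeholder: a real pipeline would decode video frames,
    # detect and align the face, and return a normalized tensor batch.
    return torch.rand(16, 3, 64, 64)

for step in range(100_000):
    for faces, decoder in ((load_face_batch("source"), decoder_source),
                           (load_face_batch("target"), decoder_target)):
        reconstruction = decoder(encoder(faces))
        loss = F.l1_loss(reconstruction, faces)  # pixel reconstruction loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```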
What are Some Deepfake Security Concerns?
Deepfakes pose significant challenges for businesses, especially as deepfake technology becomes more accessible to cyber criminals. A recent report from the U.S. Cybersecurity and Infrastructure Security Agency (CISA) outlined the threat deepfakes pose, noting that it will only grow as criminals become more sophisticated in how they apply the technology. Generally speaking, CISOs and IT security leaders should be on the lookout for the use of deepfakes in the following criminal activities:
- Impersonation in Business Communications: The ability to impersonate CEOs or other key executives in video calls or audio messages can lead to unauthorized transactions and decisions.
- Financial Fraud: Deepfakes can be used to manipulate stock prices or commit financial fraud by impersonating financial leaders or creating fake news.
- Data and Identity Theft: By impersonating trusted individuals, attackers can trick employees into divulging sensitive information or credentials.
How to Spot a Deepfake
Spotting a deepfake can be challenging, requiring vigilance and familiarity with the technology’s common flaws. Given the current state of deepfake technology, look for these telltale signs:
- Unnatural Eye Movement: Deepfakes often struggle to accurately replicate natural eye movement and blinking. A famous real-world example involved the classic Star Wars character Luke Skywalker appearing in a recent Disney+ series. Because original actor Mark Hamill had aged out of his youthful role, Disney used deepfake technology to superimpose his 1983 face onto a much younger stuntman. Viewers noted that the deepfake Luke Skywalker suffered from a “dead stare” that did not look like a real person’s eyes. A simple blink-rate check, like the one sketched after this list, exploits exactly this weakness.
- Unnatural Facial Expressions: The AI might not perfectly capture the subtleties of human expression, leading to anomalies in facial movements. For example, imagine the head attorney in your company’s Legal department calls you on Microsoft Teams, and throughout the conversation you notice her smiling oddly and opening her eyes a bit too wide at random intervals.
- Facial Morphing: Look for glitches or inconsistencies in the face, where the person’s face momentarily warps or blends incorrectly on screen. The effect is strange and unsettling; most movie fans will recognize it as “a glitch in The Matrix” rather than a problem with the person’s network connection.
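As a concrete illustration of the first sign above, here is a toy blink-rate heuristic built on the eye aspect ratio (EAR), a measure that collapses when the eye closes. Early deepfakes blinked rarely or not at all, so an abnormally low blink rate is a red flag. This sketch assumes per-frame eye landmarks have already been extracted by a landmark detector such as dlib or MediaPipe (detector setup is omitted); it is a teaching aid, not a production detector.

```python
import numpy as np

# Eye aspect ratio (EAR): the ratio of the eye's vertical openings to its
# horizontal width. It drops sharply during a blink, so counting EAR dips
# over time gives a rough blink rate.

def eye_aspect_ratio(eye):
    """eye: array of shape (6, 2) -- (x, y) landmark points around one eye."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def blink_rate(ear_per_frame, fps, threshold=0.2):
    """Count transitions into the eyes-closed state and return blinks/minute."""
    below = np.asarray(ear_per_frame) < threshold
    blinks = np.count_nonzero(below[1:] & ~below[:-1])  # rising edges of "closed"
    minutes = len(ear_per_frame) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

# Humans blink roughly 15-20 times per minute; a video-call participant
# showing near-zero blinks over several minutes warrants suspicion.
```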
While humans can be trained to spot the common signs of a deepfake today, the technology will improve, and spotting these signs will become increasingly difficult. In the future, CISOs and other IT leaders will likely need advanced detection tools and strategies that themselves use AI to find deepfakes. Deepfake security is in its infancy, but it will undoubtedly mature along with the threat.
The question is not if deepfake technology will be used against your organization, but when. As these threats continue to evolve, staying one step ahead becomes paramount. After all, no organization wants to be faked out, or worse, become the victim of a cyber attack.