Deepfakes and Cybersecurity: How Much of a Threat Are They?

The ability to digitally insert actors into films has been available since the mid-1990s, when it was first used to finish The Crow after the tragic on-set death of lead actor Brandon Lee. Techniques to do this in a very realistic and natural-looking way have been available for years now, so why is there such a panic brewing over deepfakes?

Before deepfakes, this sort of manipulation required expensive CGI software and highly specialized knowledge that was limited to a relative handful of digital effects studios. Deepfakes employ artificial intelligence to let anyone with a decent computer make realistic fake videos starring just about anyone in the world, working only from a set of images or videos of the target.

How deepfakes got started

Deepfakes are a relatively new phenomenon, first starting to emerge on the internet in late 2017. They may initially seem like something cooked up by a shadowy intelligence agency, but they’re actually the invention of one anonymous Reddit user who publicly posted their work.

It’s not something that they invented from scratch, however – their work is built on Google’s open-source TensorFlow machine learning library. Deepfake algorithms use deep learning to replace one face with another: the system learns each face’s features and positions from a set of training images, then substitutes frame-by-frame replacements that automatically conform to the dimensions and lighting conditions of the output video.
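
As a rough illustration of how this works, the sketch below uses TensorFlow’s Keras API (the same library the original work was built on) to wire one shared encoder to two person-specific decoders – the architecture behind early deepfake tools. The layer sizes and names here are illustrative assumptions, not the original poster’s code.

```python
# Minimal sketch of the shared-encoder / dual-decoder deepfake architecture.
# All layer sizes are assumptions chosen for brevity.
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG = 64  # assumed face-crop resolution

def build_encoder():
    inp = layers.Input(shape=(IMG, IMG, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    latent = layers.Dense(256, activation="relu")(x)  # shared pose/expression code
    return Model(inp, latent, name="shared_encoder")

def build_decoder(name):
    inp = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(inp)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(inp, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_person_a")  # learns to render person A's face
decoder_b = build_decoder("decoder_person_b")  # learns to render person B's face

# Each autoencoder trains only on face crops of its own person.
auto_a = Model(encoder.input, decoder_a(encoder.output))
auto_b = Model(encoder.input, decoder_b(encoder.output))
auto_a.compile(optimizer="adam", loss="mae")
auto_b.compile(optimizer="adam", loss="mae")

# The swap: because the encoder is shared, routing a frame of person A
# through decoder B renders B's face in A's pose and expression, e.g.:
#   swapped = decoder_b.predict(encoder.predict(frame_of_person_a))
```

Because both decoders read the same encoded representation of pose and expression, either one can render its face into whatever position the input frame supplies – which is what makes the frame-by-frame substitution automatic.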

Not surprisingly, the earliest use of this AI learning technique was to create fake porn. A work-safe example of deepfakes in action comes from director and comedian Jordan Peele, who used Barack Obama as the subject of a PSA released in April 2018. In the video, Obama appears to say a number of ridiculous things before delivering a warning about fake news.

That explains the face-swapping video, but what about the audio in the Obama video? Peele was involved with that technology even earlier: in 2016, he served as celebrity host for Adobe’s public reveal of its VoCo audio editing tool. VoCo is to audio what the deepfake algorithm is to video. Naturally, deepfaked video and audio can be paired to create a more convincing result, as Peele did with his Obama PSA.

Deepfakes as disinformation?

The implications of this are definitely disturbing. However, as the Obama video shows, deepfake videos are still far from perfect. They have moments where they look frighteningly real, but any given video usually contains enough small glitches and imperfect facial matches to make clear that it is a fake. And as with traditional forms of animation, syncing mouth movements with speech is the hardest part to get right.

Fortunately for global stability and for security professionals, most of the first year of this technology has been limited to the creation of deepfake porn by its enthusiasts. Some minor political uses of deepfakes have sprung up, but they are constrained by the current technical limitations. For example, a video of Parkland shooting survivor and gun control activist Emma González was deepfaked – but the target was not her face. Instead, the fakers altered the video so that the shooting range target she ripped up appeared to be the United States Constitution.

Assessing the deepfake security threat

At present, deepfakes are too easily spotted, even by an untrained eye, to be a significant security threat. The technology is always improving, however. At the moment, the greatest worry is its use by state-sponsored actors with the resources to create the most convincing possible videos. The real threat begins when anyone with a modern computer can create highly realistic manipulated videos at the push of a button.

Countermeasures are already being developed in anticipation of this future state of affairs. For example, DARPA believes that these videos can develop into a serious national security problem and is developing new video forensics tools to detect them.
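
To make “video forensics” concrete, here is a deliberately naive sketch of one idea such tools can build on: because deepfake faces are composited frame by frame, the manipulated region can flicker in ways natural footage does not. This toy example uses Python with OpenCV and NumPy, the filename is hypothetical, and it is in no way DARPA’s actual tooling – it simply scores frame-to-frame pixel change as a first-pass anomaly signal.

```python
# Toy temporal-consistency check: score how much each frame differs from
# the previous one. Unusual spikes *may* hint at per-frame compositing.
# Real forensic detectors are far more sophisticated than this.
import cv2
import numpy as np

def temporal_flicker_scores(video_path):
    cap = cv2.VideoCapture(video_path)
    scores, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            # Mean absolute per-pixel change between consecutive frames.
            scores.append(float(np.abs(gray - prev).mean()))
        prev = gray
    cap.release()
    return scores

scores = temporal_flicker_scores("suspect_clip.mp4")  # hypothetical file
if scores:
    print(f"mean change: {np.mean(scores):.2f}, peak: {np.max(scores):.2f}")
```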

Deepfakes disproportionately affect public figures, because a large collection of shots of their faces from various angles is needed to create the facial models that are swapped into the fake video. Should flawless deepfakes become common, it is likely there will be a rise in pre-release authentication of content created by these figures, using some sort of digital watermark. That would not address every type of deepfake, but it could be an effective countermeasure against fabricated statements attributed to political figures and celebrities.
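
Pending a standard watermarking scheme, the same goal can be approximated today with ordinary digital signatures. The sketch below is one assumed approach, not an established industry practice: it uses the third-party Python cryptography package and hypothetical filenames to show a publisher signing a hash of an official video so that anyone holding the published public key can verify a circulating copy.

```python
# Hypothetical pre-release authentication via Ed25519 signatures.
# Requires the "cryptography" package; filenames are placeholders.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_digest(path):
    """SHA-256 hash of a file, read in chunks to handle large videos."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()

# Publisher side: sign the official video before release.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # published alongside the content
signature = private_key.sign(file_digest("official_statement.mp4"))

# Verifier side: check a downloaded copy against the published signature.
try:
    public_key.verify(signature, file_digest("downloaded_copy.mp4"))
    print("Authentic: matches the signed release.")
except InvalidSignature:
    print("Warning: this copy does not match the signed original.")
```

A real deployment would also need to tie the public key to the figure’s verified identity (for example, via a certificate) so that viewers know which key to trust.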

All of this addresses widespread, high-profile threats. But what about the potential dangers to companies? There is particular concern about targeted phishing attacks that use deepfakes to gain access to business networks; management figures may have enough video and audio in circulation to serve as the basis for a deepfake.

There are assorted dangers to individuals as well. For example, a client might be tricked into revealing sensitive information by what appears to be an audio or video message from their lawyer. Law enforcement figures might also be deepfaked in this manner to exploit citizens’ trust. Blackmail attempts against targeted individuals are also quite possible, so long as enough photos or videos of them exist on social media to have their faces swapped into another video.

Exploiting the fake news vulnerability

Deepfakes don’t so much create a new problem as threaten to amplify an existing one: the inability of a significant portion of the public to evaluate the credibility of news and information sources, as demonstrated by the ongoing spread of fake news on social media. Even a very well-made deepfake would be much less of a threat if there were not such a widespread tendency to uncritically accept any information passed through a big-name social media channel, particularly when that information confirms the recipient’s existing biases.

As a Wired article on the subject points out, it is very difficult to address the root causes of uncritical thinking: it is hard to convince people who are not inclined to engage critically to do so, and it is hard to disentangle people’s information processing from their core ideological biases.

The onus for defense against deepfakes is thus likely to fall heavily on information gatekeepers. Companies of sufficient size may find they need a dedicated staff member, or even a small team, devoted to authenticating company media and vetting outside sources of information before they are disseminated or used to make business decisions.

 

