Deepfakes Could Break the Internet

By Jennifer Baker, EU Policy Correspondent

We’ve heard the threat before, but the emerging technology is genuinely worrying

We could already be swimming in a world of deepfakes and the public wouldn’t even know. After all, how would we? Those manipulated by microtargeting before the last US presidential election or the Brexit referendum found out only after the fact that their information had been weaponized against them. Like the proverbial boiled frog, we don’t notice the water getting hotter until it’s too late.

The terrifying prospect of a world of so-called deepfakes, where video is falsified so effectively that it is impossible to tell whether it is genuine, is already at hand. “Thanks” to advances in machine learning, CGI, and facial-mapping technology, deepfakes are not only possible but probable.

Never one to miss an opportunity, Google is helping to develop a system that can detect deepfake videos … by creating deepfakes! In a blog post on 24 September, Google Research said it was taking the issue seriously.

Google created a large dataset of 363 real videos of 28 consenting actors, plus 3,068 manipulated versions of them, for the FaceForensics project run by researchers from the Technical University of Munich, the University Federico II of Naples, and the University of Erlangen-Nuremberg.

Deepfakes use Generative Adversarial Networks (GANs) – two neural networks trained against each other: a “generator” that fabricates new data from an existing dataset and a “discriminator” that tries to spot the fakes, each improving until the forgeries become hard to distinguish from the real thing. The implications are obviously alarming. As well as manipulating politicians’ images, the technology heralds a new and disturbing form of revenge porn. But the threats to society extend beyond the technical. In a world where you can’t believe your eyes, how do you know what’s real? How can you prove that your video is NOT a deepfake?
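
To make that generator-versus-discriminator loop concrete, here is a minimal sketch in PyTorch. It is an illustration only, not Google’s or the FaceForensics project’s code: the “real” data is just numbers drawn from a toy distribution, and every name in it is ours, but the adversarial training loop is the same principle that deepfake systems scale up to faces.

```python
# Minimal GAN sketch on toy 1-D data (illustrative assumption, not any
# production deepfake system). Requires: pip install torch
import torch
import torch.nn as nn

# Generator: turns random noise into fake "data" (here, single numbers).
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how likely a sample is to be real (0..1).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples near 3.0
    fake = G(torch.randn(64, 8))             # the generator's forgeries

    # Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, 8)).detach())  # outputs should cluster near 3.0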

A recent report from cybersecurity company Deeptrace Labs found mixed results: it did not detect any instances of deepfakes actually being used in disinformation campaigns, but it found that the mere knowledge that they could be used already had a powerful effect.

“From the perspective of the news business, the ability to trust or indeed identify disinformation in third party content is crucial,” explained Nick Cohen, Reuters head of video products. Reuters therefore set up its own deepfake experiment, using flesh-and-blood humans!

The experiment identified certain red flags that help detect deepfakes: audio-to-video synchronization issues; unusual mouth shapes, particularly on sibilant sounds; an unnaturally static subject. But as the technology improves, humans will be increasingly unable to trust the evidence of their own eyes.
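
As a rough illustration of how crude these red flags are, here is a sketch of the “static subject” check in Python with OpenCV: it simply measures average frame-to-frame pixel change and flags suspiciously low motion. The file name and the threshold are assumptions for the example; this is a weak heuristic, not Reuters’ method or a real detector.

```python
# Crude "static subject" heuristic (illustrative assumption, not a
# production deepfake detector). Requires: pip install opencv-python numpy
import cv2
import numpy as np

def mean_frame_motion(path: str) -> float:
    """Average absolute per-pixel difference between consecutive frames."""
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    diffs = []
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        g1 = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diffs.append(np.mean(cv2.absdiff(g1, g2)))
        prev = frame
    cap.release()
    return float(np.mean(diffs)) if diffs else 0.0

motion = mean_frame_motion("clip.mp4")   # hypothetical file name
# Threshold of 1.0 is an arbitrary assumption; low motion is at best
# one weak signal among many, never proof of manipulation.
print("unusually static" if motion < 1.0 else "normal motion", motion)
```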

Using AI to detect deepfakes is not yet widely possible, so policymakers are starting to take note – primarily in response to fears that their own campaigns could be derailed. But despite some noise about disinformation, no binding rules have been put forward so far. The European Union’s Code of Practice on Disinformation does not even mention the issue.

In June, then-MEP Marietje Schaake called for a parliamentary inquiry into the impact of technology companies on democracy. “Self-regulation has proven insufficient,” she said. “Not a day goes by without new information about malicious actors (ab)using tech platforms, undermining democracy, without technology companies taking sufficient action. The companies’ business models, as well as the use of botnets and deepfakes, further impacts access to information. Oversight and accountability urgently need to improve.”

She suggested a committee should hear experts under oath to uncover the many unknown details of the workings of technology platforms and how they impact democracy and elections.

The assumption is that deepfakes will primarily be deployed to derail democratic institutions. And that’s not an unreasonable assumption, for a number of reasons. Firstly, as previously mentioned, the very existence of deepfakes has a destabilizing effect, something that certain foreign actors actively welcome. Secondly, for those already in power, videos of genuine wrongdoing can be dismissed as “deepfakes,” providing the cover of deniability. But lastly, and most importantly, creating really good deepfakes is expensive.

Despite online sites that offer “cheapfakes” for humorous at-home use, only those with significant resources can create videos that stay convincing for any length of time, and the high costs and technical barriers aren’t going away anytime soon. Politicians (and celebrities) are also desirable targets because so much footage and imagery of them is available – exactly the raw material needed for a believable deepfake.

And therein lies some of the armour we everyday mortals can deploy to avoid being “deepfaked.” As with so much protection in daily life, it means NOT giving too much of your personal data away – limit the number of videos and images of you that are publicly available.

Businesses should also be wary. According to reports, criminals have successfully stolen millions of dollars by mimicking CEOs’ voices with an AI program “that had been trained on hours of their speech – culled from earnings calls, YouTube videos, TED talks, etc.” Other risks include the release of a deepfake of a CEO saying something that could send the stock price tumbling.

In time, convenient detection technology may emerge, or lawmakers might find some way to deter the practice. In the meantime, don’t annoy anyone who might have the time, know-how, resources and thirst for revenge.