The Dark Side of AI: Deepfakes

AI can be used for good, as we try to do in the automotive industry: use cases like autonomous driving, or making production work more comfortable, wouldn't be possible without it. In this blog post, however, let's look at the dark side of AI. What are deepfakes, and why do they matter? Deepfakes are media created by AI: they appear genuine but have little connection to reality. They may be most dangerous in computer security, where they can be used to circumvent authentication or to mount high-quality phishing attacks.

Introduction

It's not surprising that humans have trouble detecting fakes; with current technology, even shallow fakes are good enough to fool most of us. Deepfakes are the logical extension of older AI research. The problem is that the ability to simulate an artist's style collided with the rise of fake news. To this collision, add three more factors: the democratization of AI, the falling cost of computing power, and the phenomenon of virality. It all adds up to a scary picture. In computer security in particular, deepfakes can do real harm: they can be used to defeat authentication or to execute high-quality phishing attacks.

Deepfakes for good

There are many contexts in which synthetic video can be used for good. Synthesia creates translated videos in which the footage is altered so that the speaker's movements match the translation. Synthetic video is useful for creating and animating anime characters; NVidia has used generative adversarial networks (GANs) to create visuals that can be used in video games. In one experiment, synthetic MRI images showing brain cancers were created to train neural networks to analyze MRIs without compromising patient data, because the synthetic MRIs don't correspond to any real person. Another medical application is creating synthetic voices for people who have lost the ability to speak. Project Revoice can create synthetic voices for ALS patients based on recordings of their own voice, rather than using mechanical-sounding speech synthesizers.
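
The paragraph above mentions generative adversarial networks, the technique behind much synthetic imagery. As a rough illustration of the idea (not NVidia's or anyone's actual pipeline), the sketch below shows the adversarial loop: a generator learns to turn random noise into images while a discriminator learns to tell those images from real ones. The model sizes, learning rates, and data shapes are illustrative assumptions.

```python
# Minimal GAN training loop in PyTorch (illustrative sketch only; model sizes,
# learning rates, and data shapes are assumptions, not any real pipeline).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28   # e.g. small grayscale images flattened to 784 values

generator = nn.Sequential(           # maps random noise to a synthetic image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(       # scores how "real" an image looks (0..1)
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images):
    """One adversarial update; real_images is a (batch, 784) tensor in [-1, 1]."""
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), ones) + \
             loss_fn(discriminator(fake_images), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to make the discriminator label its output "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

In practice, production systems use far larger convolutional models and task-specific conditioning, but the two-player training dynamic sketched here is the core idea.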

Policies and protections

The important question is what should be done about deepfakes. So far, social media companies have done little to detect fakes, whether deep or shallow, and alert us to them. What role should companies such as Facebook and YouTube play in detecting and policing fakes? They, not their users, have the computing resources and the technical expertise needed to detect fakery. The future of deepfake fraud will probably look like what we've already seen in cybersecurity, which is dominated by "script kiddies" who use tools developed by others but can't write their own exploits. Fakes coming from these "fake kiddies" will be easy to detect, simply because the same tools are used so often.

Media giants like Facebook and Google have the deep pockets needed to build state-of-the-art detection tools. The real problem is that media sites make more money from serving fake media than from blocking it; they emphasize convenience and speed over rigorous screening. Given the nature of virality, fakes have to be stopped before they're allowed to circulate, and given the number of videos posted to social media, responding quickly enough to stop a fake from propagating will be very difficult even with Facebook- or Google-scale resources. There's a further danger: online advertising optimizes for engagement and virality, and it's much easier to maximize engagement metrics with faked, extreme content.

I have the following four suggestions for regulation:

  1. Nobody should be allowed to advertise on social media during election campaigns unless they are strongly authenticated, for example with passports or certificates of company registration.
  2. Declarations of ultimate beneficial ownership: the source and application of funds need to be clear and easily visible.
  3. All ads should be recorded, as should the search terms used to target people.
  4. Social media companies should not pass any video on to their consumers until it has been tested, even if that delays posting.

Defending against disinformation

What can individuals do against a technology that's designed to confuse them? There are some basic steps you can take to become more aware of fakes and to avoid propagating them. Perhaps most important: never share or "like" content that you haven't actually read or watched. When something goes viral, avoid piling on; virality is almost always harmful. Think critically about all your media, especially media that supports your point of view; confirmation bias is one of the most subtle and powerful ways of deceiving yourself. While most discussions of deepfakes have focused on social media consumption, they're perhaps more dangerous in other forms of fraud, such as phishing. Defending yourself against that kind of fraud is not fundamentally difficult: use two-factor authentication (2FA).
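
Since the advice above comes down to "turn on 2FA", it may help to see how little machinery sits behind the one-time codes an authenticator app generates. The sketch below is a minimal, illustrative implementation of time-based one-time passwords (TOTP, RFC 6238) using only the Python standard library; the secret is a demo value, and in practice you would rely on a vetted library and your provider's enrollment flow rather than rolling your own.

```python
# Minimal TOTP (RFC 6238) sketch: the rotating 6-digit codes behind most
# authenticator-app 2FA. Standard library only; for real systems, use a
# vetted library and your provider's enrollment flow.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, period: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared secret and the clock."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // period                  # 30-second time step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()    # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Demo secret (not a real credential): both the server and the authenticator
# app hold the same secret, so a phished password alone is not enough.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and is derived from a secret that never travels with the password, a phished password by itself is no longer enough to take over the account.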

If you're very observant, you can detect fakery in the video itself: people in a fake video may not blink, or may blink infrequently; there may be slight synchronization errors between the audio and the video; lighting and shadows may be off in subtle but noticeable ways; and there may be other small but detectable glitches.
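
As an illustration of the blink heuristic, here is a rough sketch that tracks the "eye aspect ratio" across video frames and counts dips that look like blinks. It assumes you already have six landmark points per eye per frame from some face-landmark detector, which is not shown; the threshold and frame counts are illustrative assumptions, not calibrated values.

```python
# Rough sketch of the blink heuristic: track the "eye aspect ratio" (EAR) over
# frames and count brief dips below a threshold. Assumes six (x, y) eye
# landmarks per frame from some face-landmark detector (not shown); the
# threshold and frame counts are illustrative, not calibrated.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: 6x2 array of landmarks around one eye, ordered corner to corner."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances (upper vs lower lid)
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance (eye width)
    return (v1 + v2) / (2.0 * h)           # small when the eye is closed

def count_blinks(ear_per_frame, threshold=0.21, min_frames=2):
    """Count runs of consecutive frames where the EAR dips below the threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:                  # count a blink that ends with the clip
        blinks += 1
    return blinks
```

A speaker who blinks far less often than the typical 15 to 20 times per minute is only a weak signal on its own; these heuristics are most useful in combination, and they will weaken as generation tools improve.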

Conclusion

I don't expect many people to inspect every video or audio clip they see in this much detail, but I do expect fakes to get better; therefore, we need to be wary and careful. Above all, though, we need to remember that creating fakes is an application, not a tool. The ability to synthesize video, audio, text, and other information sources can be used for good or ill. The fear is that fakes will evolve faster than we can; the hope is that we'll grow beyond media that exists only to feed our fears and superstitions.