What are deepfakes?

Understanding deepfakes: Learn the creation process, recognize their risks, and discover how to protect against them, ensuring the security of your digital identity and information.

For decades, filmmakers have used makeup, prosthetics, and body doubles to alter or enhance performers’ appearances and make their scenes feel more authentic. But until modern visual effects (VFX) technology, the results were a mixed bag. From films like The Irishman and Captain Marvel to Avengers: Endgame, VFX allows performers to overcome physical impossibilities. Sometimes, Hollywood VFX even lets filmmakers merge one actor’s face with another performer’s body for roles the actor is not ready for or capable of performing.

But as you can imagine, modern Hollywood VFX is expensive, painstaking, and detailed work. Clearly, not everyone has a budget of a few hundred million USD to hire a VFX studio. This is where deepfake technology comes into the picture.

What is a deepfake? A definition

A “deepfake” is synthetic media that recreates a person’s appearance and/or voice using a type of artificial intelligence called deep learning (hence the name). The term was coined in 2017 by a Reddit user who shared deepfakes on the site.

Deepfakes are typically fake images, videos, or audio recordings. You may have seen popular videos of celebrities or politicians saying something they are unlikely to say in real life. These are common examples of deepfakes. The spread of deepfake videos also raises concerns about a potential ‘liar’s dividend’: as fabricated media proliferates and trust erodes, even genuine footage becomes easier to dismiss as fake, further distorting reality.

How do deepfakes work?

Deepfake technology leverages sophisticated artificial intelligence through generative adversarial networks (GANs), comprising two critical algorithms: the generator and the discriminator. The generator initiates the process by creating the initial fake content based on a desired outcome, using a training dataset. Meanwhile, the discriminator evaluates this content’s authenticity, pinpointing areas of improvement. This iterative process enables the generator to enhance its ability to produce increasingly realistic content, while the discriminator becomes better at identifying discrepancies for correction.
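
To make the generator-discriminator loop concrete, here is a minimal training-step sketch in PyTorch. The tiny fully connected networks, layer sizes, and flattened-image shape are placeholder assumptions chosen for brevity; real deepfake systems use far larger convolutional architectures and massive datasets.

```python
# A minimal, illustrative GAN training step (not a production deepfake pipeline).
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 64, 784  # placeholder sizes, e.g. 28x28 images flattened

# Generator: turns random noise into a candidate fake sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def training_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator to separate real samples from generated ones.
    noise = torch.randn(batch_size, LATENT_DIM)
    fake_batch = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator into scoring fakes as real.
    noise = torch.randn(batch_size, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Usage sketch: call training_step(batch) for every batch of real, flattened images.
```

Each call pushes the two networks in opposite directions, and that adversarial pressure is what makes generated content progressively harder to distinguish from the real thing.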

GANs play a pivotal role in deepfake creation by analyzing patterns in genuine images to replicate these patterns in fabricated content. For photographs, GAN systems examine multiple angles of the target’s images to capture comprehensive details. For videos, they analyze not just angles but also behavior, movement, and speech patterns. This multi-faceted analysis undergoes numerous iterations through the discriminator to refine the realism of the final product.

Deepfake videos emerge through two primary methods: manipulating an original video to alter what the target says or does (source video deepfakes) or swapping the target’s face onto another person’s body (face swaps). Source video deepfakes involve a neural network-based autoencoder that dissects the video to understand and then overlay the target’s facial expressions and body language onto the original footage. For audio, a similar process clones a person’s voice, allowing it to replicate any desired speech.
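
To illustrate the face-swap variant, the sketch below shows the classic shared-encoder, two-decoder autoencoder layout in PyTorch: one encoder learns a common representation of expression, pose, and lighting, while each decoder learns to render one specific identity. The image size, layer dimensions, and training details are placeholder assumptions, not a production pipeline.

```python
# Shared-encoder / per-identity-decoder autoencoder: the classic face-swap idea.
# Image shape (3x64x64) and layer sizes are placeholder assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a face image into a compact latent code (expression, pose, lighting)."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for ONE identity from the shared latent code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()    # trained on faces of BOTH people
decoder_a = Decoder()  # trained only on person A's faces
decoder_b = Decoder()  # trained only on person B's faces

# After training: encode a frame of person A, then decode it with person B's
# decoder to render B's identity with A's expression and pose.
frame_of_a = torch.rand(1, 3, 64, 64)     # placeholder input frame
swapped = decoder_b(encoder(frame_of_a))  # B's face, A's expression
```

The key design choice is the shared encoder: because it must describe both faces with the same latent code, swapping decoders transfers identity while preserving expression and movement.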

Key techniques in deepfake creation include:

  • Source video deepfakes: Utilizes an autoencoder with an encoder to analyze attributes and a decoder to apply these attributes onto the target video.
  • Audio deepfakes: Employs GANs to clone vocal patterns, enabling the creation of realistic voice replicas.
  • Lip syncing: Matches voice recordings to video, enhancing the illusion of the subject speaking the recorded words, further supported by recurrent neural networks for added realism.

The creation of deepfakes is facilitated by advanced technologies such as convolutional neural networks (CNNs) for facial recognition, autoencoders for attribute mapping, natural language processing (NLP) for generating synthetic audio, and the computing power provided by high-performance computing systems. Tools like Deep Art Effects, Deepswap, and FaceApp exemplify the accessibility of deepfake generation, pointing to a future where creating convincing deepfakes is within reach for many.

Are deepfakes legal?

At their core, deepfakes are not inherently illegal; the legality largely depends on their content and intent. This innovative technology can tread into unlawful territory if it breaches existing laws on child pornography, defamation, hate speech, or other criminal activities.

To date, there are few laws specifically targeting deepfakes, and those that exist vary significantly from country to country. However, the proliferation of deepfake technology raises concerns about the spread of false claims and their impact on public trust and discourse. There are already notable examples in the United States, where some states have taken action to curb the harmful effects of deepfake technology:

  • Texas: This state has enacted legislation that prohibits the creation and distribution of deepfakes with the intention of interfering with elections. This move aims to safeguard the integrity of the electoral process by preventing the spread of misleading or false information about candidates through hyper-realistic fake videos or audio recordings. Additionally, Texas has passed a law specifically targeting sexually explicit deepfakes distributed without the subject’s consent, aiming to protect individuals from distress or embarrassment caused by such content. Both offenses are treated as Class A misdemeanors, with potential penalties including up to a year in jail and fines of up to $4,000.
  • Virginia: Recognizing the personal and societal harm caused by deepfake pornography, Virginia has specifically banned the dissemination of such content. This law targets deepfakes that sexually exploit individuals without their consent, offering victims a legal avenue to seek justice. The dissemination of revenge porn, including deepfake pornography, is classified as a Class 1 misdemeanor, punishable by up to 12 months in jail and a fine of up to $2,500.
  • California: With a focus on both the political and personal ramifications of deepfakes, California has passed laws against the use of deepfakes that aim to deceive voters within 60 days of an election. Additionally, the state has made it illegal to create and distribute nonconsensual deepfake pornography, reflecting a growing concern over the use of this technology to harm individuals’ privacy and dignity.

The patchwork of regulations underscores a broader challenge: many people remain unaware of deepfake technology, its potential applications, and the risks it poses. This gap in awareness contributes to a legal environment where victims of deepfakes, outside of specific scenarios covered by state laws, often find themselves without clear legal recourse. The evolving nature of deepfake technology and its implications necessitates a more informed public, as well as comprehensive legal frameworks to protect individuals from its potential misuse.

How are deepfakes dangerous?

Deepfake technology, while innovative, introduces substantial risks. It’s not just about creating false images or videos; the implications extend into serious realms such as:

  • Personal safety, with individuals facing threats of blackmail.
  • The integrity of democratic processes, through the fabrication of misleading political content.
  • Financial markets, susceptible to manipulation from fabricated reports.
  • Identity theft, with personal data at risk of being misused.

The evolving landscape necessitates a robust response, combining vigilance, technological solutions, and legal frameworks to safeguard against these emerging threats.

How to detect deepfakes?

Detecting deepfake content requires attention to specific visual and textual indicators. Here are some key signs to watch for:

Visual indicators:

  • Unusual facial positioning or awkward expressions.
  • Unnatural movements of the face or body.
  • Inconsistent coloring across the video.
  • Odd appearances upon zooming in or magnification.
  • Mismatched or inconsistent audio.
  • A lack of natural blinking (see the sketch at the end of this section).

Textual indicators:

  • Misspellings and grammatically incorrect sentences.
  • Sentence flow that seems unnatural.
  • Email addresses that look suspicious.
  • Phrasing that doesn’t match the expected style of the sender.
  • Messages that are out of context or irrelevant.

Additionally, consider behavioral and contextual anomalies:

  • Behavioral inconsistencies: Pay attention to the subject’s behavior and mannerisms. Deepfakes may not accurately replicate subtle personality traits, habitual movements, or emotional responses, making the subject appear slightly off.
  • Contextual anomalies: Analyze the context in which the video or audio appears. Discrepancies in the background, unexpected interactions with the environment, or anomalies in the storyline can indicate manipulation.

AI advancements are improving the detection of these signs, but staying informed about these indicators is essential for identifying deepfakes.
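
As a concrete example of turning one of these visual cues into a programmatic check, the sketch below estimates blink activity using the eye aspect ratio (EAR). It assumes per-frame eye-landmark coordinates are supplied by a separate face-landmark detector (not shown), and an unusually low blink count is only a weak heuristic, never proof of manipulation on its own.

```python
# Blink-rate heuristic via the eye aspect ratio (EAR).
# Assumes 6 eye landmarks per frame from an external face-landmark detector (not shown).
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2) ordered p1..p6 around the eye (corners first)."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold: float = 0.2) -> int:
    """Counts closed-then-open transitions; very few blinks is a weak deepfake cue."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_threshold:
            closed = True
        elif closed:  # eye reopened after being closed
            blinks += 1
            closed = False
    return blinks

# Usage sketch: feed one EAR value per video frame.
# ears = [eye_aspect_ratio(landmarks_for_frame(f)) for f in frames]  # hypothetical helper
# print(count_blinks(ears))
```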

How to defend against deepfakes?

A key strategy in defending against deepfakes involves the use of advanced technology to identify and block these falsified media.

Government agencies like the US Department of Defense’s Defense Advanced Research Projects Agency (DARPA) are at the forefront, developing cutting-edge solutions to distinguish real from manipulated content. Similarly, social media giants and tech companies are employing innovative methods to ensure the authenticity of the media shared on their platforms.

For example, some platforms utilize blockchain technology to verify the origins of videos and images, establishing trusted sources and effectively preventing the spread of fake content.
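
As a simplified illustration of that idea (not the actual system any platform uses), the sketch below checks a media file’s cryptographic fingerprint against a registry of hashes published by a trusted source; a blockchain or other ledger would make such a registry tamper-evident. The file name and registry entry are hypothetical.

```python
# Illustrative provenance check: compare a media file's cryptographic hash
# against a registry of fingerprints published by a trusted source.
import hashlib

def fingerprint(path: str) -> str:
    """Returns the SHA-256 hash of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical registry of known-authentic fingerprints (e.g., published by the
# original creator or a verification service, or anchored on a ledger).
TRUSTED_FINGERPRINTS = {
    "press_briefing.mp4": "<sha-256 hex digest published by the source>",
}

def is_verified(path: str, name: str) -> bool:
    """True if the file's hash matches the registered fingerprint for that name."""
    return TRUSTED_FINGERPRINTS.get(name) == fingerprint(path)
```

A limitation worth noting: any re-encoding or resizing changes the hash, so an exact-match fingerprint only verifies unaltered copies of the original file.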

Implementing social media policies

Recognizing the potential harm caused by malicious deepfakes, social media platforms such as Facebook and Twitter have taken a firm stance by banning them. These policies are part of a broader effort to safeguard users from the negative impacts of deceptive media, underscoring the role of platform governance in maintaining digital trustworthiness.

Adopting deepfake detection software

The battle against deepfakes is also supported by private sector innovation. A number of companies offer sophisticated deepfake detection software, providing essential tools for identifying manipulated media:

  • Adobe’s Content Authenticity Initiative: Adobe has introduced a system allowing creators to attach a digital signature to their videos and photos. This signature includes detailed information about the media’s creation, offering a transparent method for verifying authenticity.
  • Microsoft’s detection tool: Microsoft has developed an AI-powered tool that analyzes videos and photos to assess their authenticity. It provides a confidence score indicating the likelihood of manipulation, helping users discern the reliability of the media they encounter.
  • Operation Minerva: This initiative focuses on cataloging known deepfakes and their digital fingerprints. By comparing new videos against this catalog, it’s possible to identify modifications of previously discovered fakes, enhancing the detection process.
  • Sensity’s detection platform: Sensity offers a platform that employs deep learning to detect deepfake media, analogous to how anti-malware tools identify viruses and malware. It alerts users via email if they encounter deepfake content, adding an extra layer of protection.

How can you protect yourself? Practical steps

Beyond these technological solutions, individuals can take practical steps to defend against deepfakes:

  • Stay informed: Awareness of the existence and nature of deepfakes is the first step in defense. By understanding the technology and its potential misuse, individuals can approach digital content more critically.
  • Verify sources: Always verify the source of the information. Look for corroborating evidence from reputable sources before accepting any media as true.
  • Use trusted detection tools: Employ deepfake detection tools where available. Many companies and platforms provide tools or plugins designed to identify manipulated content.
  • Report suspicious content: If you encounter what appears to be a deepfake, report it to the platform hosting the content. User reports play a crucial role in helping platforms identify and take action against deceptive media.

Defending against deepfakes requires a multi-faceted approach, combining technological innovation, policy enforcement, and informed vigilance. By staying informed and leveraging available tools and strategies, individuals and organizations can better protect themselves from the pernicious effects of deepfake technology.

What are deepfakes used for?

Deepfakes, while often discussed in the context of their potential for harm, have a range of applications across various fields. Understanding these uses helps to appreciate the complexity and dual nature of deepfake technology. Here are some key applications:

  • Entertainment and media: Deepfake technology is increasingly used in movies and video games to enhance visual effects, such as aging or de-aging actors, or bringing deceased performers back to life for cameo appearances. This application can also extend to creating realistic virtual avatars for online interactions.
  • Education and training: In educational contexts, deepfakes can create immersive learning experiences, such as historical reenactments or simulations. For instance, they can bring historical figures to life, offering students a dynamic way to engage with history.
  • Art and creativity: Artists and creatives are exploring deepfakes as a new medium for expression. This includes generating new forms of digital art, satire, or exploring the boundaries between reality and artificiality.
  • Advertising and marketing: Brands can use deepfake technology to create more engaging and personalized marketing content. For example, deepfakes allow the use of brand ambassadors in various campaigns without their physical presence, potentially in different languages to cater to a global audience.
  • Political and social campaigns: Though controversial, deepfakes have been used to raise awareness about social issues or the potential dangers of misinformation. Carefully crafted deepfakes can highlight the importance of critical thinking in the digital age.
  • Synthetic media creation: Deepfakes are part of a broader category of synthetic media, used for generating realistic audio, video, or images for content creation. This can streamline the production process in news, documentaries, and other media forms.

While deepfakes hold promising potential across these applications, it’s crucial to navigate their use responsibly, ensuring ethical standards are maintained to prevent misuse and protect individuals’ rights and privacy.

Examples of deepfakes

Well-known examples of deepfakes include a video of a fake Barack Obama making fun of Donald Trump, a fake Mark Zuckerberg bragging about controlling the data of billions of people, and a roundtable of deepfaked celebrities like Tom Cruise, George Lucas, Robert Downey Jr., Jeff Goldblum, and Ewan McGregor.

FAQs

Can you sue someone for creating a deepfake of you?

Individuals can sue over the creation of a deepfake, citing defamation, emotional distress, or intellectual property violation. Legal consultation is crucial for understanding your rights and options.