The Senate building in Sacramento, California.

Deepfakes laws and proposals flood US

In a rare example of legislative haste, roughly one dozen state and federal bills were introduced in the past 12 months to regulate deepfakes, the relatively modern technology that some fear could upend democracy.

Though the federal proposals have yet to move forward, the state bills have found quick success at home. Already three states—California, Virginia, and Texas—have enacted deepfake laws, and legislation is pending in Massachusetts, New York, and Maryland, which introduced its own bill on January 16 this year.

The laws and pending legislation vary in scope, penalties, and focus.

The Virginia law amends current criminal law on revenge porn, making it a crime to, for instance, insert a woman’s digital likeness into a pornographic video without her consent. The Texas law, on the other hand, prohibits the use of deepfakes for election interference, like making a video that fraudulently shows a political candidate at a Neo-Nazi rally one month prior to an election.

A New York bill tackles an entirely different subject—how to treat a deceased person’s digital likeness, a reality that is coming to a screen near you (starring James Dean). And two state laws potentially address the rising threat of “cheapfakes,” low-tech digital frauds that require no artificial intelligence tools to make.

This legislative experimentation is expected for an emerging technology, said Matthew F. Ferraro, a senior associate at the law firm WilmerHale who advises clients on national security, cyber security, and crisis management.

“In some ways, this is [an example] of the laboratories of democracy,” Ferraro said, citing an idea popularized decades ago by Supreme Court Justice Louis Brandeis. “This is what people cheer about.”

But one category of deepfakes legislation has drawn more criticism than others—the kind that solely regulates potential election interference. Groups including the American Civil Liberties Union, the Electronic Frontier Foundation, and First Draft, which researches and combats disinformation, warn of threats to free speech.

Further, prioritizing political deepfakes legislation could, in effect, deprioritize the larger problem of deepfake pornography, which accounts for a whopping 96 percent of deepfake material online today, said Adam Dodge, founder of the nonprofit End Technology-Enabled Abuse, or EndTAB.

“I think it’s important that we address future harm, I just don’t want that to come at the expense of the people being harmed right now,” Dodge said. “We have four deepfakes laws on the books in the United States, and 50 percent of them don’t address 96 percent of the problem.”

Today, Malwarebytes provides a more detailed look at deepfakes legislation and laws in the United States, following our analysis last week of the country’s first-ever federal deepfake rules. Far beyond what that language requires, the following bills and laws call for civil and criminal penalties, and directly address concerns of both political disinformation and nonconsensual pornography.

Federal deepfakes legislation before Congress

At least four federal deepfakes bills are currently before lawmakers in Washington, DC, spanning both the US House of Representatives and the Senate. They are:

  • The Identifying Outputs of Generative Adversarial Networks (IOGAN) Act
  • The Deepfake Report Act of 2019
  • A Bill to Require the Secretary of Defense to Conduct a Study on Cyberexploitation of Members of the Armed Forces and Their Families and for Other Purposes
  • The Defending Each and Every Person from False Appearances by Keeping Exploitation Subject (DEEP FAKES) to Accountability Act

The bills largely resemble one another. The IOGAN Act, for example, would require the directors of both the National Science Foundation and the National Institute of Standards and Technology to submit reports to Congress about potential research opportunities with the country’s private sector in detecting deepfakes.

The Deepfake Report Act would require the Department of Homeland Security to submit a report to Congress about the technologies used to create and detect deepfakes. Senator Ben Sasse’s “cyberexploitation” bill would require the Secretary of Defense to study the potential vulnerabilities of US armed forces members to cyberexploitation, including “misappropriated images and videos as well as deep fakes.”

The DEEP FAKES Accountability Act, however, extends beyond reporting and research requirements. If passed, the bill would require anyone making a deepfake—be it image, audio, or video—to label the deepfake with a “watermark” that shows the deepfake’s fraudulence.

But Dodge said watermarks and labels would fail to help anyone whose likeness is used in a nonconsensual deepfake porn video.

“The reality is, when it comes to the battle against deepfakes, everybody is focused on detection, on debunking and unmasking a video as a deepfake,” Dodge said. “That doesn’t help women, because the people watching those videos don’t care that they’re fake.”

The DEEP FAKES Accountability Act would make it a crime to knowingly fail to provide that watermark, punishable by up to five years in prison. Further, the bill would impose a civil penalty of up to $150,000 for each purposeful failure to provide a watermark on a deepfake.

According to the Electronic Frontier Foundation, those are severe penalties for activities that the bill itself fails to fully define. For example, making a deepfake with the intent to “humiliate” someone would become a crime, but there is no clear definition of what that term means, or whether that humiliation would require harm. In the bill’s attempt to stop deceitful and malicious activity, the organization said, it may have reached too far.

“The [DEEP FAKES Accountability Act] underscores a key question that must be answered: from a legal and legislative perspective, what is the difference between a malicious ‘deepfakes’ video and satire, parody, or entertainment?” the organization wrote. “What lawmakers have discussed so far shows they do not know how to make these distinctions.”

Statewide, the concerns shift to whether deepfakes legislation will have its intended effect.

State deepfakes laws and legislation

Last summer, the warnings about the democratization of deepfakes technology became reality—a new app offered for free on Windows gave users the ability to remove clothes from uploaded photos of women. The app, called DeepNude, was first discovered by Motherboard. It shut down just hours after the outlet published its first piece.

Less than one week later, a new deepfake law in Virginia came into effect. The state’s lawmakers had passed it months earlier, in March.

Unique when compared to later state deepfake laws, Virginia’s law did not craft a new crime for deepfake creation and distribution, but instead expanded its current law on revenge porn to include deepfakes material. Now, in Virginia, anyone who shares or sells nude or sexual images and videos—including deepfakes—with the intent to “coerce, harass, or intimidate,” is guilty of a Class 1 misdemeanor.

Dodge said he appreciated Virginia’s approach.

“The Virginia law is interesting because it’s the only law that has taken the existing nonconsensual pornography criminal code section and amended it to include deepfakes pornography,” Dodge said, “and I like that.”

Shortly after Virginia enacted its deepfake law, Texas followed, passing a law that instead focused on election interference. According to the law, the act of creating and sharing a deepfake video within 30 days of an election with the intent to “injure a candidate or influence the result of an election” is now a Class A misdemeanor. The law’s definition of a deepfake is broad: a video “created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality.”

The law has already received a high-profile use-case: Houston Mayor Sylvester Turner asked the district attorney to investigate his opponent’s campaign for making a television ad that showed edited photos of the mayor, along with an allegedly fake text he sent.

In October, California followed both Virginia and Texas, passing two laws—one to prohibit nonconsensual deepfake pornography, and the other to prohibit deepfakes used to impact the outcome of an upcoming election.

The bills’ author—Assembly Member Marc Berman—said he wrote the latter bill after someone created and shared an altered video of Nancy Pelosi, appearing to show her as impaired or drunk. But the video was far from a deepfake. Instead, its creator simply took footage of the Speaker of the House of Representatives and slowed it down, making what is now referred to as a “cheapfake.”

Ferraro said that trying to pass legislation to prevent cheapfakes will be difficult, though.

“It’s going to be very hard to write a bill that captures all of those so-called cheapfakes, because the regular editing of videos could fall under a definition that is too broad,” Ferraro said, explaining that standard, everyday broadcast interviews incorporate countless edits that might change the overall impression of the interview to audiences, even when the edits are done for non-malicious reasons, like cutting away from a political candidate giving a speech to show their audience.

“That’s the sort of problem of the cheapfake: Simple editing can give vastly different impressions, based on the content,” Ferraro said.

As California, Texas, and Virginia work out the enforcement of their laws, Maryland, New York, and Massachusetts are considering their own approaches to legislating deepfakes.

On January 16, Maryland introduced a bill targeting political influence deepfakes. The bill, which has a scheduled hearing in early February, prohibits individuals from “willfully or knowingly influencing or attempting to influence a voter’s decision to go to the polls or to cause a vote for a particular candidate by publishing, distributing, or disseminating a deepfake online within 90 days of an election.”

The Massachusetts deepfake legislation would criminalize the use of deepfakes in conduct that is already “criminal or tortious,” in effect making it illegal to use a deepfake in the commission of another crime. So, committing fraud? That’s a crime. But deploying a deepfake to aid in committing that fraud? That would also be a crime.

Finally, in New York, state lawmakers are trying to legislate a different aspect of deepfakes and digital recreations—the rights to an individual’s digital likeness. The bill was introduced last year, expired, and was then re-introduced. It would protect a person’s digital likeness 40 years after their death. The bill would also create a registry for surviving family members to record their control of a deceased relative’s likeness.

The Screen Actors Guild-American Federation of Television and Radio Artists supported the bill.

“The state’s robust performance community should not have to endure years of costly litigation to protect their basic livelihood and artistic legacy,” the group said.

Major motion picture studios, including Disney, Warner Bros., and NBCUniversal, opposed the bill. Though Disney’s filmmakers said they received approval from the estate of the late actor Peter Cushing to use his likeness in the 2016 film Rogue One: A Star Wars Story, it’s not hard to see why required approval for future projects would prove an obstacle for Hollywood.

What next?

The opposition to deepfakes laws is clear: Such laws could be overbroad, uninformed, and, in their attempt to regulate one problem, actually trample on the protected rights of Americans.

The bigger question is, has the opposition been successful? In a word, no.

Texas passed its election interference deepfake law with no recorded opposition votes in either the House or the Senate (though two House members were a “no vote” and four abstained). California similarly passed its election interference deepfake law in a 67–4 vote in the Assembly and a 29–7 vote in the Senate. After the vote, the ACLU of California wrote to the California governor, asking for a veto. It didn’t work.  

In Washington, DC, though, the situation could be different, since new federal rules on deepfakes research were approved last month. Those rules require the Director of National Intelligence to submit a report to Congress within 180 days about deepfakes capabilities across the world and possible countermeasures in the US. Until that report is submitted, Senators and Representatives might have little appetite to move forward.

Much like the statewide sweep of data privacy laws last year, the future of deepfake laws depends on political will, popularity, and whether lawmakers even have time to draft and pass such legislation. It is, after all, an election year.


David Ruiz

Pro-privacy, pro-security writer. Former journalist turned advocate turned cybersecurity defender. Still a little bit of each. Failing book club member.