U.S. Senate Passes Anti-Deepfake Law Targeting Non-Consensual Pornography
08/13/2024 05:56
Under the act, victims will be able to take action against “digital forgeries” that use their likeness in sexual content.
The U.S. Senate has passed an “anti-deepfake” law designed to protect individuals from the non-consensual use of their likeness in pornographic content.
The DEFIANCE (Disrupt Explicit Forged Images And Non-Consensual Edits) Act, introduced by Sen. Richard Durbin (D-IL) and passed by the Senate on Monday, seeks to “improve rights to relief for individuals affected by non-consensual activities,” according to the bill's text.
The bill now moves to the House of Representatives, where it awaits review before it can reach President Joe Biden’s desk and be signed into law.
Under the act, victims will be able to take action for up to ten years, double the standard statute of limitations, against “digital forgeries”: sexualized images created through software, machine learning, AI, or other computer-generated or technological means that falsely appear to be authentic.
It would allow identifiable individuals to recover liquidated damages of up to $250,000, actual damages, and litigation costs, while also enabling courts to grant punitive and equitable relief, such as injunctions. Additionally, it provides privacy protections for plaintiffs by allowing the use of pseudonyms and controlling the disclosure of sensitive information in court.
Action can be taken not only against the person who created the material but also against anyone who distributes or possesses it.
However, experts have warned that the bill addresses only the tip of the iceberg of the potential damage deepfakes can cause.
While the act applies only to sexualized images, Svetlana Sicular, VP analyst at Gartner, told Decrypt that the move should be the start, not the end, of efforts to criminalize such behavior.
“I see this law is focused on porn images, which is great to defend individuals,” Sicular said. “Our society should be protected on the individual, business, political, and geopolitical levels.”
Sicular also voiced concerns about political deepfakes “especially affecting the outcomes of democratic elections,” as well as deepfakes that undermine businesses, including impersonations used for embezzlement, stifling competition, and damaging reputations.
“Deepfakes are aggressively used by state actors like Russia to achieve advances in ‘hybrid warfare’ - a mix of kinetic, cyber, and disinformation attacks,” Sicular said. “We are entering a new world, where critical thinking and legal protection against AI harms are crucial.”
Gartner research indicates that 62% of CEOs and senior business executives think deepfakes will create at least some operating costs and complications for their organization in the next three years, and 5% consider them an existential threat.
For example, there was no pornography involved when cybercriminals used AI deepfakes to falsely pose as the CFO of engineering firm Arup and convince an employee to transfer $25 million out of the company’s bank accounts.
Meanwhile, Politico has dubbed this the era of the “AI election” due to the prevalence of deepfakes, distrust, and disinformation. Already, the Republican and Democratic parties are waging information wars over the size of crowds at their respective campaign events.
Further bills before Congress aim to address deepfakes more holistically, such as H.R. 5586, which targets the use of deepfake technology to produce “any advanced technological false personation record that contains a moving visual element.” That bill, however, has not progressed since September 2023.
Edited by Sebastian Sinclair