A new bipartisan bill introduced in the Senate seeks to combat the misuse of artificial intelligence (AI) deepfakes by mandating the watermarking of such content.
The bill, presented by Senator Maria Cantwell (D-WA), Senator Marsha Blackburn (R-TN), and Senator Martin Heinrich (D-NM), proposes a standardized method for watermarking content generated by AI.
Dubbed the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED), the bill would bolster protections for creators and establish controls over the types of content on which AI models can be trained.
According to Cantwell, the bill will offer the “much-needed transparency” into AI-generated content while putting “creators, including local journalists, artists, and musicians, back in control of their content.”
If passed, the bill would also require AI service providers like OpenAI to have users embed information about the origin of the content they generate. This provenance information must be “machine-readable” and implemented in a way that cannot be bypassed or removed using AI-based tools.
The Federal Trade Commission (FTC) would oversee enforcement of the COPIED Act, treating violations as unfair or deceptive acts, similar to other breaches under the FTC Act.
Since the advent of AI, there has been much debate about its ethical implications, given the technology’s ability to scrape huge volumes of data from across the web.
These concerns were evident when tech giant Microsoft stepped back from its observer seat on OpenAI’s board.
“Artificial intelligence has given bad actors the ability to create deepfakes of every individual, including those in the creative community, to imitate their likeness without their consent and profit off of counterfeit content,” said Senator Blackburn.
The proposed bill coincides with a 245% surge in frauds and scams involving deepfake content. A report from Bitget estimates that losses from these schemes will reach $10 billion by 2025.
Within the crypto space, scammers have been leveraging AI to impersonate prominent personalities like Elon Musk and Vitalik Buterin to dupe users.
In June 2024, a customer of the crypto exchange OKX lost over $2 million after attackers bypassed the platform’s security using deepfake videos of the victim. The month before, Hong Kong authorities cracked down on a scam platform that used Elon Musk’s likeness to mislead investors.
Meanwhile, tech behemoth Google was recently criticized by National Cybersecurity Center (NCC) founder Michael Marcotte for inadequate safeguards against crypto-targeted deepfakes.