Since Taylor Swift’s AI-generated deepfakes went viral on X, Microsoft has fixed its generative AI tool and lawmakers have proposed new regulation.
Swift’s deepfakes were first shared on Telegram and were later picked up by an unknown user on X.
Faith can move mountains, and Taylor Swift, it turns out, can move tech companies and lawmakers to institute artificial intelligence (AI) protections. Generative AI image creation tools have often been used maliciously, but it took an uproar from the pop icon’s fans to spur companies and legislators into action.
After Swift’s pornographic AI-generated deepfakes, initially shared on Telegram, went viral on X (formerly Twitter):
- Microsoft rolled out an important update to Microsoft Designer that blocks users from generating nudity-driven, inappropriate images through misspelled prompts (a sketch of how such a filter might work follows this list).
- X moved to curb the spread of the images by suspending the account that first posted them, removing the content, and blocking related searches.
- Lawmakers introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act.
- The White House has spoken out against the dangers of the technology and encouraged Congress to act.
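Neither Microsoft nor OpenAI has detailed how Designer’s updated filter works under the hood. A common way to catch misspelled or lightly obfuscated prompt terms, though, is fuzzy matching against a blocklist. The sketch below is a minimal illustration of that idea, assuming a hypothetical `BLOCKED_TERMS` list and similarity threshold; it uses only Python’s standard-library `difflib`.

```python
import difflib

# Hypothetical blocklist for illustration -- Microsoft has not published
# the actual terms or logic its Designer filter uses.
BLOCKED_TERMS = {"nude", "naked", "explicit"}

def normalize(token: str) -> str:
    """Lowercase a token and drop non-letters often used to dodge filters."""
    return "".join(ch for ch in token.lower() if ch.isalpha())

def is_blocked(prompt: str, threshold: float = 0.8) -> bool:
    """Return True if any prompt token fuzzily matches a blocked term.

    SequenceMatcher scores similarity from 0.0 to 1.0, so misspellings
    like 'nakd' or 'n@ked' still land above the threshold.
    """
    for token in prompt.split():
        word = normalize(token)
        if not word:
            continue
        for term in BLOCKED_TERMS:
            if difflib.SequenceMatcher(None, word, term).ratio() >= threshold:
                return True
    return False

if __name__ == "__main__":
    print(is_blocked("a portrait of a cat"))  # False
    print(is_blocked("a n@ked celebrity"))    # True: 'n@ked' -> 'nked', close to 'naked'
```

A real production filter would be far more involved (multilingual term lists, classifier models, image-side checks on the output), but the principle is the same: the filter has to match intent, not exact strings.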
Deepfakes have been around for quite a while, but it helps the cause of instituting greater AI safety when the victim is TIME’s Person of the Year, backed by a famously devoted fanbase.
“While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual intimate imagery,” White House press secretary Karine Jean-Pierre said late last week, calling the incident alarming.
Even as social media companies are responsible for moderating abusive content, effort should also go toward preventing the creation of such content in the first place. Microsoft’s tightening of content filters in Designer, which derives its generative AI powers from OpenAI’s DALL-E 3, helps in this regard: the tool now declines to generate celebrity images.
“I think it behooves us to move fast on this,” Microsoft CEO Satya Nadella told NBC. “I go back to what I think’s our responsibility, which is all of the guardrails that we need to place around the technology so that there’s more safe content that’s being produced.”
“There’s a lot to be done and a lot being done there,” Nadella continued. “But it is about global, societal, you know, I’ll say convergence on certain norms, especially when you have law and law enforcement and tech platforms that can come together, I think we can govern a lot more than we give ourselves credit for.”
However, guardrails may prove ineffective, allowing the generation of explicit images that can be used for intimidation or harassment. For such cases, the DEFIANCE Act was introduced with bipartisan sponsors: Senate majority whip Dick Durbin (D-IL) and senators Amy Klobuchar (D-MN), Lindsey Graham (R-SC), and Josh Hawley (R-MO).
The legislation would expose creators of nonconsensual, sexualized or intimate images made with AI or other technological means to civil lawsuits over digital forgery, entitling victims to financial damages as relief.
It remains to be seen whether the DEFIANCE Act goes the way of other proposed legislation, such as the DEEPFAKES Accountability Act, the AI Disclosure Act of 2023, and the AI Labeling Act of 2023.