US Politicians Advocate for AI Legislation Against Deepfake Images After Taylor Swift Incident

In a swift response to the widespread dissemination of explicit deepfake photos featuring Taylor Swift, US politicians are calling for new legislation to criminalize the creation and sharing of such deceptive content. The fabricated images of the pop sensation garnered millions of views on social media platforms, including X and Telegram.
US Representative Joe Morelle called the spread of the manipulated images “appalling” and stressed the urgent need for legal measures to address it: “The images and videos can cause irrevocable emotional, financial, and reputational harm – and unfortunately, women are disproportionately impacted.”
Social media platform X said it is actively removing the deepfake images, taking action against the accounts that shared them, and monitoring the situation closely so that any further violations are promptly taken down.
Despite efforts to take down the images, one particular photo of Taylor Swift had reportedly been viewed 47 million times before being removed. As a preventive measure, X has made the term “Taylor Swift” unsearchable, along with related terms like “Taylor Swift AI” and “Taylor AI.”
Deepfakes, which use artificial intelligence to manipulate faces or bodies in videos, have seen a significant rise, with a 550% increase in doctored images since 2019, according to a 2023 study. Currently, there are no federal laws in the United States against the creation or sharing of deepfake images, but some states have taken steps to address the issue.
Morelle, a Democrat, last year proposed the Preventing Deepfakes of Intimate Images Act, which would make it illegal to share deepfake pornography without consent, and he urged immediate action on the bill. He pointed to the disproportionate impact on women: the State of Deepfakes study found that 99% of deepfake content targets women.
In the UK, sharing deepfake pornography was made illegal in 2023 under the Online Safety Act. Concerns about AI-generated content have escalated globally, particularly ahead of upcoming elections, as illustrated by a recent investigation into a fake, apparently AI-generated robocall impersonating US President Joe Biden. Swift’s team is reportedly considering legal action against the site that published the AI-generated images.
Source: BBC