US Politicians Advocate for AI Legislation Against Deepfake Images After Taylor Swift Incident

January 30, 2024

In a swift response to the widespread dissemination of explicit deepfake photos featuring Taylor Swift, US politicians are calling for new legislation to criminalize the creation and sharing of such deceptive content. The fabricated images of the pop sensation garnered millions of views on social media platforms, including X and Telegram.

US Representative Joe Morelle expressed his dismay at the spread of these manipulated images, deeming it “appalling.” He emphasized the urgent need for legal measures to address the issue, stating, “The images and videos can cause irrevocable emotional, financial, and reputational harm – and unfortunately, women are disproportionately impacted.”

Social media platform X issued a statement, noting that it is actively removing the deepfake images and taking appropriate actions against the accounts involved in their dissemination. The platform assured users that it is closely monitoring the situation to promptly address any further violations and ensure the removal of such content.

Despite efforts to take down the images, one particular photo of Taylor Swift had reportedly been viewed 47 million times before being removed. As a preventive measure, X has made the term “Taylor Swift” unsearchable, along with related terms like “Taylor Swift AI” and “Taylor AI.”

Deepfakes use artificial intelligence to manipulate faces or bodies in images and videos, and they have risen sharply: doctored images have increased 550% since 2019, according to a 2023 study. There are currently no federal laws in the United States against the creation or sharing of deepfake images, though some states have taken steps to address the issue.

Morelle, a Democrat, introduced the Preventing Deepfakes of Intimate Images Act last year, which would make it illegal to share deepfake pornography without consent, and he urged immediate action on the bill. He again stressed the disproportionate impact on women: 99% of deepfake content targets women, according to the State of Deepfakes study.

In the UK, sharing deepfake pornography was made illegal in 2023 under the Online Safety Act. Concerns about AI-generated content have escalated globally, particularly around ongoing elections, as illustrated by a recent investigation into a fake robocall impersonating US President Joe Biden that is suspected to have been generated by AI. Swift's team is reportedly considering legal action against the site that published the AI-generated images.

Source: BBC