
Elon Musk’s social media company, X, has taken legal action against the state of Minnesota, arguing that a new law banning the use of AI-generated “deepfakes” in election contexts infringes on constitutional rights. The lawsuit, filed Wednesday in federal court, claims the legislation violates both federal and state free speech protections, per Reuters.
The Minnesota statute prohibits the use of AI-altered images, audio, or video—commonly referred to as deepfakes—intended to deceive voters in political campaigns. X contends that this regulation imposes unjust burdens on digital platforms, replacing their discretion over user content with the authority of the state, and exposing them to potential criminal liability if they fail to correctly interpret or moderate such material. According to Reuters, the company argues this framework could suppress a wide range of legitimate political expression.
“This system will inevitably result in the censorship of wide swaths of valuable political speech and commentary,” the lawsuit states.
Musk, who has repeatedly emphasized his stance as a “free speech absolutist,” dismantled many of Twitter’s previous content moderation policies after acquiring the platform in 2022 and rebranding it as X in 2023. The lawsuit marks a continued resistance to governmental intervention in digital speech, particularly when it comes to regulating politically sensitive content.
Minnesota Attorney General Keith Ellison, named as the defendant in the case, has not yet issued a public response to the lawsuit, Reuters reported.
The contested law is part of a broader movement across the U.S. to address the growing influence of AI in political discourse. According to data collected by Public Citizen and cited by Reuters, at least 22 states have enacted legislation targeting the use of deepfakes in electoral processes, citing risks that manipulated media could mislead voters and distort democratic outcomes.
In addition to claiming First Amendment violations, X’s lawsuit argues the law is unconstitutionally vague and conflicts with Section 230 of the Communications Decency Act—a federal statute that protects online platforms from liability for user-generated content. The company is seeking a permanent injunction to prevent Minnesota from enforcing the statute.
This isn’t the first legal challenge to Minnesota’s deepfake law. Earlier this year, Republican state legislator Mary Franson and conservative influencer Christopher Kohls filed a similar suit. While U.S. District Judge Laura Provinzino denied their request for a preliminary injunction in January, she did not rule on the underlying constitutional questions. That case is currently under appeal.
Source: Reuters