
US Senator Michael Bennet, a Democrat from Colorado, recently sent a letter to major technology and generative AI companies, calling on them to label AI-generated content and to limit the spread of fake or misleading material. Bennet cited several recent examples of AI-generated content causing alarm and market turbulence, and he underscored the importance of Americans knowing when AI is being used to shape political content.
“Fabricated images can derail stock markets, suppress voter turnout, and shake Americans’ confidence in the authenticity of campaign material,” Bennet said.
OpenAI CEO Sam Altman testified before the Senate Judiciary Committee, highlighting AI’s impact on the spread of false information. Bennet applauded the steps technology companies have taken to identify and label AI-generated content, but acknowledged that these measures are voluntary and easily bypassed.
“Americans should know when images or videos are the product of generative AI models, and platforms and developers have a responsibility to label such content properly,” Bennet wrote in the letter.
One U.S. lawmaker echoed Bennet’s sentiments, arguing that platforms ought to update their policies now that generative AI tools are widely available.
“We cannot expect users to dive into the metadata of every image in their feeds, nor should platforms force them to guess the authenticity of content shared by political candidates, parties, and their supporters,” the lawmaker said.
Meanwhile, other lawmakers, including Senate Majority Leader Chuck Schumer, have expressed interest in introducing legislation to regulate AI. Bennet has since introduced a bill that would require political ads to disclose whether AI was used in their production.
“Continued inaction endangers our democracy. Generative AI can support new creative endeavors and produce astonishing content, but these benefits cannot come at the cost of corrupting our shared reality,” Bennet said.
Bennet’s letter asked the executives what standards and requirements they use to identify AI-generated content, how those standards were developed and audited, and what consequences users face for violating them.
Twitter responded to a request for comment with a poop emoji, while Microsoft declined to comment and TikTok, OpenAI, Meta, and Alphabet did not respond immediately.
As AI-generated content becomes more prevalent and more easily misused, U.S. Senator Michael Bennet is pressing major technology and generative AI companies to act responsibly and promptly to protect public discourse and electoral integrity. Bennet’s letter and subsequent bill reflect a sense of urgency about the risks posed by artificial intelligence and its powerful implications for democracy.