
The European Commission has firmly rejected Meta CEO Mark Zuckerberg’s recent criticism of the European Union’s Digital Services Act (DSA), clarifying that the bloc’s rules are focused on removing illegal content rather than censoring lawful speech.
Zuckerberg made the remarks about the DSA after Meta announced the end of its fact-checking programs in the United States, replacing them with a community-driven moderation system. According to Reuters, the Meta chief argued that Europe’s digital policies are increasingly restrictive, stating, “Europe has an ever increasing number of laws institutionalising censorship and making it difficult to build anything innovative there.”
The EU executive was quick to push back against these claims, emphasizing that the DSA does not require platforms to suppress lawful content. Instead, the legislation mandates that large platforms take down illegal content and protect users, particularly children, from harmful material.
“We absolutely refute any claims of censorship,” a spokesperson for the European Commission stated, per Reuters. The spokesperson further explained that the DSA allows platforms some flexibility in choosing their moderation methods, as long as they meet the EU’s standards for effectiveness.
Following Meta’s decision to drop its U.S. fact-checking efforts, Zuckerberg indicated that the company would implement a “community notes” feature across Facebook, Instagram, and Threads. This system, modeled after the approach used by X (formerly Twitter), enables users to flag potentially misleading posts, with notes becoming public if they are rated as helpful by a diverse group of contributors.
While this approach has gained traction in the U.S., the European Commission noted that such a system would need to undergo a risk assessment before being deployed in the EU. According to Reuters, the Commission clarified that it does not dictate specific moderation tools but expects platforms to ensure their chosen methods are effective in preventing the spread of harmful content.
“Whatever model a platform chooses needs to be effective,” the EU spokesperson added. “So we are checking the effectiveness of the measures or content moderation policies adopted and implemented by platforms here in the EU.”
Source: Reuters