The European Commission has firmly rejected Meta CEO Mark Zuckerberg’s recent criticism of the European Union’s Digital Services Act (DSA), clarifying that the bloc’s rules are aimed at removing illegal content rather than censoring lawful speech.
Zuckerberg made the remarks about the DSA after Meta announced the end of its fact-checking programs in the United States, replacing them with a community-driven moderation system. According to Reuters, the Meta chief argued that Europe’s digital policies are increasingly restrictive, stating, “Europe has an ever increasing number of laws institutionalising censorship and making it difficult to build anything innovative there.”
The EU executive was quick to push back against these claims, emphasizing that the DSA does not require platforms to suppress lawful content. Instead, the legislation mandates that large platforms remove illegal content and protect users, particularly children, from harmful material.
“We absolutely refute any claims of censorship,” a spokesperson for the European Commission stated, per Reuters. The spokesperson further explained that the DSA allows platforms some flexibility in choosing their moderation methods, as long as they meet the EU’s standards for effectiveness.
Following Meta’s decision to drop its U.S. fact-checking efforts, Zuckerberg indicated that the company would implement a “community notes” feature across Facebook, Instagram, and Threads. This system, modeled after the approach used by X (formerly Twitter), enables users to flag potentially misleading posts, with notes becoming public if they are rated as helpful by a diverse group of contributors.
While this approach has gained traction in the U.S., the European Commission noted that such a system would need to undergo a risk assessment before being deployed in the EU. According to Reuters, the Commission clarified that it does not dictate specific moderation tools but expects platforms to ensure their chosen methods are effective in preventing the spread of harmful content.
“Whatever model a platform chooses needs to be effective,” the EU spokesperson added. “So we are checking the effectiveness of the measures or content moderation policies adopted and implemented by platforms here in the EU.”
Source: Reuters