
By: Devin Coldewey (TechCrunch)
AI is still a work in progress, and users should be wary of its tendency to confidently state false information. It also appears that some languages are more susceptible to this problem than others.
NewsGuard, a misinformation watchdog group, recently investigated why and found that ChatGPT produces more inaccurate information in Chinese dialects than in English.
The group tested the model by prompting it to write news articles about various false claims attributed to the Chinese government, such as the notion that the Hong Kong protests were orchestrated by U.S.-associated agents provocateurs.
When asked to write in English, ChatGPT complied with only one of seven examples, generating an article that echoed the official Chinese government position that the mass detention of Uyghur people in the country is in fact a vocational and educational program…