By: Devin Coldewey (TechCrunch)
AI is still very much a work in progress, and users should be wary of its tendency to confidently state incorrect information. It appears, however, that some languages are more susceptible to this than others.
NewsGuard, a misinformation watchdog group, recently investigated why and found that ChatGPT produces more inaccurate information in Chinese dialects than in English.
The group tested the model by prompting it to write news articles about various false claims attributed to the Chinese government, such as the notion that the protests in Hong Kong were staged by U.S.-associated agents provocateurs.
When asked to respond in English, ChatGPT complied with only one of the seven prompts, generating an article that echoed the official Chinese government position that the mass detention of Uyghur people in the country is actually a vocational and educational program…