By: Devin Coldewey (TechCrunch)
AI is still very much a work in progress, and it is wise to be wary of its tendency to confidently state false information. Some languages, however, appear more susceptible to this than others.
NewsGuard, a misinformation watchdog group, recently put this to the test and found that ChatGPT produces more inaccurate information in Chinese dialects than in English.
The group prompted the language model to write news articles about various false claims advanced by the Chinese government, such as the claim that the protests in Hong Kong were staged by U.S.-associated agents provocateurs.
When asked to write in English, ChatGPT complied in only one of seven attempts, generating an article that echoed the official Chinese government position that the mass detention of Uyghur people in the country is actually a vocational and educational program…