
By: Ronald Crelinsten (Center for International Governance Innovation)
As with any emerging technology, generative artificial intelligence (AI) models present a dual nature. They offer innovative solutions to existing problems and challenges, but they can also enable harmful activities such as revenge porn, sextortion, disinformation, discrimination, and violent extremism. Worries have grown about AI "going rogue" or being misused in inappropriate or unethical ways. As Marie Lamensch aptly notes, "generative AI creates images, text, audio, and video based on word prompts," thereby exerting a wide-ranging influence on digital content.
Stochastic Parrots
The term “stochastic parrot” was coined by Emily M. Bender, Timnit Gebru, and their colleagues to describe AI language models (LMs). They defined an LM as a system that haphazardly strings together sequences of linguistic forms observed in extensive training data, guided by probabilistic information on their combinations, yet devoid of any understanding of meaning. These models essentially mimic statistical patterns gleaned from large datasets rather than comprehending the language they process.
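The "probabilistic stitching" that Bender and her colleagues describe can be made concrete with a toy model. The sketch below is a hypothetical bigram Markov chain, vastly simpler than any modern LM, but it illustrates the core point under that simplification: each next word is sampled purely from observed word-pair frequencies in the training text, with no representation of meaning anywhere in the system.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record which words follow each word in the training text."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start_word, length=8, seed=0):
    """Emit a word sequence by repeatedly sampling a statistically
    plausible next word -- a 'parrot' with no notion of meaning."""
    random.seed(seed)
    word = start_word
    output = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:  # dead end: no observed continuation
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = ("the parrot repeats the phrase and the parrot repeats "
          "the pattern it has seen in the data")
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Every sentence such a model emits is locally fluent, because each adjacent word pair was seen in training, yet the model can only reproduce the statistical patterns of its corpus. This is also why the data-bias concerns raised below matter: whatever skew exists in the training text is exactly what gets parroted back.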
Bender and her team shed light on some detrimental consequences, pointing out that prevailing practices in acquiring, processing, and filtering training data inadvertently favor dominant viewpoints. By considering vast amounts of web text as universally representative, there’s a risk of perpetuating power imbalances and reinforcing inequality. Large LMs can generate copious amounts of coherent text on demand, allowing malicious actors to exploit this fluency and coherence to deceive individuals into perceiving the content as “truthful.” Lamensch contends that without appropriate filters and mitigation strategies, generative AI tools absorb and replicate flawed, sometimes unethical, data.
In specific terms, generative AI models are trained on datasets often rife with misogyny, racism, homophobia, and a male-centric perspective. A persistent gender gap exists in internet and digital tool usage, as well as in digital skills, with women less likely to engage with such tools or develop related skills, particularly in less developed countries. Women who do participate online become targets of sexualized online abuse more frequently than men. This is the darker facet of AI: generative AI models reinforce harmful stereotypes, mirroring the biases and ideologies embedded in their source material…