
By: Mardi Witzel & Niraj Bhargava (Centre for International Governance Innovation)
Let’s say you work for a bank that uses automated systems to make decisions about loan applications, hiring, or internal promotions. These systems include machine-learning tools designed according to a set of criteria, trained on historical data sets, then freed to do their mysterious work. Maybe you personally were passed over for a promotion.
Now, imagine that sometime later, you learn that the artificial intelligence (AI) making this decision was flawed. Perhaps the data used to train it was biased, or the model was poorly designed. Maybe the system “drifted,” as machine-learning models are known to do (drift happens when a model’s predictive power decays over time due to changes in the real world). It’s one thing to get turned down by a human you can challenge. But with AI there is far more grey area: it isn’t always possible to see how decisions are made.
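Drift of this kind is usually caught by deliberate monitoring rather than by accident. As a minimal illustration (not drawn from the article), here is a sketch of one widely used drift metric, the population stability index (PSI), which compares the distribution of a model input or score at training time against its distribution in production; the 0.2 threshold below is a common rule of thumb, not a standard:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline ("expected") sample and a newer ("actual")
    sample; values above ~0.2 are often read as significant drift."""
    # Shared bin edges so both samples are compared on the same grid
    edges = np.histogram_bin_edges(np.concatenate([expected, actual]), bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) and division by zero in sparse bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

In practice, a governance program would compute a metric like this on a schedule over live model inputs and outputs, and alert or trigger retraining when it crosses an agreed threshold.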
This truth underlies the widespread call for trustworthy AI — that is to say, for transparency, fairness and accountability in the development and use of AI solutions. Despite the great promise of these tools, the risk of negative outcomes is not far-fetched. AI bias is documented and real. This is why it’s time for organizations to get serious about taking concrete steps toward effective AI governance.
Indeed, there are hard costs to AI done badly — including fines, litigation and settlement charges. Unsurprisingly, legislation has been proposed in the European Union and Canada that would impose massive penalties for breaching the rules around AI development and use. Companies have already experienced the hard costs of data breaches: for example, Capital One was fined US$80 million for its 2019 data breach and settled customer lawsuits for US$190 million. AI-related infractions will be similarly costly. And beyond the hard costs, soft ones — such as business distraction, loss of confidence and reputational damage — have even greater potential to damage organizations that do AI badly…