
In a major move to regulate the booming Artificial Intelligence (AI) industry, seven of the top tech companies have made voluntary commitments to the White House, with a series of measures aimed at ensuring AI systems that are secure, safe and transparent.
According to Reuters, representatives from the companies, including OpenAI, Alphabet, Meta Platforms and Microsoft, recently concluded meetings with the Biden administration to hammer out a set of AI safeguards.
The move has drawn praise from President Joe Biden, who said in a statement: "We must be clear-eyed and vigilant about the threats emerging technologies can pose. Social media has shown us the harm that powerful technology can do without the right safeguards in place. These commitments are a promising step, but we have a lot more work to do together."
The sweeping agreements, centering on safety, security and trust, include the companies investing in cybersecurity, external and internal testing of AI systems before their release, and developing a system to ‘watermark’ all AI-generated content.
The companies have also pledged to focus on protecting users’ privacy and ensuring that the technology is free from bias and not used to discriminate against vulnerable groups.
Read more: UN Security Council Holds First Formal Discussion On Artificial Intelligence
These voluntary commitments are especially important given that the US lags behind the European Union (EU) in tackling AI regulation. The EU has recently agreed on a set of draft rules that would require AI systems to disclose AI-generated content and help distinguish deep-fake images from real ones. Microsoft president Brad Smith believes his company's commitment goes beyond the White House pledge, including support for creating a 'licensing regime for highly capable models.'
Meanwhile, OpenAI chief executive Sam Altman recently concluded a global tour in which he visited multiple countries to discuss the need for responsible AI. He noted that submitting to 'red team' tests that probe their AI systems, while voluntary, is not an easy promise for the companies to make.
Apple CEO Tim Cook has also weighed in on the potential and dangers of AI, emphasizing the need both for regulation and for companies to make their own ethical decisions about the technology.
Not surprisingly, the move has been met with some pushback. Critics such as Amba Kak, executive director of the AI Now Institute, have voiced concern that 'closed-door' deliberation with corporate actors is not enough to ensure meaningful safety and transparency for users.
AI technology is driving massive change in the world today, bringing with it both promise and peril. There is no doubt that these measures taken by the industry giants are an important step forward in rising to the challenge of AI regulation. It is clear that the Biden-Harris Administration is taking AI very seriously and is determined to see these commitments through.
We can expect to see more tangible changes in the coming weeks, as the Administration continues to work on an executive order as well as bipartisan legislation to ensure a safe and just future for the technology.
Source: Reuters