Big Tech’s ‘Frontier Model Forum’ Tackles ‘Responsible AI’

Why Companies Must Take AI Implications Seriously

Four Big Tech companies have formed a group focused on responsible artificial intelligence (AI) development.

The Frontier Model Forum, made up of Google, Microsoft, OpenAI and Anthropic, “will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem,” Google said in a blog post Wednesday (July 26).

That means “advancing technical evaluations and benchmarks and developing a public library of solutions to support industry best practices and standards,” according to the post.

The announcement drew criticism from AI skeptics, who charge that it’s a way for the companies to avoid stronger government regulations, the Financial Times reported Wednesday.

Emily Bender, a University of Washington computational linguist and expert in large language models, argued, per the report, that focusing on the fear that “machines will come alive and take over” distracts people from “the actual problems we have to do with data theft, surveillance and putting everyone in the gig economy.”

“The regulation needs to come externally,” Bender added in the report. “It needs to be enacted by the government representing the people to constrain what these corporations can do.”

Last week, the four companies in question, along with Amazon, Meta and AI firm Inflection, signed a commitment with the White House to promote the safe, secure and transparent development of AI.

These commitments include measures designed to deepen understanding of the technology’s risks and ethical implications while guarding against misuse.

That means the companies agreed to test AI systems, both in-house and externally, before their release, and to share information about managing AI risks across the industry and with governments, academia and civil society.

The firms also pledged to invest in cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights.

But as PYMNTS wrote last week, observers pointed out that many of the practices the companies agreed to already existed at AI firms and don’t add up to new regulation. The pledge to self-regulate also drew criticism from consumer groups such as the Electronic Privacy Information Center (EPIC).