
Over 200 US Organizations Join Government's AI Safety Consortium

Advances in technology have a track record of productively transforming society.

Many technological innovations, from the sewing machine to the automobile and the elevator, have flourished under industry standard-setting and governmental oversight designed with ethical guidelines, transparency and responsible deployment in mind.

Frequently, however, these policy frameworks arrived much later, enacted only after a technology's impact had been assessed.

Seat belts, for example, weren’t mandatory equipment in cars until 1968.

But when it comes to artificial intelligence, governments around the world are increasingly looking to ensure responsible development and deployment of the technology on an accelerated timeline, as concerns grow around the innovation's far-reaching capabilities and the potential impact of its misuse across work, politics, daily life and beyond.

The United States announced Thursday (Feb. 8) a new consortium to support the safe development and deployment of generative AI, backed by over 200 organizations, including academic institutions, leading AI firms, nonprofits and other key players from within the burgeoning AI ecosystem.

The newly formed U.S. Artificial Intelligence Safety Institute Consortium (AISIC), created by the National Institute of Standards and Technology (NIST), is designed to fuel collaboration between industry and government to promote safe AI use, helping prepare the U.S. to address the capabilities of the next generation of AI systems with appropriate risk management strategies.

“AI is moving the world into very new territory,” Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio said in a statement. “And like every new technology, or every new application of technology, we need to know how to measure its capabilities, its limitations, its impacts. That is why NIST brings together these incredible collaborations of representatives from industry, academia, civil society and the government, all coming together to tackle challenges that are of national importance.”

At a press conference announcing the AISIC, Commerce Secretary Gina Raimondo emphasized that the work the safety institute is doing can’t be “done in a bubble separate from industry and what’s happening in the real world.”

See also: How AI Firms Plan to Build, Then Control, Superhuman Intelligence

AI Pioneers Continue to Lead the Charge

Among the more than 200 members of the AISIC, Adobe, OpenAI, Meta, Amazon, Palantir, Apple, Google, Anthropic, Salesforce, IBM, Boston Scientific, Databricks, Nvidia, Intel and many others represent the AI space, but they aren’t alone.

Financial institutions including Bank of America, JPMorgan Chase, Citigroup and Wells Fargo, as well as financial services firms including Mastercard, have also pledged their support for the safe and responsible development of the domestic AI industry.

“Progress and responsibility have to go hand in hand,” Meta President of Global Affairs Nick Clegg said in a statement. “Working together across industry, government and civil society is essential if we are to develop common standards around safe and trustworthy AI. We’re enthusiastic about being part of this consortium and working closely with the AI Safety Institute.”

Added IBM Chairman and CEO Arvind Krishna: “The new AI Safety Institute will play a critical role in ensuring that artificial intelligence made in the United States will be used responsibly and in ways people can trust. IBM is proud to support the institute through our AI technology and expertise, and we commend Secretary Raimondo and the administration for making responsible AI a national priority.”

Read also: NIST Says Defending AI Systems From Cyberattacks ‘Hasn’t Been Solved’

NIST has been pushed to the forefront of the U.S. government's approach to handling AI, having been charged by a White House executive order with developing domestic guidelines for the evaluation and red-teaming of AI models; facilitating the development of consensus-based standards; and providing testing environments for the evaluation of AI systems, among other duties.

PYMNTS Intelligence found that around 40% of executives believe there is an urgent necessity to adopt generative AI, and 84% of business leaders said they believe generative AI’s impact on the workforce will be positive.

“[AI] is the most likely general-purpose technology to lead to massive productivity growth,” Avi Goldfarb, Rotman Chair in Artificial Intelligence and Healthcare and a professor of marketing at the University of Toronto’s Rotman School of Management, told PYMNTS in an interview posted in December. “…The important thing to remember in all discussions around AI is that when we slow it down, we slow down the benefits of it, too.”

But the AISIC will have its work cut out for it. AI safety is a multipronged and many-headed beast.

“There’s a difference that we see between cybersecurity and AI security,” Kojin Oshiba, co-founder of end-to-end AI security platform Robust Intelligence, told PYMNTS in an interview posted in January. “CISOs know the different components of cybersecurity, like database security, network security, email security, etc., and for each, they are able to have a solution. But with AI, the components of AI security and what needs to be done for each isn’t widely known. The landscape of risks and solutions needed is unclear.”

By combining the efforts and perspectives of the 200-plus ecosystem players backing it, the AISIC can help create a more robust and responsible framework for the development and deployment of generative AI technologies.
