Businesses Need to Chart Own Policy Path on AI Use

Generative artificial intelligence (AI) has changed the game so fast that companies and governments alike are trying to play catch-up.

That is because the innovation is one of the first technologies capable of violating nearly every one of an organization’s internal corporate policies in one fell swoop, while governments see in generative AI an “enormous potential for good and evil at scale” that is “deeply alarming.”

This, as U.S. Senate Majority Leader Chuck Schumer will reportedly host a series of AI insight forums with tech leaders, civil rights and labor groups, and representatives from the creative community.

The first meeting, which is scheduled to take place in two weeks, on Sept. 13, will include Meta Platforms CEO Mark Zuckerberg, Google CEO Sundar Pichai, NVIDIA CEO Jensen Huang, OpenAI CEO Sam Altman, X owner Elon Musk, Microsoft Co-founder Bill Gates, Microsoft CEO Satya Nadella and former Google CEO Eric Schmidt.

It comes as China remains, so far, the only major global market economy to have established a policy framework regulating the use of generative AI.

In the Western world, the European Union (EU) has traditionally played the leading role in shaping the regulatory discussion around tech’s biggest innovations, including generative AI.

EU regulation typically builds on the rights-driven approach established by the bloc’s existing digital legislation.

The U.S., for its part, has not passed any major federal legislation aimed at the tech sector in decades, preferring to commercialize rather than police innovations like generative AI.

As businesses work out how to establish their own corporate generative AI policies, they will look to the details of how Beijing, Brussels, and Washington are regulating the technology for clues to inform their own approach.

Read also: From PopeGPT to the Pentagon: All Eyes on Gen AI Oversight

Putting a Stake in the Ground 

The proliferation of data-driven business ecosystems, powered by rapid advances in computing power and cloud storage infrastructure, has helped push generative AI to the forefront of both organizations’ growth plans and their legal departments’ internal memos.

PYMNTS research has found that many companies are unsure of where they stand on generative AI, but they still feel a pressing need to adopt it.

Sixty-two percent of surveyed executives do not believe their companies have the expertise to employ the technology effectively, according to “Understanding the Future of Generative AI,” a PYMNTS and AI-ID collaboration.

Part of that uncertainty stems from generative AI’s far-reaching potential: there is no silver bullet for governing the technology’s use, and no obvious choice for which department or team should take the lead in developing and implementing compliance policies.

“I don’t think that we can expect any one single institution to have the kind of knowledge and capacity to address the varied problems [of AI regulation],” Cary Coglianese, founding director of the Penn Program on Regulation, told PYMNTS. “If there was an equivalent of a seat belt that we could require be installed with every AI tool, great. But there isn’t a one-size-fits-all action that can be applied [to regulating AI].”

“[Overseeing AI] relies on technical standards that will have to be developed to implement it,” Dr. Johann Laux told PYMNTS in a separate conversation this week. “If you want to audit AI systems, we need an audit industry to emerge.”

Read more: 10 Insiders on Generative AI’s Impact Across the Enterprise

Taking a Dynamic Approach to a Dynamic Technology

Topping the list of business concerns around integrating generative AI into sensitive and critical enterprise workflows is the ability to protect confidential business, customer and even personal user data.

That’s because information entered into a generative AI platform, or the foundational large language model (LLM) behind it, can later be incorporated into the data used to train future iterations of that same model and inadvertently surface publicly.
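One common mitigation, sketched below in deliberately simplified form, is to redact sensitive fields from prompts before they ever leave a company’s systems. The regex patterns here are hypothetical stand-ins for the dedicated PII-detection tooling a real deployment would rely on.

```python
import re

# Illustrative only: naive patterns for a few common PII types.
# Production systems typically use dedicated PII-detection services.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched PII with placeholder tokens before the prompt
    is sent to any third-party generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Draft a follow-up email to jane.doe@example.com about invoice 4417."
    print(redact(raw))
    # -> "Draft a follow-up email to [EMAIL REDACTED] about invoice 4417."
```

The design choice worth noting is that the scrubbing happens client-side, before transmission, so no vendor policy change can retroactively expose data that was never sent.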

This, as AI provider OpenAI on Monday (Aug. 28) unveiled ChatGPT Enterprise, a new generative AI tool designed explicitly to address these concerns by providing enterprise-grade security and privacy.

As PYMNTS reported, a standout feature of ChatGPT Enterprise is its commitment to data ownership and control: the product is purpose-built to ensure that businesses retain full ownership of their data, with no customer data used for training. ChatGPT Enterprise is also SOC 2 compliant, meeting rigorous security standards.

Still, inaccuracy and model hallucination remain major, and to date unsolved, concerns around integrating generative AI models into corporate processes.

Companies creating AI policies will need to address this baked-in risk by weighing costs against benefits and ensuring that a human always remains in the loop for verification and fact-checking, a pattern sketched below.
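As one illustrative way to make that requirement concrete, the sketch below gates model-generated text behind an explicit reviewer sign-off; the `generate_draft` function is a hypothetical stand-in for whatever model call a company actually uses.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    """Model-generated text that cannot be used downstream until a
    named human reviewer has signed off on it."""
    text: str
    approved: bool = False
    reviewer: Optional[str] = None

def generate_draft(prompt: str) -> Draft:
    # Stand-in for a real generative AI call; output is untrusted by default.
    return Draft(text=f"[model output for: {prompt}]")

def approve(draft: Draft, reviewer: str) -> Draft:
    # The human verification step: a reviewer fact-checks the draft
    # before flipping the approval flag.
    draft.approved = True
    draft.reviewer = reviewer
    return draft

def publish(draft: Draft) -> None:
    # Hard gate: unreviewed model output never reaches production.
    if not draft.approved:
        raise PermissionError("Human review is required before publication.")
    print(f"Published (reviewed by {draft.reviewer}): {draft.text}")

if __name__ == "__main__":
    draft = generate_draft("Summarize Q3 supplier payment trends.")
    publish(approve(draft, reviewer="compliance-team"))
```

The point of the hard gate is organizational rather than technical: the workflow, not employee discretion, is what enforces the policy.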