Report: Nearly Half of HR Execs Want to Police ChatGPT


Human resource leaders want to regulate OpenAI’s ChatGPT, though those regulations could vary among companies.

That’s according to a Monday (March 20) report by Bloomberg News, which cites research from the consulting firm Gartner.

Gartner’s work found that 48% of all HR execs surveyed said they were in the process of coming up with policies on employees’ use of the artificial intelligence (AI) chatbot.

“They’re probably questioning how much guidance, which roles will potentially use it or will not be able to use it, and if they should completely ban it or not,” Gartner Senior Director Analyst Eser Rizaoglu told Bloomberg.

“A lot of leaders are working with IT, legal, compliance and auditing to understand: What are the risks, what are the potential impacts? And then how do we take an approach accordingly?”

The survey also found that nearly a third of HR leaders surveyed said they weren’t planning to police employees’ use of ChatGPT. Rizaoglu said that might be because the technology isn’t relevant to their company, or they think it’s just a fad.

Many organizations, however, see ChatGPT and AI as far more than a fad.

“ChatGPT is going to be in everything,” General Motors Vice President Scott Miller said recently, in reference to his company’s plans to use the chatbot in its vehicles.

And earlier this month, the U.S. Chamber of Commerce — which posits a world in which “virtually every” business and government agency will use the technology — called on policymakers to come up with rules to make sure AI develops responsibly and ethically.

“A failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies,” the Chamber said in the report, which warns that, as with “most disruptive technologies, these investments can both create and displace jobs.”

At the same time, the lobbying group acknowledges what it says is the promise of AI in promoting economic opportunity, increasing incomes and accelerating scientific research.

It’s why — as PYMNTS has noted — there has been an uptick in investments in the technology, such as Microsoft’s January decision to make a $10 billion investment in OpenAI, and Google’s $300 million collaboration with AI firm Anthropic (which is now raising its own $300 million funding round).

Yet concerns remain. A report Sunday (March 19), also by Bloomberg, argues that policymakers haven’t yet recognized the potential of AI to allow for things like mass surveillance or to endanger people.

“We need to regulate this, we need laws,” Janet Haven, executive director of Data & Society, a New York-based nonprofit research group, told Bloomberg. “The idea that tech companies get to build whatever they want and release it into the world and society scrambles to adjust and make way for that thing is backwards.”