Report: Microsoft Reverses Temporary Block on Employee Access to ChatGPT

Microsoft reportedly blocked employee access to ChatGPT, the artificial intelligence (AI)-powered chatbot developed by OpenAI and backed by Microsoft, for about an hour on Thursday (Nov. 9).

The decision was made by the IT department due to security concerns, the Wall Street Journal (WSJ) reported Thursday, citing unnamed sources.

However, the move caught management off guard. The restriction was quickly reversed and access to ChatGPT was restored, according to the report.

Microsoft did not immediately reply to PYMNTS’ request for comment.

The temporary block was an error, the company told the WSJ. Microsoft encourages employees and customers to use Bing Chat Enterprise and ChatGPT Enterprise, enterprise versions that offer greater privacy and security measures than their consumer counterparts, ensuring that user data remains restricted to company devices and separate from other users’ data.

This incident sheds light on the growing apprehension surrounding AI services such as ChatGPT, leading several companies to impose restrictions on their use, according to the report. Apple, for instance, has limited the use of similar AI tools as it develops its own proprietary technology. JPMorgan Chase and Verizon have also blocked internal access to ChatGPT.

An internal Microsoft post related to the incident revealed that several other AI tools were also no longer available to employees, the report said. These include Bing Chat, which employs the same underlying technology as ChatGPT.

Microsoft has invested $13 billion in OpenAI, the creator of ChatGPT, per the report.

It was reported in February that JPMorgan Chase restricted its global staff’s use of ChatGPT due to compliance concerns around the use of third-party software. Because they deal with sensitive data and must comply with regulations, banks must “tread carefully” around technology like this, CNN reported at the time.

In March, a nonprofit group, the Center for AI and Digital Policy (CAIDP), filed a complaint with the Federal Trade Commission (FTC), asking it to investigate OpenAI and put a halt to its development of large language models for commercial purposes. The group said OpenAI’s GPT-4 did not satisfy the FTC’s requirements that the use of AI be “transparent, explainable, fair and empirically sound while fostering accountability.”