Microsoft Expands AI Efforts With Security Copilot


Microsoft is continuing its artificial intelligence efforts with the debut of its Security Copilot chatbot.

Announced Tuesday (March 28) on the company blog, the chatbot is designed to give businesses a “much-needed tool to quickly detect and respond to threats and better understand the threat landscape overall.”

Writing on the blog, Microsoft Security Vice President Vasu Jakkal said the chatbot — an outgrowth of the company’s work with OpenAI and its GPT-4 large language model — gives corporate security teams an edge in monitoring threats from cyberattackers.

“The volume and velocity of attacks requires us to continually create new technologies that can tip the scales in favor of defenders,” Jakkal wrote, noting there’s a global shortage of security professionals, with 3.4 million job openings.

“Security professionals are scarce, and we must empower them to disrupt attackers’ traditional advantages and drive innovation for their organizations.”

The tool combines “this advanced large language model (LLM) with a security-specific model from Microsoft,” incorporating “a growing set of security-specific skills” and is informed by Microsoft’s “global threat intelligence and more than 65 trillion daily signals,” Jakkal wrote.

The chatbot isn’t perfect, she acknowledged. But because it’s a closed-loop learning system, it continually learns from user interactions and feedback, allowing Microsoft to refine its responses and deliver more useful answers over time.

As PYMNTS wrote last week, the efforts of tech giants like Microsoft and Apple to compete for the AI market have been hindered somewhat as “certain flaws in the application of their large language model (LLM) trained chatbots are increasingly rearing their embedded heads.”

For example, OpenAI last week was forced to briefly take ChatGPT offline after a bug exposed users’ chat histories, a grievous data-privacy error.

And Google’s debut of its Bard chatbot washed away close to $100 billion in shareholder value and sent the company’s stock down 8% after the AI solution gave an incorrect answer during its first-look presentation.

It was a moment that underscored “the inherent unreliability of certain LLMs trained on datasets that themselves contain potentially misleading or incorrect information,” PYMNTS wrote.

Nonetheless, Microsoft, Google and Amazon all apparently see generative AI as a driver for their cloud-computing businesses, making the technology a star player in their sales pitches.

And this uptick in focus on AI has apparently gotten the attention of the Federal Trade Commission, which is working to make sure the AI sector isn’t monopolized by Big Tech and that the companies aren’t overstating claims about the technology’s capabilities.