
NSA Warns of AI Cybersecurity Risks, Urges Businesses to Bolster Defenses


The National Security Agency (NSA) is sounding the alarm on the cybersecurity risks posed by artificial intelligence (AI) systems, releasing new guidance to help businesses protect their AI from hackers. 

As AI increasingly integrates into business operations, experts warn that these systems are particularly vulnerable to cyberattacks. The NSA’s Cybersecurity Information Sheet provides insights into AI’s unique security challenges and offers steps companies can take to harden their defenses. 

“AI brings unprecedented opportunity but also can present opportunities for malicious activity. NSA is uniquely positioned to provide cybersecurity guidance, AI expertise, and advanced threat analysis,” NSA Cybersecurity Director Dave Luber said Monday (April 15) in a news release.

Hardening Against Attacks

The report suggested that organizations using AI systems should put strong security measures in place to protect sensitive data and prevent misuse. Key measures include conducting ongoing compromise assessments, hardening the IT deployment environment, enforcing strict access controls, using robust logging and monitoring, and limiting access to model weights.
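The NSA's recommendations are operational rather than code-level, but two of them, limiting access to model weights and robust logging, can be illustrated in a few lines. The sketch below assumes a Unix-style deployment; the weights path and log file names are hypothetical placeholders, not anything prescribed by the agency.

```python
import getpass
import logging
import os
import stat

# Hypothetical path to a model's weight file; adjust for your own deployment.
WEIGHTS_PATH = "models/classifier/weights.bin"

logging.basicConfig(
    filename="ai_access.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def harden_weights(path: str) -> None:
    """Restrict the weights file to owner-only read access and log the change."""
    os.chmod(path, stat.S_IRUSR)  # 0o400: owner may read, nobody may write
    logging.info("Permissions on %s restricted to owner read-only", path)

def audited_load(path: str) -> bytes:
    """Load model weights while recording every access for later review."""
    logging.info("Weights at %s read by user %s", path, getpass.getuser())
    with open(path, "rb") as f:
        return f.read()

if __name__ == "__main__":
    harden_weights(WEIGHTS_PATH)
    weights = audited_load(WEIGHTS_PATH)
```

In a production system the same principles would be enforced through the platform's identity and secrets-management tooling rather than ad hoc scripts, but the idea is the same: weights are sensitive assets, and every read should leave a trail.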

“AI is vulnerable to hackers due to its complexity and the vast amounts of data it can process,” Jon Clay, vice president of threat intelligence at the cybersecurity company Trend Micro, told PYMNTS. “AI is software, and as such, vulnerabilities are likely to exist which can be exploited by adversaries.”

As reported by PYMNTS, AI is revolutionizing how security teams approach cyber threats by accelerating and streamlining their processes. Through its ability to analyze large datasets and identify complex patterns, AI automates the early stages of incident analysis, enabling security experts to start with a clear understanding of the situation and respond more quickly.

Cybercrime continues to rise with the increasing embrace of a connected global economy. According to an FBI report, the U.S. alone saw cyberattack losses exceed $10.3 billion in 2022. 

Why AI Is Vulnerable to Attacks

AI systems are particularly prone to attacks due to their dependency on data for training models, according to Clay. 

“Since AI and machine learning depend on providing and training data to build their models, compromising that data is an obvious way for bad actors to poison AI/ML systems,” Clay said.

He emphasized the risks of such attacks, explaining that they can lead to stolen confidential data, the insertion of harmful commands and biased results. These issues could erode user trust and even create legal exposure.
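Clay's point suggests one baseline defense against poisoning: verify that training data has not been silently altered before it reaches the model. A minimal sketch, assuming the organization recorded SHA-256 digests of its vetted dataset in a manifest (the manifest file name here is a hypothetical placeholder):

```python
import hashlib
import json
from pathlib import Path

# Hypothetical manifest mapping each training file to its expected SHA-256
# digest, recorded when the dataset was first vetted.
MANIFEST = Path("dataset_manifest.json")

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(manifest_path: Path) -> bool:
    """Return True only if every training file still matches its recorded digest."""
    expected = json.loads(manifest_path.read_text())
    for filename, known_hash in expected.items():
        if sha256_of(Path(filename)) != known_hash:
            print(f"Integrity check failed: {filename} has been modified")
            return False
    return True

if __name__ == "__main__":
    if not verify_dataset(MANIFEST):
        raise SystemExit("Refusing to train on tampered data")
```

A real pipeline would also control who can write to the dataset in the first place; hash checks only detect tampering after the fact.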

Clay also pointed out the challenges in detecting vulnerabilities in AI systems.

“It can be difficult to identify how they process inputs and make decisions, making vulnerabilities harder to detect,” he said.

He noted that hackers are seeking ways to bypass AI security controls and manipulate model outputs, and that such techniques are increasingly being discussed in underground online forums.

When asked about measures businesses can implement to enhance AI security, Clay emphasized the necessity of a proactive approach.

“It’s unrealistic to ban AI outright, but organizations need to be able to manage and regulate it,” he said. 

Clay recommended adopting zero-trust security models and using AI itself to strengthen defenses. In practice, that means AI can help analyze sentiment and tone in communications and inspect web pages to stop fraud. He also stressed the importance of strict access rules and multi-factor authentication to protect AI systems from unauthorized access.
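Zero trust is an architecture rather than a single control, but its core habit, authenticating every request instead of trusting anything on the network by default, is easy to sketch. The HMAC scheme and shared secret below are illustrative assumptions, not a design Clay prescribed; MFA would typically be enforced at the identity-provider layer, in front of a check like this.

```python
import hashlib
import hmac

# Hypothetical shared secret; in production it would come from a secrets
# manager and be rotated, never hard-coded.
API_SECRET = b"replace-with-managed-secret"

def run_model(payload: bytes) -> str:
    """Stub standing in for the actual model inference call."""
    return f"prediction for {len(payload)} bytes of input"

def verify_request(body: bytes, signature: str) -> bool:
    """Authenticate every single call; no request is trusted by default."""
    expected = hmac.new(API_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def handle_inference(body: bytes, signature: str) -> str:
    if not verify_request(body, signature):
        # Reject and surface the failure rather than silently serve the model.
        raise PermissionError("Rejected unauthenticated request to model endpoint")
    return run_model(body)

# Example: a caller must sign its payload with the shared secret.
payload = b'{"text": "is this message fraudulent?"}'
sig = hmac.new(API_SECRET, payload, hashlib.sha256).hexdigest()
print(handle_inference(payload, sig))
```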

As businesses embrace AI for enhanced efficiency and innovation, they also expose themselves to new vulnerabilities, Malcolm Harkins, chief security and trust officer at the cybersecurity firm HiddenLayer, told PYMNTS. 

“AI was the most vulnerable technology deployed in production systems because it was vulnerable at multiple levels,” Harkins added. 

Harkins advised businesses to take proactive measures, such as implementing purpose-built security solutions, regularly assessing AI models’ robustness, continuously monitoring deployed systems and developing comprehensive incident response plans.
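Harkins's recommendations are process-level, but the continuous-monitoring piece can be made concrete. One simple approach, offered here as an assumption rather than anything drawn from HiddenLayer's tooling, is to track a model's output statistics over time and alert on sharp deviations, which can signal drift or tampering:

```python
from collections import deque
from statistics import mean, stdev

# Rolling window of recent model confidence scores; the size is an
# illustrative choice, not a recommended value.
WINDOW = deque(maxlen=500)

def record_and_check(confidence: float, z_threshold: float = 3.0) -> bool:
    """Record a confidence score and flag it if it deviates sharply
    from the recent baseline. Returns True when an alert is raised."""
    WINDOW.append(confidence)
    if len(WINDOW) < 30:  # not enough history to judge yet
        return False
    baseline, spread = mean(WINDOW), stdev(WINDOW)
    if spread == 0:
        return False
    z = abs(confidence - baseline) / spread
    if z > z_threshold:
        print(f"ALERT: confidence {confidence:.2f} is {z:.1f} sigma from baseline")
        return True
    return False
```

Real deployments would feed alerts like this into the incident response plan Harkins describes, so a flagged anomaly triggers investigation rather than sitting unread in a log.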

“If real-time monitoring and protection were not in place, AI systems would surely be compromised, and the compromise would likely go unnoticed for extended periods, creating the potential for more extensive damage,” Harkins said.