AI’s Commercialization Puts Enterprise Security Under the Microscope

Every technological advancement — including generative artificial intelligence — brings with it its own, updated variety of adversarial threats.

“There are so many ways AI models can behave that they move beyond the binary of fraud and not fraud,” Kojin Oshiba, co-founder of end-to-end AI security platform Robust Intelligence, told PYMNTS during a conversation for the “AI Effect” series.

The generative nature of modern AI systems, relative to the predictive machine learning programs of yesteryear, makes today’s AI harder to control and could potentially lead to greater security vulnerabilities, Oshiba explained.

“There are infinite patterns that can come out of these models, and a lot of their applications are in real time,” he added. “… That represents a huge difference for risk management.”

AI is one of the only technologies that can run afoul of an organization’s entire list of compliance protocols in a single click.

Complicating — or compounding — matters is the fact that generative AI tools have become fully commercialized over the past year, allowing individuals without extensive knowledge of AI risks to use them and further emphasizing the need for rigorous security measures.

“Traditional ML was typically the realm of PhDs or well-trained data scientists, but everyone can start using generative AI just by signing up,” said Oshiba.

Read also: NIST Says Defending AI Systems From Cyberattacks ‘Hasn’t Been Solved’

The Growing Importance of AI Security in Payments and Finance

Because AI represents both a dynamic attack vector for bad actors and a key value-add opportunity for enterprises, firms must double down on technical readiness, data readiness and resource readiness when looking to deploy AI safely and responsibly.

Oshiba rated most companies as having a low readiness level right now, while noting that progress is being made, particularly as firms look to bridge the gap between internal and external use cases.

“If you think about a bank, they can have their own people summarize certain reports or write email responses, and that can be beneficial for automating and making internal processes efficient,” he said. “But if you think about end-users and supporting customer success, or helping with financial knowledge sharing — these are all use cases that can be very good, but there are a lot more risks that you need to overcome in order to responsibly do that.”

As more enterprises move from consideration to adoption and eventually deployment of AI, Oshiba identified two urgent themes regarding AI security: data leakage and supply chain risks.

The generative nature of AI allows for connections to diverse data sources, increasing the risk of unintended data leakage and underscoring the need to protect sensitive data and prevent data extraction attacks, he explained.

Additionally, just as with all loose-end vendor relationships, organizations may not have visibility into the safety and security measures implemented by AI vendors and open-source models, leaving them open to more security risks and points of failure or penetration.

“If you’re a bank, traditionally you’d have IT risk checks across your software vendor supply chain, but that’s harder to perform on AI vendors and what they are using behind the scenes,” Oshiba said. “So, there is risk there, when the AI models are essentially coming from unknown sources.”

He stressed the importance of ensuring a secure supply chain for AI applications.

Establishing End-to-end Security for AI Systems

Many companies rely on manual processes to probe and test AI models, which is time-consuming, often insufficient, and at odds with the advances in departmental efficiency promised by AI itself.

“There’s a difference that we see between cybersecurity and AI security,” Oshiba said. “CISOs know the different components of cybersecurity, like database security, network security, email security, etc., and for each, they are able to have a solution. But with AI, the components of AI security and what needs to be done for each isn’t widely known. The landscape of risks and solutions needed is unclear.”

He added that Robust Intelligence works closely with organizations like the U.S. National Institute of Standards and Technology (NIST) and the non-profit MITRE to aggregate knowledge and create AI safety standards that can be referenced by all organizations as they build more secure and trustworthy AI models.

Looking ahead, Oshiba emphasized the need for automation in AI security especially around testing, red teaming and protecting AI models in real time. He also highlights the need to scale AI security measures alongside the increasing deployment of AI applications.

Organizations must prioritize AI security, Oshiba explained, in part by bridging the gap between internal and external use cases, and work toward establishing standardized frameworks to ensure the safe and secure deployment of AI in the payments industry — one that first defines an approach, and then streamlines it.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.