Keeping the Baby While Losing the Bathwater: AI’s Efficiencies and Concerns Collide


The generative artificial intelligence (AI) revolution is already generating efficiencies across industries.

But the rapid advance of the groundbreaking technology has also sparked concern from institutions ranging from the tech giants behind today’s large language model (LLM) platforms to the United Nations (UN) Security Council.

This, as a new report from Carnegie Mellon University and the Center for AI Safety, titled “Universal and Transferable Adversarial Attacks on Aligned Language Models” and published Thursday (July 27), reveals a variety of ways to circumvent the safety measures of all the major AI platforms, including Google’s Bard, OpenAI’s ChatGPT and Anthropic’s Claude.

“We demonstrate that it is in fact possible to automatically construct adversarial attacks on LLMs, specifically chosen sequences of characters that, when appended to a user query, will cause the system to obey user commands even if it produces harmful content,” the paper stated.

“Unlike traditional jailbreaks, these are built in an entirely automated fashion, allowing one to create a virtually unlimited number of such attacks,” the researchers added.

The researchers emphasized that there is no known way to systematically stop all attacks of this kind, meaning that preventing every misuse of AI models will be extraordinarily difficult.
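To make the pattern concrete, the sketch below mimics the shape of such an attack in miniature: a search loop that mutates a suffix and keeps whichever mutation best degrades a mock refusal check. The paper’s actual method (a gradient-guided token search the authors call Greedy Coordinate Gradient) is far more sophisticated; everything here, including the refusal_score stand-in, is hypothetical and illustrative only.

```python
import random
import string

# Toy stand-in for a model's refusal behavior. In the real attack, the
# score comes from the LLM itself, not from a hand-written function.
def refusal_score(prompt: str) -> float:
    # Pretend the "model" refuses less as the suffix grows more unusual.
    unusual = sum(1 for c in prompt if c in string.punctuation)
    return max(0.0, 1.0 - 0.05 * unusual)

def random_search_suffix(user_query: str, steps: int = 200) -> str:
    """Crude random search for a suffix that lowers the refusal score.

    Keeps only the overall shape of the attack: propose a mutation,
    keep it if the attack objective improves.
    """
    alphabet = string.ascii_letters + string.punctuation + " "
    suffix = ""
    best = refusal_score(user_query)
    for _ in range(steps):
        candidate = suffix + random.choice(alphabet)
        score = refusal_score(user_query + " " + candidate)
        if score < best:  # lower refusal score means the "attack" is progressing
            best, suffix = score, candidate
    return suffix

if __name__ == "__main__":
    query = "Explain how to do something the model should refuse."
    suffix = random_search_suffix(query)
    print(f"adversarial suffix candidate: {suffix!r}")
```

The takeaway is that the attacker never needs to hand-craft a jailbreak; an automated loop can churn out candidate suffixes indefinitely, which is why the researchers call the supply of such attacks virtually unlimited.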

But that doesn’t mean it’s time — just a handful of months after AI’s commercialization — to throw the baby out with the bathwater.

Enterprise organizations still have the most to gain by “unlocking” AI’s potential across their workflows and back-end processes.

That vulnerabilities exist within generative AI does not disqualify the technology from widespread adoption. Rather, it underscores the need, as with all enterprise software integrations, for careful and intelligent application of the technology’s game-changing capabilities.

Read more: Generative vs Predictive AI’s Role Across the Future of Payments

Accelerating Business Agility and Process Effectiveness

While adversarial attacks may be able to unlock and abuse AI’s vast stores of knowledge, firms can just as readily unlock the technology’s potential for good by applying it to any complex, data-heavy process they want to improve.

AI-led personalization can drive better performance for businesses, both in terms of top-line revenue and bottom-line results, Michael Affronti, SVP, GM of Commerce Cloud at Salesforce, told PYMNTS.

That’s because generative AI platforms continuously learn, refine and optimize how they deliver their solutions, tuning each engagement against past behavioral trends and reinforcing the outputs that drove positive outcomes.
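One simple pattern that loosely captures this learn-and-refine loop is the multi-armed bandit, sketched below. It is purely illustrative, not Salesforce’s or any vendor’s actual personalization engine; the variant names and conversion rates are invented.

```python
import random

# Illustrative epsilon-greedy loop: serve the offer variant with the best
# observed outcome most of the time, and explore occasionally.
variants = ["offer_a", "offer_b", "offer_c"]
wins = {v: 1.0 for v in variants}    # smoothed count of positive outcomes
trials = {v: 2.0 for v in variants}  # smoothed count of attempts
EPSILON = 0.1

def choose() -> str:
    if random.random() < EPSILON:
        return random.choice(variants)  # explore a random variant
    return max(variants, key=lambda v: wins[v] / trials[v])  # exploit the best

def record(variant: str, converted: bool) -> None:
    trials[variant] += 1
    if converted:
        wins[variant] += 1

# Simulate 1,000 engagements with made-up conversion rates per variant.
true_rates = {"offer_a": 0.02, "offer_b": 0.05, "offer_c": 0.03}
for _ in range(1000):
    v = choose()
    record(v, converted=random.random() < true_rates[v])

print("best-performing variant:", max(variants, key=lambda v: wins[v] / trials[v]))
```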

Still, as with the use and integration of any AI tool, secure data management protocols and robust workflows are necessary if AI is to be leveraged responsibly.

AI solutions have for years been helping payment firms with regulatory compliance around know your customer (KYC) and anti-money laundering (AML) controls, replacing manual reviews with cost-effective, automated decisioning processes that are increasingly auditable and do not sacrifice security for convenience.
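In practice, such decisioning often pairs hard compliance rules with a model-generated risk score and logs the reasons behind every decision, which is what makes the process auditable. The sketch below illustrates that shape; the thresholds, field names and country codes are hypothetical, not any particular vendor’s system.

```python
import json
import time
from dataclasses import dataclass

@dataclass
class Transaction:
    customer_id: str
    amount: float
    country: str
    risk_score: float  # assumed to come from an upstream ML model, 0.0 to 1.0

# Hypothetical thresholds; a real AML program tunes these to its risk policy.
HIGH_RISK_COUNTRIES = {"XX", "YY"}
SCORE_THRESHOLD = 0.8
AMOUNT_THRESHOLD = 10_000.0

def decide(txn: Transaction) -> str:
    """Combine hard rules with a model score, and log every decision."""
    reasons = []
    if txn.country in HIGH_RISK_COUNTRIES:
        reasons.append("high-risk jurisdiction")
    if txn.amount >= AMOUNT_THRESHOLD:
        reasons.append("amount over reporting threshold")
    if txn.risk_score >= SCORE_THRESHOLD:
        reasons.append(f"model risk score {txn.risk_score:.2f}")

    decision = "escalate_for_review" if reasons else "approve"
    # An append-only decision log, with reasons, is what makes this auditable.
    print(json.dumps({
        "ts": time.time(),
        "customer": txn.customer_id,
        "decision": decision,
        "reasons": reasons,
    }))
    return decision

decide(Transaction("c-123", 12_500.0, "XX", 0.91))
```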

The use of AI has been “really big” for fraud prevention and automating authorizations, Andrew Gleiser, chief revenue officer at payments provider Aeropay, told PYMNTS.

But just as data provenance and privacy should not be overlooked when deploying AI solutions, neither should the emerging technical avenues for misuse and abuse.

Read also: What’s Missing from America’s AI Safety Pledge? Any Mention of EU

The Potential for Abuse Is Nothing New

As the research paper notes, “analogous adversarial attacks have proven to be a very difficult problem to address in computer vision for the past 10 years. It is possible that the very nature of deep learning models makes such threats inevitable.”

But the abuse of advanced technical systems is nothing new for sophisticated organizations well aware that for every 12-foot wall they build, attackers are hard at work constructing a 13-foot ladder.

Importantly, the research means that firms looking to leverage LLMs to build AI-powered customer service assistants need to be aware of the models’ potential to be hijacked for malicious intent.
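A common mitigation is defense in depth: screen the input before it reaches the model, then screen the output before it reaches the customer. The sketch below shows that wrapper pattern; call_model and the pattern list are hypothetical placeholders, and as the research above suggests, a simple pattern filter on its own is exactly the kind of single-layer defense adversarial suffixes are built to slip past.

```python
import re

# Hypothetical screening patterns; a real deployment would use a dedicated
# moderation model rather than a keyword list.
BLOCKED_PATTERNS = [re.compile(p, re.I) for p in (r"ignore (all|previous) instructions",)]

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return f"(model reply to: {prompt[:40]}...)"

def moderate(text: str) -> bool:
    """Return True if the text passes screening."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)

def assistant_reply(user_message: str) -> str:
    # Layer 1: screen the input before it ever reaches the model.
    if not moderate(user_message):
        return "Sorry, I can't help with that."
    reply = call_model(user_message)
    # Layer 2: screen the output before it reaches the customer.
    if not moderate(reply):
        return "Sorry, I can't help with that."
    return reply

print(assistant_reply("What's the status of my order?"))
```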

Research in “Payments Security Amid Uncertainty: Fighting Fraud And Crime With Digital Innovation Playbook,” a PYMNTS collaboration with Citi, details how firms can pinpoint vulnerabilities and strengthen security to better position themselves.

For organizations looking to use generative AI for internal purposes, like surfacing previously shelved data, jailbreaking of the model in use is unlikely to be a pressing concern.

Still, the degree and seriousness of AI’s inherent vulnerabilities may help to inform any government legislation designed to control the innovative new systems.

For now, as businesses continue to seek ways to drive efficiency and improve their bottom line, AI technology offers a promising solution — and as with any solution, it’s important to be careful and intentional when applying it.