Wall Street banking giants have reportedly begun warning investors about risks stemming from AI use.
As Bloomberg News reported Wednesday (March 12), those risks include artificial intelligence (AI) “hallucinations,” the use of the technology by cybercriminals and its effect on employee morale.
For example, JPMorgan said in a recent regulatory filing that AI could bring about “workforce displacement” that could hurt worker morale and retention while increasing competition for employees with the right technical background, the report said.
Bloomberg noted that while banks have pointed to AI risks in their annual reports in recent years, new concerns are emerging as the financial world embraces the technology. It’s a balancing act: keeping on top of the latest AI advancements to retain customers while also dealing with the threat of cybercrime.
“Having those right governing mechanisms in place to ensure that AI is being deployed in a way that’s safe, fair and secure — that simply cannot be overlooked,” Ben Shorten, finance, risk and compliance lead for banking and capital markets in North America at Accenture, said in an interview. “This is not a plug-and-play technology.”
The Bloomberg report adds that banks are at risk of using technologies that may be built on outdated, biased or inaccurate financial data sets.
Citigroup said that as it rolls out generative AI across the company, it faces the risk of analysts working with “ineffective, inadequate or faulty” results produced by the technology.
The underlying data could also be incomplete, biased or inaccurate, which “could negatively impact its reputation, customers, clients, businesses or results of operations and financial condition,” the bank said in its 2024 annual report.
PYMNTS wrote recently about the use of AI in cybercrime, noting that it contributed to a broader landscape of cyberattacks in 2024 that included ransomware, zero-day exploits and supply chain attacks.
“It is essentially an adversarial game; criminals are out to make money and the [business] community needs to curtail that activity. What’s different now is that both sides are armed with some really impressive technology,” Michael Shearer, chief solutions officer at Hawk, said in an interview with PYMNTS.
And last month, PYMNTS examined efforts by Amazon Web Services (AWS) to combat AI hallucinations using automated reasoning — a method rooted in centuries-old principles of logic.
The technique is a major leap in making AI outputs more reliable, which is particularly valuable for heavily regulated industries such as finance and health care, AWS Director of Product Management Mike Miller said in an interview.
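As a rough illustration of the general concept, and not a depiction of AWS’s system, an automated reasoning check encodes a business rule as formal logic and then tests whether a model’s statement can be true at the same time as that rule. The minimal sketch below uses the open-source Z3 solver; the lending policy, threshold and variable names are all hypothetical, chosen only to show the mechanics.

```python
# Minimal sketch of an automated-reasoning-style check (not AWS's implementation).
# Requires the z3-solver package: pip install z3-solver
from z3 import Bool, Real, Solver, Implies, Not, unsat

# Hypothetical policy rule: if the applicant's debt-to-income ratio exceeds
# 0.43, the application must not be auto-approved.
dti = Real("debt_to_income")
auto_approved = Bool("auto_approved")
policy = Implies(dti > 0.43, Not(auto_approved))

# Hypothetical facts extracted from a model's answer: it reported a 0.55 ratio
# and still claimed the application was auto-approved.
model_claim = [dti == 0.55, auto_approved]

solver = Solver()
solver.add(policy)
solver.add(*model_claim)

# If no assignment satisfies both the policy and the claim, the model's output
# contradicts the policy and should be flagged for review.
if solver.check() == unsat:
    print("Model output contradicts the policy: flag as a potential hallucination.")
else:
    print("Model output is consistent with the policy.")
```

In this toy example the solver finds the combination unsatisfiable, so the model’s claim would be flagged before reaching a customer; a consistent claim would pass the check.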
For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.