“The use of AI has the potential to reduce costs and increase efficiencies; improve products, services and performance; strengthen risk management and controls; and expand access to credit and other bank services,” Hsu said in the speech.
Among the core challenges is alignment, he said: AI systems, typically built on neural networks, are not explicitly programmed like most software; they must be trained, and their outputs can be unpredictable.
“While this is part of their magic, it also creates a fundamental problem,” Hsu said in the speech. “[S]ince AI systems are built to ‘learn,’ they may or may not do what we want or behave consistent with our values.”
And the alignment issue, he added, creates a governance and accountability problem.
“The more an AI system learns, the further it gets from its initial programming,” he said in the speech. “This creates ‘opportunities for plausible deniability’ should things go wrong.”
“Banks and regulators must also grapple with generative AI’s capacity for enabling fraud and the spread of misinformation,” Hsu added, noting an uptick in several types of fraud, including synthetic identity and synthetic media fraud.
“The ability of AI agents to mimic human communication and the low cost of scaling AI agents increase opportunities for fraud,” he said in the speech. “The speed and sophistication of such developments warrant close monitoring and coordination.”
“We continue to see a hockey stick increase in digital identity information being compromised and used for synthetic identity fraud, account takeover fraud and other types of digital identity abuse,” said Dietrich at the time.
With 42% of consumers saying they want to confirm their identity every time they pay for a good or service, AI-powered, enterprise-level digital identity verification has become crucial for helping companies guard against today's variants of digital fraud.