As the Financial Times (FT) reported Sunday (May 11), this launch is happening as the insurance industry tries to capitalize on concerns about the risk of losses from AI chatbot errors or hallucinations.
The policies are offered through a startup called Armilla and will cover the cost of court claims against a business if it is sued by a customer or other third party harmed by an underperforming AI product, the report said.
As the FT noted, while companies have embraced AI to increase efficiency, some tools, such as customer service bots, have yielded embarrassing, costly mistakes due to hallucinations, instances in which an AI model makes things up but delivers the information with confidence.
As PYMNTS has written, the consequences of acting on hallucinated information can be severe, leading to flawed decisions, financial losses, and damage to a company’s reputation. There are also difficult questions surrounding accountability when AI systems are involved.
“If you remove a human from a process or if the human places its responsibility on the AI, who is going to be accountable or liable for the mistakes?” asked Kelwin Fernandes, CEO of NILG.AI, a company specializing in AI solutions, in an interview with PYMNTS last week.
In many cases, it’s the company behind the chatbot that takes the blame. For example, Virgin Money issued an apology earlier this year when its chatbot chastised a customer for using the word “virgin.” And Air Canada ended up in court last year when its chatbot fabricated a discount in a conversation with a customer.
According to the FT report, Armilla said the loss from selling the tickets at the discounted price would have been covered by its policy had Air Canada's chatbot been found to have performed below expectations.
Meanwhile, PYMNTS explored Lloyds Bank’s in-house efforts to adopt AI amid worries about hallucinations in a report earlier this year.
“That was something we were quite concerned about, probably for the first 12 or 18 months,” Lloyds Bank Chief Data and Analytics Officer Ranil Boteju said during a Google roundtable discussion on AI.
The company decided that “until such time as we have confidence in the guardrails, we will not expose any of the generative AI capabilities directly to customers.”
At first, Lloyds focused on back-office efficiencies as its use cases, or it kept a human worker on hand to monitor the AI's activities.