Report: UnitedHealth Has 1,000 AI Applications in Production

UnitedHealth Group is reportedly increasing its use of artificial intelligence across its business.

The company has 1,000 AI applications in production, The Wall Street Journal reported Monday (May 5). It uses the technology in its insurance, health delivery and pharmacy divisions.

UnitedHealth’s AI transcribes conversations from clinician visits, summarizes data, processes claims and powers customer-facing chatbots. In addition, roughly 20,000 of the company’s engineers use AI to write software, according to the report.

Half of these applications use generative AI and the other half employ a more traditional version of the technology, said Chief Digital and Technology Officer Sandeep Dadlani, per the report.

UnitedHealth’s push into AI comes as the company, and the industry at large, faces increased scrutiny, the report said. When UnitedHealthcare CEO Brian Thompson was shot and killed in New York City last year, it sparked a wave of outrage against health insurers. UnitedHealth is also facing a Department of Justice civil fraud probe related to its billing practices.

The company’s AI initiatives have drawn legal attention, according to the report. For example, a class action suit filed in 2023 accused UnitedHealth of using a flawed AI algorithm to evaluate and unfairly deny claims. A federal judge in February permitted the lawsuit to proceed but dismissed five of the seven counts.

Meanwhile, the use of chatbots in healthcare has sparked debate about the effectiveness and reliability of AI.

Last year, the World Health Organization introduced an AI health assistant, although there were some questions about its accuracy. Experts said chatbots could have an impact on the healthcare business but that their varying levels of accuracy raise crucial questions about their potential to support or hinder patient care.

“Like other AI-powered tools, medical chatbots are more likely to provide highly accurate answers when thoroughly trained on high-quality, diverse datasets and when user prompts are clear and simple,” Julie McGuire, managing director of the BDO Center for Healthcare Excellence & Innovation, told PYMNTS in April 2024. “However, when questions are more complicated or unusual, a medical chatbot may provide insufficient or incorrect answers. In some cases, a generative AI-powered medical chatbot could make up a study to justify a medical answer it wants to give.”
