The Chatbot Will See You Now: Medical Experts Debate the Rise of AI Healthcare


As artificial intelligence (AI)-powered chatbots become increasingly common in healthcare, questions about their effectiveness and reliability continue to spark debate.

The World Health Organization (WHO) has introduced an AI health assistant, but recent reports say it’s not always accurate. Experts say health chatbots could reshape the healthcare industry, but their varying accuracy raises critical questions about whether they will support or undermine patient care.

“Like other AI-powered tools, medical chatbots are more likely to provide highly accurate answers when thoroughly trained on high-quality, diverse data sets and when user prompts are clear and simple,” Julie McGuire, managing director of the BDO Center for Healthcare Excellence & Innovation, told PYMNTS. “However, when questions are more complicated or unusual, a medical chatbot may provide insufficient or incorrect answers. In some cases, a generative AI-powered medical chatbot could make up a study to justify a medical answer it wants to give.”

Rise of the Chatbot Medic

The WHO’s new tool, the Smart AI Resource Assistant for Health, or Sarah, has encountered issues since its launch. The AI-powered chatbot offers health-related advice in eight languages, covering subjects such as healthy eating, mental health, cancer, heart disease and diabetes. Developed by the New Zealand company Soul Machines, Sarah also incorporates facial recognition technology to provide more empathetic responses.

Although the WHO states that the AI bot is updated with the latest information from the organization and its trusted partners, Bloomberg recently reported that it fails to include the most current U.S.-based medical advisories and news events.

The World Health Organization did not immediately respond to a request for comment from PYMNTS.

AI chatbots for health are becoming increasingly common. For instance, Babylon Health’s chatbot can evaluate symptoms and provide medical advice, guiding patients on whether to consult a doctor. Sensely’s chatbot, equipped with an avatar, helps users navigate their health insurance benefits and connects them directly with healthcare services. Ada Health aims to help users by assessing possible conditions based on their symptoms.

AI is helpful for medical chatbots because of its ability to analyze large amounts of data to provide more personalized responses to patient inquiries quickly, Tim Lawless, global health lead at digital consultancy Publicis Sapient, told PYMNTS. The strength and specificity of responses from AI-powered chatbots like ChatGPT increase with the amount of data fed into them. Therefore, he said, it is critical to effectively integrate patient data into generative systems, which can open the door to more powerful possibilities for their use as the technology evolves.

However, Lawless said the accuracy of medical chatbots can vary and often depends on the amount and quality of data they are trained on. Responses from conversational AI tools like ChatGPT can be generic and less accurate if not enough specific data is provided.

“While AI can process and analyze large amounts of data quickly, the interpretation and application of this data still requires human oversight to make sure it is accurate,” he added. “This is particularly true in the medical field, where the stakes are high, and the context is often complex. The accuracy of medical chatbots also depends on their ability to understand and respond to the nuances of human language and emotion.”

The Future of Medicine?

Talking with a bot might save you a trip to the doctor. McGuire said chatbots can allow healthcare providers to offer unprecedented access to tailored medical advice. Detailed chatbot inquiries can also help healthcare providers connect patients with the specific medical services they need. She noted that chatbots can reduce the time clinicians spend on patient communications, easing some of the workload that currently contributes to clinician burnout.

“However, chatbots aren’t a ‘set it and forget it’ solution,” McGuire said. “Healthcare providers should evaluate the liability considerations that stem from using AI-powered medical chatbots. To maintain standards for accuracy and timeliness, healthcare providers should assign dedicated clinicians to review the answers developed by medical chatbots. A trained medical chatbot is still not a trained clinician.”

In some cases, observers say chatbots can be easier to talk to than humans. Lawless mentioned that chatbots can quickly simplify medical information and treatment plans, making the information clearer for patients and serving a wide range of people. Physicians often deliver detailed explanations and support at moments when patients are not best positioned to absorb the information, such as immediately following a procedure. He said patients typically are more prepared to engage with their care a few days later. At these times, when patients have questions or are ready to process the information, medical chatbots can provide essential support, offering assistance around the clock.

“However, it’s crucial to remember that while medical chatbots can offer valuable assistance, they are not a replacement for professional medical advice,” he added. “The integration of AI in healthcare also raises important concerns about data privacy and security that need to be addressed when implementing these tools.”

AI is changing not just how patients interact with bots but also how doctors go about their tasks. AI tools like AWS HealthScribe can recognize speaker roles, categorize dialogues, and identify medical terminology to create initial clinical documentation, Ryan Gross, head of data and applications at Caylent, told PYMNTS. This technology streamlines the data collection and documentation process, freeing healthcare professionals to focus on patient care.
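The documentation workflow Gross describes can be illustrated with a simplified sketch. This is a toy stand-in for the kind of processing such a tool performs, not the actual AWS HealthScribe API; the speaker labels, keyword list, and note categories are assumptions for illustration:

```python
# Toy illustration of clinical-documentation processing: label speaker
# roles, categorize each utterance, and flag medical terminology.
# All names and rules here are illustrative assumptions, not the
# AWS HealthScribe API.

MEDICAL_TERMS = {"hypertension", "metformin", "dyspnea", "copd"}

def annotate_transcript(turns):
    """turns: list of (speaker_role, text) tuples from a visit recording."""
    notes = []
    for role, text in turns:
        words = {w.strip(".,!?").lower() for w in text.split()}
        terms = sorted(words & MEDICAL_TERMS)          # flagged terminology
        category = "subjective" if role == "PATIENT" else "assessment"
        notes.append({"role": role, "category": category,
                      "terms": terms, "text": text})
    return notes

transcript = [
    ("PATIENT", "I've had dyspnea climbing stairs since last week."),
    ("CLINICIAN", "Given your COPD, let's review your metformin dose."),
]

for entry in annotate_transcript(transcript):
    print(entry["role"], entry["category"], entry["terms"])
```

A real service would replace the keyword set with trained medical entity recognition, but the shape of the output — role, category, extracted terms — mirrors the initial clinical note Gross describes.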

Amazon Bedrock’s generative AI helps healthcare workers by handling complex tasks, managing different types of data, and using tailored language models, according to Gross. These agents can pull information from other healthcare datasets or tools, as well as external sources, and give answers with a score for accuracy and relevance to the context.
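Gross’s description of agents that pull from multiple healthcare datasets and return answers with a relevance score can be sketched as a toy retrieval loop. The scoring function, source names, and passages below are illustrative assumptions, not the Amazon Bedrock API:

```python
# Toy sketch of retrieval with a relevance score: rank candidate
# passages from several "sources" against a query by word overlap.
# This is an illustrative stand-in, not the Amazon Bedrock API.

def relevance(query, passage):
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / len(q)  # fraction of query words covered

def retrieve(query, sources, top_k=2):
    candidates = [
        (relevance(query, text), name, text)
        for name, passages in sources.items()
        for text in passages
    ]
    candidates.sort(reverse=True)   # highest relevance first
    return candidates[:top_k]

sources = {
    "ehr": ["patient history of copd with two exacerbations this year"],
    "literature": ["trial data on triple therapy for severe copd exacerbations"],
    "trials": ["recruiting study on diabetes management apps"],
}

for score, name, text in retrieve("copd exacerbations treatment", sources):
    print(f"{score:.2f} [{name}] {text}")
```

Production systems use embedding-based semantic search rather than word overlap, but the pattern is the same: gather candidates across sources, score each for relevance to the context, and surface only the best-scoring answers to the clinician.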

“For example, a clinician can use an agent to determine a course of action for a patient with end-stage chronic obstructive pulmonary disease (COPD),” Gross said. “The agent can access the patient’s electronic health record (EHR), imaging data, genetic data, and other relevant information to generate a detailed response. The agent can also search for clinical trials, medications, and biomedical literature using an index built on Amazon Kendra to provide the most accurate and relevant information for the clinician to make informed decisions.”