The advent of generative artificial intelligence (AI) technologies promises to significantly transform the healthcare industry.
“We’ve reached a new frontier in this area … the current capabilities for data mining and analysis and aggregation and dissemination are unprecedented,” Tom O’Neil, managing director at Berkeley Research Group and former chief compliance officer at Cigna, tells PYMNTS as part of the new expert series, “The AI Effect.”
Some of the latest AI innovations, including tools designed to help doctors glean insights from healthcare data and to let users find accurate clinical information more efficiently, are meant to put clinician “pajama time” — the time spent on paperwork each night — to bed.
Elsewhere in the field, from AI-driven medical imaging and pathology to remote monitoring and clinical decision-making, the possibilities of applying AI seem endless.
But as hospitals contemplate integrating AI-driven solutions into their systems, several crucial factors warrant consideration, including how applicable a given AI tool is within a hospital’s own systems.
“In healthcare, the first and foremost consideration in this new era is transparency. Does the patient, who is also the consumer, truly understand where the data is going and what the potential impact of that dissemination will be?” O’Neil says.
The healthcare industry handles vast amounts of sensitive and personal data. As AI solutions process and store these data sets, the risk of cyberattacks and data breaches becomes more significant.
Compliance with complex global regulations, such as GDPR and HIPAA, adds another layer of complexity to AI integration, O’Neil says, making the need for transparent, ethical and compliant AI solutions essential.
Despite the potential benefits of AI in healthcare, PYMNTS Intelligence shows that a significant portion of adults remain uncomfortable with the idea of AI-driven healthcare decisions. Concerns range from biases in AI algorithms to fears that AI may lead to worse outcomes.
“One of the threshold concerns is that the data sets being worked on for analytical purposes are not applicable to the individual in question. I think that’s a big issue, and it is better to get out there and talk about it than to surround it with an aura of mystery,” O’Neil explains.
Transparency, ongoing education, and open discussions about AI in healthcare can help smooth over the behavioral speed bumps and instill confidence in AI’s role in the healthcare ecosystem.
And, as O’Neil notes, healthcare is a giant sector with many segments, and AI can play different roles across the ecosystem, each weighted with different risks and different opportunities.
“When you look at what the payer segment does in the delivery of healthcare, there are an incredible number of opportunities for AI and generative AI to be impactful and streamline the process,” O’Neil says. “When you get to the provider segment, that’s where we start to shift in our chairs because we’ve become accustomed to having an encounter be human at the end of the day.”
“The notion that AI would completely replace a physician or another healthcare provider, to me, is a long way off when I look at the scenarios,” he adds.
AI as popularly used is a broad term — and for years, its various iterations have been promising to revolutionize the medical field and streamline physician and administrative workflows.
Understanding the differentiation between technical capabilities, with AI either fully handling a task (an artificial output) or supplementing human-based workflows (an intelligent output), is crucial to assessing the impact and potential of today’s and tomorrow’s AI solutions.
That’s why, for this series, we asked O’Neil to help parse the categorization of AI in different healthcare contexts.
And he classifies areas like medical imaging and pathology, telemedicine, patient engagement, surgery, and clinical decision-making as avenues where AI is best suited as an aid to human workflows, not an outright replacement. While AI capabilities have advanced significantly and can supplement the above processes to make them more efficient and productive, there remains a need to have a highly trained human in the loop.
But within areas like drug discovery, remote monitoring, and administrative tasks, O’Neil sees healthcare AI tools as being able to take on the bulk of the work themselves, freeing up humans to focus on other tasks — while at the same time acknowledging that there is still a need to mitigate the risks of AI technologies when applying their capabilities.
AI in healthcare represents a promising new frontier, but it also comes with ethical, compliance and privacy considerations, and it must be approached with care and responsibility.