Generative artificial intelligence (AI) faces growing pains as it seeks its footing. In a recent survey, management consulting firm Bain & Company identifies the main barriers to generative AI in healthcare as a lack of resources, expertise and regulation, with data access and quality, along with organizational resistance, close behind. The recurring theme across industry studies is that generative AI in healthcare is nascent and needs time to prove its efficacy and earn public trust.
For generative AI to bear fruit, it needs investors, technology, resources, expertise, large language models (LLMs) trained on healthcare-specific data, and robust guidelines. Ultimately, society must be ready for, and comfortable with, the involvement of generative AI in managing patient health and treatment.
The task ahead entails expanding resources and expertise, training models on healthcare-specific data and establishing robust benchmarks. In time, through multiple iterations, generative AI will sharpen its analysis. Developers will fine-tune the benchmarks and criteria used in their models, aligning them with emerging regulatory guidelines. The result will be improved accuracy in diagnoses and recommended treatments.
Regulators are urged to move fast, but doing so too quickly could bring unintended consequences. The same caution applies to technology companies eager to profit from their innovations. AI tools that gain FDA approval are setting a precedent for what will become commonplace in the future. A potential downside is that ethical dilemmas may prompt companies and regulators to reconsider their positions.
According to a recent study, Americans have mixed feelings about the use of generative AI in their healthcare. While they are enthusiastic about the potential benefits, most Americans are hesitant to replace their medical professionals with this technology. These sentiments may shift as the technology matures and regulatory guidelines become more robust.