Medical schools and teaching hospitals in the U.K. and U.S. are increasingly using artificial intelligence (AI)-generated patients to train future doctors.
The shift gives students practice in communication, diagnosis and clinical reasoning, moving medical education away from episodic, resource-intensive simulations and toward continuous, software-driven practice.
Instead of relying mainly on standardized patients played by actors or limited clinical rotations, programs are deploying virtual patients that respond in real time, adapt to questioning and simulate a wide range of medical and emotional scenarios.
In the U.K., general practitioners and educators have begun integrating AI patients into undergraduate and postgraduate training, according to the BBC. Students interact with lifelike digital patients that speak naturally, display facial expressions and provide consistent answers based on structured medical profiles.
Educators say the systems allow repeated practice of consultations that are often difficult to schedule in real settings, such as sensitive conversations around mental health or chronic illness. The goal is not to replace practice with human patients but to give students more opportunities to refine how they listen, explain and respond.
This approach reflects mounting pressure on medical education systems facing faculty shortages, rising costs and limited access to clinical placements. AI-based training tools offer a way to scale practice without adding proportional strain on hospitals or instructors.
From Standardized Patients to Always-On Simulation
For decades, standardized patients have been a core part of medical training, but their use is constrained by cost and availability. AI patients aim to remove those limits by offering on-demand, repeatable simulations that can be used anytime and anywhere.
At NYU Langone Health, faculty are experimenting with AI-driven clinical training environments that combine large language models with retrieval systems grounded in vetted medical knowledge.
As VentureBeat reports, these platforms use agentic architectures that allow virtual patients to evolve during an encounter, changing symptoms or emotional tone based on how a student asks questions. Students can probe deeper, make diagnostic missteps and correct themselves, all while the system tracks decision paths and communication quality.
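To make that loop concrete, here is a deliberately simplified sketch of how such a virtual patient might work. Everything in it, including the class name, the tone rules and the case profile, is illustrative and assumed, not taken from NYU Langone's or any vendor's actual system; a production platform would use a language model rather than keyword rules.

```python
# Hypothetical sketch of an adaptive virtual-patient loop.
# All names and rules here are illustrative, not any real platform's API.

class VirtualPatient:
    """Holds a structured case profile and adapts tone to questioning style."""

    def __init__(self, profile):
        self.profile = profile     # symptom -> detail, from a vetted case file
        self.tone = "neutral"
        self.transcript = []       # decision path for faculty review later

    def respond(self, question):
        q = question.lower()
        # Crude stand-in for an LLM's style analysis: open-ended questions
        # put the patient at ease; terse, closed ones make it guarded.
        if q.startswith(("how", "what", "tell me", "can you describe")):
            self.tone = "open"
        elif len(q.split()) <= 4:
            self.tone = "guarded"
        # Reveal a profile detail only if the question touches that symptom.
        detail = next((v for k, v in self.profile.items() if k in q),
                      "I'm not sure what you mean.")
        reply = f"[{self.tone}] {detail}"
        self.transcript.append((question, reply))
        return reply

patient = VirtualPatient({
    "chest pain": "It started two days ago, worse when I climb stairs.",
    "breath": "I get short of breath walking to the mailbox.",
})
print(patient.respond("Can you describe your chest pain?"))  # open tone
print(patient.respond("Smoker?"))                            # guarded tone
```

The point of the sketch is the logged transcript: because every question and reply is recorded with the patient's state, instructors can replay exactly how a student's questioning shaped the encounter.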
In Illinois, Southern Illinois University School of Medicine has introduced an AI patient named Randy Rhodes into its curriculum. According to WPSD, students speak with the virtual patient as they would in a clinic, practicing history-taking, differential diagnosis and patient education. Faculty can then review transcripts to assess not just whether students reached the correct diagnosis, but how they interacted along the way.
Instead of preparing for a single high-stakes simulation, students can practice repeatedly, encounter rare conditions that may not appear during rotations and receive structured feedback after every session. AI also standardizes experiences across cohorts, ensuring that all students are exposed to the same core scenarios rather than relying on chance clinical encounters.
Expanding Medical Education
Beyond basic simulation, generative AI is reshaping how medical schools teach and evaluate clinical skills. Harvard Medical School reports that faculty are using AI tools to support training in clinical reasoning, documentation and professionalism, alongside traditional bedside skills. Virtual patients can be designed to represent diverse backgrounds, languages and social contexts, allowing students to practice culturally sensitive care that might not be readily available in local clinical settings.
Other medical schools are using AI models to cut drug research costs, as well as ChatGPT to train students, according to PYMNTS.
The Association of American Medical Colleges outlines five major ways U.S. schools are employing AI, including simulation, tutoring, assessment and curriculum development. AI patients play a central role in this shift by generating detailed data on student performance. Educators can analyze whether students ask open-ended questions, interrupt patients, miss key symptoms or adjust explanations based on patient understanding.
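A minimal sketch of what that kind of transcript analysis could look like follows. It is an assumption for illustration, not the AAMC's or any school's actual tooling, and uses naive keyword heuristics where a real system would apply far richer language analysis.

```python
# Illustrative transcript scorer: counts open-ended questions and flags
# key symptoms the student never asked about. Heuristics are placeholders.

OPEN_STARTERS = ("how", "what", "tell me", "can you describe", "why")

def score_transcript(turns, key_symptoms):
    """turns: list of (speaker, utterance); returns simple coaching metrics."""
    student_qs = [u for speaker, u in turns if speaker == "student"]
    open_ended = sum(u.lower().startswith(OPEN_STARTERS) for u in student_qs)
    missed = [sym for sym in key_symptoms
              if not any(sym in u.lower() for u in student_qs)]
    return {
        "questions": len(student_qs),
        "open_ended": open_ended,
        "missed_symptoms": missed,
    }

turns = [
    ("student", "What brings you in today?"),
    ("patient", "I've had chest pain for two days."),
    ("student", "Any fever?"),
    ("patient", "No fever."),
]
report = score_transcript(turns, ["chest pain", "shortness of breath"])
print(report)
```

Here the report would show one open-ended question out of two and flag that the student never probed the chest pain or breathing symptoms directly, the kind of pattern instructors could surface across many simulated visits.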
That data-driven layer marks a departure from traditional evaluation methods. Instead of relying solely on faculty observation during limited encounters, schools can assess patterns across dozens of simulated visits. Instructors can identify strengths and weaknesses early and tailor coaching accordingly. Supporters argue this leads to more consistent training outcomes and better-prepared clinicians.
Challenges remain, including ensuring accuracy, addressing potential bias in training data and integrating new tools into established curricula. Schools are responding by keeping faculty in the loop, curating medical knowledge sources and using AI primarily for formative rather than high-stakes assessment.