
Microsoft Expands Into Medical Superintelligence Amid Growing Focus on AI Regulation

November 6, 2025

Microsoft is launching an ambitious new initiative to develop artificial intelligence systems that can outperform humans in specialized domains, beginning with medical diagnostics, according to Reuters. The project, known as the MAI Superintelligence Team, will pursue what the company calls “humanist superintelligence” — AI designed to solve defined real-world problems — rather than creating autonomous systems that could pose control risks.


Mustafa Suleyman, Microsoft’s AI chief and a co-founder of DeepMind, told Reuters that the company plans to invest “a lot of money” into the effort. Karen Simonyan will serve as chief scientist, and the team will include both existing Microsoft researchers and recruits from top AI labs. Suleyman emphasized that, unlike peers chasing general-purpose AI, Microsoft’s focus is on targeted systems capable of delivering superhuman results in areas such as medical analysis and diagnostics.

“Humanism requires us to always ask the question: does this technology serve human interests?” said Suleyman. He explained that Microsoft’s approach aims to develop specialist models with “virtually no existential risk whatsoever,” pointing to potential breakthroughs in fields such as molecular design and energy storage.

According to Reuters, the company envisions AI systems that can reason through medical problems to detect preventable diseases earlier, potentially extending life expectancy. Suleyman said Microsoft has “a line of sight to medical superintelligence in the next two to three years,” suggesting the company’s growing confidence in the capability of diagnostic AI.

Growing Regulatory Scrutiny Around Medical AI

Microsoft’s move comes as regulators worldwide tighten oversight of artificial intelligence in healthcare. AI-driven diagnostic tools — which can interpret scans, pathology results, and other data — are transforming medicine but raising questions about safety, transparency, and accountability.


In the United States, the Food and Drug Administration (FDA) has introduced its “Artificial Intelligence and Machine Learning Software as a Medical Device (SaMD) Action Plan.” This framework outlines how AI systems will be evaluated for safety and performance, especially those that continue to learn after deployment. The FDA’s most recent draft guidance focuses on lifecycle management, ensuring that adaptive AI tools remain reliable and clinically validated as they evolve.

The European Union is taking a similarly rigorous approach. Under its Artificial Intelligence Act, whose requirements are being phased in, medical AI is categorized as “high-risk,” requiring strict measures for human oversight, data governance, and algorithmic transparency. These rules complement the EU’s Medical Device Regulation (MDR), which already covers AI-based diagnostic software. Globally, the World Health Organization has called for robust frameworks to ensure fairness, effectiveness, and accountability in medical AI systems.

Balancing Innovation and Responsibility

Per Reuters, Microsoft’s emphasis on a “humanist” model aligns with the growing demand for ethical, transparent AI development in healthcare. As AI becomes more capable of making or supporting medical decisions, the need for oversight has intensified. Regulators and policymakers are grappling with how to balance rapid innovation with patient safety and data protection.

AI models used in medicine face unique challenges, including bias in training data, the opacity of machine-learning algorithms, and questions of liability if an AI system misdiagnoses a patient. To mitigate these issues, experts emphasize the importance of human supervision, explainability, and ongoing monitoring after deployment.

For Microsoft, compliance with emerging regulatory frameworks will be crucial. Any AI that reaches “superhuman” levels of diagnostic ability must still meet medical device standards, maintain clinical transparency, and undergo continuous evaluation. The company’s vision of technology that “serves human interests” will likely depend on how effectively it aligns its breakthroughs with the evolving rules governing medical AI.

Source: Reuters