By: NewMediaWire
February 11, 2026
Dr. Le Cong and Team Launch MedOS: An AI-XR-Cobot World Model Designed to Assist Clinicians in Real Clinical Environments
Stanford and Princeton Researchers Debut MedOS, an AI-Robotics Co-Pilot to Reduce Error, Support Clinicians, and Bring AI Directly Into Surgical and Hospital Workflows
PALO ALTO, CA - February 11, 2026 (NEWMEDIAWIRE) - The Stanford-Princeton AI Coscientist Team announced today the launch of MedOS, the first AI-XR-Cobot system designed to actively assist clinicians inside real clinical environments.
Created by the interdisciplinary Stanford-Princeton AI Coscientist Team, led by Drs. Le Cong, Mengdi Wang, and Zhenan Bao, with clinical collaborators Drs. Rebecca Rojansky and Christina Curtis, MedOS combines smart glasses, robotic arms, and multi-agent AI to form a real-time co-pilot for doctors and nurses. Its mission is simple: reduce medical errors, accelerate precision care, and support overburdened clinical teams.
Physician burnout has reached crisis levels, with over 60% of doctors in the United States reporting symptoms, according to recent studies. MedOS (ai4med.stanford.edu) is designed to alleviate physician burnout, not by replacing clinicians, but by reducing cognitive overload, catching errors, and extending precision through intelligent automation and robotic assistance.
Built on years of innovation from the team's previous breakthrough, LabOS (ai4lab.stanford.edu), MedOS bridges digital diagnostics with physical action. From operating rooms to bedside diagnostics, the system perceives the world in 3D, reasons through medical scenarios, and acts in coordination with doctors, nurses, and care teams. It has been tested in surgical simulations, hospital workflows, and live precision diagnostics.
MedOS introduces a “World Model for Medicine” that combines perception, intervention, and simulation into a continuous feedback loop. Using smart glasses and robotic arms, it can understand complex clinical scenes, plan procedures, and execute them in close collaboration with clinicians. The platform has shown early promise in tasks such as laparoscopic assistance, anatomical mapping, and treatment planning.
MedOS is modular by design, built to adapt across clinical settings and specialties. In surgical simulations, it has demonstrated the ability to interpret real-time video from smart glasses, identify anatomical structures, and assist with robotic tool alignment, functioning as a true clinical co-pilot. This tight integration of perception, planning, and action is what sets MedOS apart: it’s not just a passive assistant, but an active collaborator in high-stakes procedures.
Breakthrough capabilities include:
- A multi-agent AI architecture that mirrors clinical reasoning logic, synthesizes evidence, and manages procedures in real time. MedOS achieved 97% accuracy on MedQA (USMLE) and 94% on GPQA, beating frontier AI models like Gemini-3 Pro, GPT-5.2 Thinking, and Claude 4.5 Opus.
- MedSuperVision, the largest open-source medical video dataset, featuring more than 85,000 minutes of surgical footage from 1,882 clinical experts.
- Demonstrated success in helping nurses and medical students reach physician-level performance and in reducing human error in fatigue-prone environments (registered nurses: 49% to 77% with MedOS assistance; medical students: 72% to 91%).
- Case studies, including uncovering immune side effects of the GLP-1 agonist semaglutide (Wegovy) from the FDA database and identifying prognostic implications of driver gene co-mutations for cancer patients' survival.
MedOS is launching with support from NVIDIA, AI4Science, and Nebius, and has been deployed in early pilots. Clinical collaborators can now request early access.
Dr. Le Cong, leader of the Stanford-Princeton AI Coscientist Team and Associate Professor at Stanford University, said, “The goal is not to replace doctors. It is to amplify their intelligence, extend their abilities, and reduce the risks posed by fatigue, oversight, or complexity. MedOS is not just an assistant. It is the beginning of a new era of AI as a true clinical partner.”
“MedOS reflects a convergence of multi-agent reasoning, human-centered robotics, and XR interfaces,” said Dr. Mengdi Wang, co-leader of the collaboration. “Our goal is a collaborative loop that helps clinicians manage complexity in real time.”
Availability and Upcoming Showcases at Stanford and NVIDIA GTC 2026
MedOS will be showcased at a Stanford event in early March, followed by a public unveiling at the NVIDIA GTC conference in March 2026. Media, clinicians, and research institutions interested in demonstrations, pilot collaborations, or interviews may contact the team.
The GTC session information is online at: https://www.nvidia.com/gtc/session-catalog/sessions/gtc26-s81748/
About the Stanford-Princeton AI Coscientist Team
The Stanford-Princeton AI Coscientist Team is a joint research team dedicated to building the first real-time AI systems designed to work alongside human scientists and clinicians. By combining smart glasses, robotic automation, and multi-agent reasoning, the team brings artificial intelligence into physical research and medical environments. Led by Dr. Le Cong and Dr. Mengdi Wang, the foundational LabOS and the newly-launched MedOS are deployed across leading universities and hospitals to accelerate discovery, reduce human error, and improve scientific and clinical outcomes. The team is backed by collaborations with NVIDIA, Stanford Medicine, and VITURE.
For more information, visit:
Project Page: https://medos-ai.github.io/
Official Site: https://ai4medos.com/
MedOS: AI-XR-Cobot World Model for Clinical Perception and Action - Paper: https://medos-ai.github.io/paper
Press Contact:
Ronnie Welch
ronnie@vewmedia.com
VEW Media
View the original release on www.newmediawire.com
This content was originally distributed by NewMediaWire. Blockchain Registration, Verification & Enhancement provided by NewsRamp™. The source URL for this press release is Dr. Le Cong and Team Launches MedOS: An AI-XR-Cobot World Model Designed to Assist Clinicians in Real Clinical Environments.
