
The AI Medical Imaging Lab at the University of Colorado Anschutz is a multidisciplinary group working at the intersection of radiology, biomedical engineering, and machine learning. Our mission is to create reliable AI systems that make imaging interpretation faster, more consistent, and more predictive, and to deliver those systems safely into real clinical use. Methodologically, we focus on foundation and vision-language models that align images with radiology reports and clinical data, enabling automated structured reporting, robust lesion detection and segmentation, longitudinal response assessment, and survival and risk prediction.

We emphasize scale and generalization through multi-institutional datasets, data harmonization, rigorous benchmarks, and external validation. Clinically, our programs span FDG and PSMA PET/CT in oncology, CT for pulmonary embolism risk stratification, and multimodal pipelines that combine imaging with EHR signals. Translationally, we collaborate with Brown University and Johns Hopkins University and partner with Siemens Healthineers to integrate models into syngo.via and teamplay for evaluation, quality assurance, and eventual deployment.

Our culture is hands-on and collaborative: physicians, engineers, and data scientists co-design studies, annotate data, ship code, and iterate with clinicians. We mentor trainees at all levels and share practical tools whenever possible. Ultimately, our goal is simple: AI that is accurate, reliable, and useful, improving reports, decisions, and outcomes for patients.

-- Harrison Bai, MD, MS

Principal Investigator
Harrison Bai, M.D., M.S. is Associate Professor of Radiology and Vice Chair of Clinical Research at the University of Colorado Anschutz, with an affiliate appointment in Biomedical Engineering at the University of Colorado Boulder. Trained as both a clinician and computational scientist (M.S. in Computer Science; M.S. in Bioinformatics), he is board-certified in diagnostic and interventional radiology. Dr. Bai directs the AI Medical Imaging Lab, focusing on trustworthy AI—especially foundation and vision-language models that integrate images, radiology text, and clinical variables to power automated reporting, lesion detection/segmentation, longitudinal response assessment, and risk prediction.

Research Assistant Professor
Shaoju is an Assistant Professor (Research) in Radiology at the University of Colorado School of Medicine. He received his M.S. from the University of Florida (2017), Ph.D. from Worcester Polytechnic Institute (2022), and completed postdoctoral research at Boston Children’s Hospital/Harvard Medical School (2024), where he subsequently served as an Instructor of Radiology (2024–2025). His research focuses on artificial intelligence for biomedical imaging, including deep-learning and vision-language models for image-guided surgery, tumor injury risk assessment, and feature detection. Currently, he is developing multimodal AI systems that integrate imaging, clinical notes, and patient history to improve diagnostic accuracy and build explainable decision-support tools for clinical use.

Visiting Researcher
Yuwei Dai is a PhD candidate in Neurology at Central South University, China, and is currently a visiting scholar in the lab. Her research centers on neuroimaging and its applications in neurological disorders. She is especially interested in integrating advanced neuroimaging techniques with deep learning and AI to develop tools for diagnosis and outcome prediction.

Visiting Researcher
Soyeon Bak is a PhD candidate in Artificial Intelligence at Korea University, South Korea, and a visiting scholar in the lab. Her research focuses on the development of medical vision-language models for medical image understanding. She is particularly interested in enhancing the trustworthiness of such models by mitigating hallucination behavior.

Johns Hopkins University
Yuli completed his Ph.D. under the supervision of Dr. Harrison Bai, working on multiple research initiatives at the intersection of artificial intelligence and clinical radiology. Passionate about translating research into meaningful clinical impact, Yuli aims to advance cancer management through the integration of medical imaging, radiation therapy, and AI-driven analytics, bridging the gap between algorithmic development and patient-centered care.

Johns Hopkins University
Cheng-Yi (Charlie) Li earned his M.S.E. in Biomedical Engineering from Johns Hopkins University in May 2025. His research focuses on advancing multi-modal deep-learning models for interpretable biomedical imaging. He is particularly interested in building clinically trustworthy AI models that integrate data with natural language to improve diagnostic accuracy and decision support.