
AI Medical Imaging Lab

The AI Medical Imaging Lab at the University of Colorado Anschutz is a multidisciplinary group at the intersection of radiology, biomedical engineering, and machine learning. Our mission is to create reliable AI systems that make imaging interpretation faster, more consistent, and more predictive of patient outcomes, and to deliver those systems safely into real clinical use. Methodologically, we focus on foundation and vision-language models that align images with radiology reports and clinical data, enabling automated structured reporting, robust lesion detection/segmentation, longitudinal response assessment, and survival/risk prediction.

We emphasize scale and generalization through multi-institutional datasets, data harmonization, rigorous benchmarks, and external validation. Clinically, our programs span FDG and PSMA PET/CT in oncology, CT for pulmonary embolism risk stratification, and multimodal pipelines that combine imaging with EHR signals. Translationally, we collaborate with Brown University and Johns Hopkins, and we partner with Siemens Healthineers to integrate models into syngo.via and teamplay for evaluation, quality assurance, and eventual deployment.

Our culture is hands-on and collaborative: physicians, engineers, and data scientists co-design studies, annotate data, ship code, and iterate with clinicians. We mentor trainees across levels and share practical tools whenever possible. Ultimately, our goal is simple: AI that is accurate, reliable, and useful—improving reports, decisions, and outcomes for patients.

--Harrison Bai, MD, MS

 

2025 Publications

  1. Zhong, Z., Wang, Y., Wu, J., Hsu, W. C., Somasundaram, V., Bi, L., Kulkarni, S., Ma, Z., Collins, S., Baird, G., Ahn, S. H., Feng, X., Kamel, I., Lin, C. T., Greineder, C., Atalay, M., Jiao, Z., & Bai, H. (2025). Vision-language model for report generation and outcome prediction in CT pulmonary angiogram. NPJ Digital Medicine, 8(1), 432.

    https://www.nature.com/articles/s41746-025-01807-8

  2. Hsu, W. C., Wang, Y., Wu, Y. F., Chen, R., Afyouni, S., Liu, J., Vin, S., Shi, V., Imami, M., Chotiyanonta, J. S., Zandieh, G., Cai, Y., Leal, J. P., Oishi, K., Zaheer, A., Ward, R. C., Zhang, P. J. L., Wu, J., Jiao, Z., Kamel, I. R., Lin, G., & Bai, H. X. (2025). MRI-based ovarian lesion classification via a foundation segmentation model and multimodal analysis: A multicenter study. Radiology, 316(2), e243412.

    https://pubs.rsna.org/doi/10.1148/radiol.243412

  3. Nguyen, D. T., Imami, M., Zhao, L. M., Wu, J., Borhani, A., Mohseni, A., Khunte, M., Zhong, Z., Shi, V., Yao, S., Wang, Y., Loizou, N., Silva, A. C., Zhang, P. J., Zhang, Z., Jiao, Z., Kamel, I., Liao, W. H., & Bai, H. (2025). Federated learning for renal tumor segmentation and classification on multi-center MRI dataset. Journal of Magnetic Resonance Imaging, 62(3), 814–824.

    https://onlinelibrary.wiley.com/doi/10.1002/jmri.29819

  4. Toruner, M. D., Shi, V., Sollee, J., Hsu, W. C., Yu, G., Dai, Y. W., Merlo, C., Suresh, K., Jiao, Z., Wang, X., Mao, S., & Bai, H. (2025). Artificial intelligence-driven wireless sensing for health management. Bioengineering (Basel), 12(3), 244.

    https://www.mdpi.com/2306-5354/12/3/244

  5. Zhong, Z., Wang, Y., Bi, L., Ma, Z., Ahn, S. H., Mullin, C. J., Greineder, C. F., Atalay, M. K., Collins, S., Baird, G. L., Lin, C. T., Stayman, J. W., Kolb, T. M., Kamel, I., Bai, H. X., & Jiao, Z. (2026). Abn-BLIP: Abnormality-aligned bootstrapping language-image pre-training for pulmonary embolism diagnosis and report generation from CTPA. Medical Image Analysis, 107(Pt A), 103786.

    https://doi.org/10.1016/j.media.2025.103786

  6. Dai, Y., Imami, M., Hu, R., Zhang, C., Zhao, L., Kargilis, D. C., Zhang, H., Yu, G., Liao, W. H., Jiao, Z., Zhu, C., Yang, L., & Bai, H. X. (2025). Prediction of motor symptom progression of Parkinson’s disease through multimodal imaging-based machine learning. Journal of Imaging Informatics in Medicine. Advance online publication.

    https://doi.org/10.1007/s10278-025-01583-7

  7. Wang, Y., Shi, V., Hsu, W. C., Dai, Y., Yao, S., Zhong, Z., Zhang, Z., Wu, J., Maxwell, A., Collins, S., Jiao, Z., & Bai, H. X. (2025). Optimizing prompt strategies for SAM: Advancing lesion segmentation across diverse medical imaging modalities. Physics in Medicine & Biology, 70(17).

    https://doi.org/10.1088/1361-6560/adfc20

  8. Dai, Y., Zhong, Z., Qin, Y., Wang, Y., Yu, G., Kobets, A., Swenson, D. W., Boxerman, J. L., Li, G., Robinson, S., Bai, H., Yang, L., Liao, W., & Jiao, Z. (2025). AI model integrating imaging and clinical data for predicting CSF diversion in neonatal hydrocephalus: A preliminary study. Human Brain Mapping, 46(14), e70363.

    https://doi.org/10.1002/hbm.70363

  9. Zhong, Z., Zhang, H., Fayad, F. H., Lancaster, A. C., Sollee, J., Kulkarni, S., Lin, C. T., Li, J., Gao, X., Collins, S., Greineder, C. F., Ahn, S. H., Bai, H. X., Jiao, Z., & Atalay, M. K. (2025). Pulmonary embolism survival prediction using multimodal learning based on computed tomography angiography and clinical data. Journal of Thoracic Imaging, 40(5), e0831.

    https://doi.org/10.1097/RTI.0000000000000831

 


Harrison Bai, MD, MS

Principal Investigator

[email protected]

Harrison Bai, M.D., M.S. is Associate Professor of Radiology and Vice Chair of Clinical Research at the University of Colorado Anschutz, with an affiliate appointment in Biomedical Engineering at the University of Colorado Boulder. Trained as both a clinician and computational scientist (M.S. in Computer Science; M.S. in Bioinformatics), he is board-certified in diagnostic and interventional radiology. Dr. Bai directs the AI Medical Imaging Lab, focusing on trustworthy AI—especially foundation and vision-language models that integrate images, radiology text, and clinical variables to power automated reporting, lesion detection/segmentation, longitudinal response assessment, and risk prediction.


Shaoju Wu, PhD, MS

Research Assistant Professor

[email protected]

Shaoju is an Assistant Professor (Research) in Radiology at the University of Colorado School of Medicine. He received his M.S. from the University of Florida (2017) and his Ph.D. from Worcester Polytechnic Institute (2022), then completed postdoctoral research at Boston Children’s Hospital/Harvard Medical School (2024), where he subsequently served as an Instructor of Radiology (2024–2025). His research focuses on artificial intelligence for biomedical imaging, including deep-learning and vision-language models for image-guided surgery, tumor injury risk assessment, and feature detection. Currently, he is developing multimodal AI systems that integrate imaging, clinical notes, and patient history to improve diagnostic accuracy and build explainable decision-support tools for clinical use.


Yuwei Dai, MD, PhD Candidate

Visiting Researcher

[email protected]

Yuwei Dai is a PhD candidate in Neurology at Central South University, China, and is currently a visiting scholar in the lab. Her research interests focus on neuroimaging and its applications in neurological disorders. She is especially interested in integrating advanced neuroimaging techniques with deep learning to develop tools for diagnosis and outcome prediction.


Soyeon Bak, PhD Candidate

Visiting Researcher

[email protected]

Soyeon Bak is a PhD candidate in Artificial Intelligence at Korea University, South Korea, and a visiting scholar in the lab. Her research focuses on the development of vision-language models for medical image understanding. She is particularly interested in enhancing the trustworthiness of such models by mitigating hallucination behavior.

Consortium Partner


Yuli Wang, PhD

Johns Hopkins University

[email protected]

Yuli completed his Ph.D. under the supervision of Dr. Harrison Bai, working on multiple research initiatives at the intersection of artificial intelligence and clinical radiology. With a strong passion for translating research into meaningful clinical impact, Yuli aims to advance cancer management through the integration of medical imaging, radiation therapy, and AI-driven analytics, ultimately bridging the gap between algorithmic development and patient-centered care.


Cheng-Yi (Charlie) Li, MSE

Johns Hopkins University

[email protected]

Cheng-Yi (Charlie) Li earned his M.S.E. in Biomedical Engineering from Johns Hopkins University in May 2025. His research focuses on advancing multimodal deep-learning models for interpretable biomedical imaging. He is particularly interested in building clinically trustworthy AI models that integrate imaging data with natural language to improve diagnostic accuracy and decision support.

Department of Radiology

CU Anschutz

Leprino Building

12401 East 17th Avenue

Aurora, CO 80045


720-848-0000
