May 2026
By Tayler Shaw

When ophthalmologist Emily Cole, MD, steps into the neonatal intensive care unit (NICU) at Children’s Hospital Colorado to evaluate an infant’s eyes for retinopathy of prematurity (ROP), it’s not uncommon for parents to decide to leave during the exam.
“It can be heart-wrenching to watch,” says Cole, an assistant professor of ophthalmology. “These exams are difficult because these babies are very fragile, and we have to move them during their exam.”
What if artificial intelligence could make the exam faster, more objective, and more precise? What if AI could also help detect other systemic diseases through the eyes, an emerging area known as oculomics? AI researcher Praveer Singh, PhD, an assistant professor of ophthalmology, has been collaborating with investigators across the globe to develop AI-based algorithms designed to do exactly that.
With the goal of helping more patients, Singh and Cole have teamed up to further hone the capabilities of AI and understand the best ways for clinicians to apply these tools in the NICU.
“We’ve been developing all these algorithms, but the key is ensuring they have real clinical utility and can be effectively translated from bench to bedside,” says Singh, a faculty member in the Division of Artificial Medical Intelligence in Ophthalmology. “That’s what makes this collaboration so exciting.”
ROP typically affects infants born before 31 weeks or who weigh less than 3.3 pounds. The disease, a leading cause of childhood blindness, is caused by abnormal blood vessels in the retina.
Most cases are mild, with the disease not causing damage to the retina, but the American Association for Pediatric Ophthalmology and Strabismus estimates that between 1,100 and 1,500 infants each year in the U.S. develop a more severe case of ROP that requires medical treatment.
The standard way to screen for ROP is to dilate an infant’s eyes and examine the retina for abnormal vessels. This requires placing a camera on the surface of the eye.
A key step for determining whether an infant needs treatment — which could be injecting medication into the eye, performing a laser procedure, or surgery — is assessing the retinal blood vessel tortuosity, meaning the crookedness of the blood vessels. When ROP is severe, the blood vessels can become wavy and wiggly.
“Typically, we describe how wiggly the vessels are — and how severe the disease is — by categorizing the condition as either ‘plus disease,’ meaning very wiggly, ‘pre-plus disease,’ meaning kind of wiggly, or ‘not plus,’ meaning it is not wiggly,” Cole says.
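The article does not describe how tortuosity is quantified, but a common simple measure in the vessel-analysis literature is the ratio of a vessel's arc length to the straight-line distance between its endpoints. The sketch below is purely illustrative of that idea, not the method used in the research described here.

```python
import math

def tortuosity_index(points):
    """Arc length of a vessel centerline divided by the straight-line
    distance between its endpoints. A value of 1.0 means a perfectly
    straight vessel; larger values mean a wigglier one."""
    arc = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    chord = math.dist(points[0], points[-1])
    return arc / chord

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
wavy = [(0, 0), (1, 1), (2, -1), (3, 0)]

print(tortuosity_index(straight))  # 1.0
print(tortuosity_index(wavy))      # > 1.5, noticeably tortuous
```

A straight vessel scores exactly 1.0, while a vessel that weaves back and forth accumulates extra arc length and scores higher, which matches the clinical intuition of "wiggly" versus "not wiggly."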
The issue is that not every clinician categorizes the disease the same way, Cole and Singh explain, which can cause problems when a patient is referred to another provider who may not agree with the original diagnosis.
AI can help change that. Instead of using the three categories, clinicians can use an AI algorithm to scan an image of the retina — already captured as part of the ROP exam — and classify how tortuous the vessels are on a scale from one to nine, with nine representing the most severe disease. This is called a vascular severity score.
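One appeal of a continuous one-to-nine score is that it can still be translated back into the familiar three-category language when clinicians want it. The sketch below shows that idea; the cutoff values are hypothetical, chosen only for illustration, and are not the published clinical thresholds.

```python
def severity_label(score):
    """Map a 1-9 vascular severity score back to the traditional
    three-category scheme. The cutoffs (3 and 6) are illustrative
    assumptions, not validated clinical thresholds."""
    if not 1 <= score <= 9:
        raise ValueError("score must be between 1 and 9")
    if score < 3:
        return "not plus"   # vessels not wiggly
    if score < 6:
        return "pre-plus"   # vessels somewhat wiggly
    return "plus"           # vessels very wiggly

print(severity_label(1.2))  # not plus
print(severity_label(7.8))  # plus
```

The continuous score preserves gradations that the three labels collapse: two infants both called "pre-plus" by different examiners might score a 3 and a 5, a difference that could matter when tracking disease progression between exams.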
In 2022, Cole led research that showed the potential of the AI tool to reduce the variability in diagnosing plus disease among patients with ROP.
“The algorithm can predict disease severity using just one picture of the back of the eye. And for me, when I get that vascular severity score, that is a number that can help influence how I approach patient care,” Cole says. “This AI tool helps create a common language for all providers, and it hopefully will reduce our exam time, standardize diagnosis, and help us better predict which babies will need treatment.”
While Singh was a postdoctoral research fellow at Harvard Medical School, he contributed to research that assessed the ability of an AI-based algorithm to diagnose ROP, specifically examining whether the algorithm worked in external datasets from India, Mongolia, and Nepal.
“AI models often generalize poorly on external test sets, especially when imaging devices or patient demographics differ. Surprisingly, our algorithm performed pretty well,” he says. “A likely explanation is that, rather than operating directly on raw fundus images, which can differ substantially in intensity and pigmentation, we first segmented the retinal vasculature and then performed image analysis on those standardized vessel maps. This strategy improved robustness across external sites.”
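Singh's explanation — segment the vasculature first, then analyze the standardized vessel maps — can be sketched in miniature. In the toy example below, a simple threshold stands in for the trained segmentation model (an assumption for illustration; the actual research used a learned model), showing how a binary vessel map can absorb a device-to-device intensity shift that would distort analysis on raw pixels.

```python
import numpy as np

def toy_vessel_map(fundus, threshold):
    """Stand-in for a learned segmentation model: reduce a fundus
    image (H x W array, values in 0-1) to a binary vessel map.
    In the actual pipeline, a trained network produces this map."""
    return (fundus > threshold).astype(np.uint8)

def vessel_density(vessel_map):
    """A downstream measurement computed on the standardized vessel
    map rather than on raw pixel intensities."""
    return float(vessel_map.mean())

rng = np.random.default_rng(0)
img_a = rng.random((64, 64))   # image from a brighter "device A"
img_b = img_a * 0.8            # same anatomy captured dimmer

# Raw intensities differ between devices, but once each image is
# reduced to a binary vessel map, the downstream measurement agrees.
density_a = vessel_density(toy_vessel_map(img_a, 0.5))
density_b = vessel_density(toy_vessel_map(img_b, 0.4))
print(density_a, density_b)
```

The design point is that the segmentation step acts as a normalizer: whatever varies across cameras and populations (brightness, pigmentation) is stripped away before the severity analysis runs, which is consistent with the cross-site robustness Singh describes.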
The research specifically used an AI-derived vascular severity score to identify infants who would develop treatment-requiring ROP. The study found that using the AI tools appeared to help identify high-risk infants and could reduce the number of exams that low-risk infants endured.
One challenge, however, especially in developing countries, is the cost of these imaging devices that capture the retinal images the algorithm requires.
That led Cole and Singh, in collaboration with investigators from Oregon Health & Science University, to look for cost-effective alternatives. They tested the efficacy of smartphone-based telescreening for ROP as a more affordable alternative to expensive imaging cameras. The researchers found that, despite lower image quality, a smartphone-based imaging device demonstrated a high probability of accurately detecting severe ROP.
The next step is determining how this tool can best be used in the clinic, Cole explains.
Cole plans to interview a variety of stakeholders — including parents, hospital administrators, data scientists, and care providers — to get their input on this AI tool and potential implementation. She will also look at measures of how clinicians adopt this technology through pilot trials in the NICU and how it affects their clinical workflows.
“There are a lot of possibilities for how this could be useful, and that’s what I want to explore,” she says, explaining the tool could also be useful for telemedicine, for parent education, and when providers are handing off patients to each other.
Using AI to scan images of an infant’s eyes can also provide insight into other systemic health conditions, Singh explains.
He's led research examining whether retinal images obtained as part of the ROP exam may contain features associated with cardiopulmonary diseases such as bronchopulmonary dysplasia (BPD) and pulmonary hypertension (PH), both leading causes of morbidity and mortality in premature infants. The research suggests that these retinal imaging-based AI tools can potentially predict a diagnosis of BPD or PH in premature infants, which may allow infants to be diagnosed earlier and reduce the need for invasive diagnostic testing.
“We found that we can detect these diseases much earlier by using AI instead of using invasive procedures like catheterization or echocardiogram,” Singh says.
While Singh is building the algorithm and predictive model, Cole is working in parallel to develop a workflow with neonatologists and NICU leaders to determine where this tool would be most helpful in their care.
Singh is also contributing to the development of a foundational dataset exploring oculomics-based biomarkers for different systemic diseases in the neonatal population. At CU Anschutz, he’s helping create a clinical informatics and retinal imaging infrastructure focused on multi-morbidity, with hopes of expanding AI algorithms’ ability to detect a wider range of diseases through ocular image analysis.
Cole predicts one of the biggest challenges of bringing AI into the NICU will be getting clinicians on board with the change, which is why an implementation science approach is critical.
“We’re going about this in a holistic way, where we are doing user testing at the same time as we are doing algorithm development. It’s important that we get the buy-in from clinicians and that they see this is worth it,” Cole says. “This tool can potentially help us predict who will need treatment, improve the speed of exams, standardize diagnosis, and improve access to care.”