pancreatic cyst CT image
April 1, 2022

A closer look: Machine learning collaborations are helping to improve medical imaging

Data science and machine learning are becoming ubiquitous in many facets of our daily lives.

Our healthcare is no exception—and among the areas where these algorithms are poised to make a major impact is medical imaging.

Technologies that allow us to look inside the human body have made extraordinary gains since the first X-rays were used to identify broken bones more than a century ago. Now, hospitals use sophisticated MRI and CT machines to look for cancers and other abnormalities. But one thing hasn’t changed much over the decades: It still takes a highly trained medical professional to examine and interpret those images, a process that demands enormous resources in both training and reading time.

Dane Morgan

Researchers and doctors, however, hope machine learning can serve as a complementary set of “eyes,” able to examine images faster and differently to increase efficiency, reduce errors, and potentially identify features and emerging problems that humans alone can’t discern.

That’s why faculty across the University of Wisconsin-Madison, including researchers in the College of Engineering, the School of Medicine and Public Health, and the Department of Computer Sciences, are collaborating on an initiative called Machine Learning for Medical Imaging. The goal is to connect physicians and data scientists to build systems that can improve and automate tricky diagnoses.

“We at UW-Madison see enormous potential for machine learning in the medical world,” says Dane Morgan, Harvey D. Spangler Professor in materials science and engineering. “For example, we hope we can build systems that can look at an image and say this is not cancer or this is and here’s where it’s located. Such developments are transforming radiology.”

Machine Learning for Medical Imaging began in 2018 and, after several years of collaboration and grant funding, is beginning to bear fruit. For instance, chemical engineering PhD student Shengli Jiang has worked with Victor Zavala, Baldovin-DaPra Professor in chemical and biological engineering, and researchers in medical physics to develop a machine learning classifier that can predict severe asthma progression using CT scans of the lungs. Kevin Johnson, an assistant professor of medical physics, has worked on a project using neural networks to improve the speed and accuracy of MRI machines, allowing them to diagnose new types of disease. And Varun Jog, assistant professor of electrical and computer engineering, worked with Alan McMillan, an associate professor of clinical health science, to develop an open-source deep-learning platform that helps biomedical researchers and physicians apply the technique to medical imaging.

Meghan Lubner, a professor of radiology, has worked with Dane Morgan on two projects. Their first collaboration was an algorithm designed to identify aggressive features of renal cell carcinoma, or kidney cancer.

Meghan Lubner

“A CT scan is a picture, but it’s also basically a big file of digital data. We extract features from that with software, then we learn the correlation between properties of the carcinoma and features we extract,” says Morgan. “The goal is to be able to take a picture and predict if this is going to be an aggressive carcinoma or not and decide whether to do surgery. It could help an expert doctor make decisions more quickly and accurately.”
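The workflow Morgan describes, turning image data into numeric features and then learning how those features correlate with outcomes, can be sketched roughly as below. The synthetic "scans," the handful of first-order statistics, and the logistic-regression classifier are illustrative stand-ins, not the group's actual radiomics pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def extract_features(scan):
    """Reduce a 2-D 'scan' to simple first-order statistics
    (mean intensity, spread, extremes) -- a toy stand-in for
    the radiomic features real software would compute."""
    return np.array([scan.mean(), scan.std(), scan.max(), scan.min()])

def make_scan(aggressive):
    """Synthetic image patch: 'aggressive' lesions are brighter and
    more heterogeneous on average than indolent ones."""
    base = rng.normal(loc=0.6 if aggressive else 0.4,
                      scale=0.15 if aggressive else 0.05,
                      size=(32, 32))
    return np.clip(base, 0.0, 1.0)

# Build a labeled dataset of extracted features, then learn the
# correlation between features and the (here, simulated) outcome.
labels = rng.integers(0, 2, size=200)
X = np.stack([extract_features(make_scan(bool(y))) for y in labels])

model = LogisticRegression().fit(X, labels)
accuracy = model.score(X, labels)
print(f"training accuracy: {accuracy:.2f}")
```

A real pipeline would compute far richer texture and shape features from the CT data and validate on held-out patients, but the structure is the same: image in, feature vector out, classifier on top.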

After the success of that project, published in Abdominal Radiology in April 2021, the duo collaborated on a machine learning project to look at pancreatic cysts. The cysts show up on imaging quite often; however, it’s difficult to assess whether they are benign, malignant or pre-malignant just by looking at them.

“We’ve started a pancreatic cyst clinic at the university to try to sort this problem out,” says Lubner. “It is challenging to know how often we need to surveil these with imaging or when we need to take them out.”

For the project, Lubner provided an annotated dataset of CT scans of about 100 pancreatic cysts that were categorized after surgery as having malignant potential or not. To classify those cysts, the team trained gradient-boosted decision tree models on the scans. They then used Shapley additive explanation (SHAP) analyses to identify the features and variables that seemed to indicate cancerous or pre-cancerous lesions. They published their results in the journal Abdominal Radiology in October 2021.
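As a rough illustration of that modeling approach, the sketch below trains scikit-learn's `GradientBoostingClassifier` on a synthetic stand-in for radiomic cyst features. The feature names and data are invented for illustration, and the built-in impurity-based importances shown here are a coarser proxy for the SHAP attribution the published study used.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical radiomic features for ~100 cysts (names invented).
feature_names = ["diameter_mm", "wall_thickness", "mean_hu", "texture_entropy"]
n = 100
X = rng.normal(size=(n, len(feature_names)))

# Simulate malignancy driven mostly by two features, plus noise.
risk = 1.2 * X[:, 0] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=n)
y = (risk > 0).astype(int)  # 1 = malignant potential, 0 = benign

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")

# The published work used SHAP values to attribute predictions to
# features; impurity-based importances are a simpler built-in proxy.
for name, imp in zip(feature_names, model.feature_importances_):
    print(f"{name:16s} importance={imp:.2f}")
```

On this toy data, the importances recover the fact that the simulated outcome depends mainly on the first and last features, which mirrors how the team used explanation analyses to surface the image features most indicative of malignant potential.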

Neither of these algorithms is ready for clinical use; the researchers need to incorporate more data sets and refine the algorithms. But Lubner says the most important part of the project was establishing solid methods for other researchers who might want to improve on the work or add data.

“This was very methodologically sound, which I think is really needed right now because there’s a lot of varying degrees of expertise and quality in the literature in the radiology space,” she says. “It was great for us to be able to work with the engineering group because they have a very rigorous methodology that we might not have come to on our own.”

Morgan says he hopes these types of collaborations continue. “The success of these projects was possible because the Machine Learning for Medical Imaging Initiative supported new interactions and interdisciplinary groups at UW-Madison,” he says. “I am grateful to everyone who had the vision and made the effort to develop the Machine Learning for Medical Imaging initiative as it has enabled collaborations that otherwise would never have happened.”

The initiative has also received sponsorship from the Grainger Institute for Engineering at UW-Madison and the university’s radiology and medical physics departments.