Abnormalities and Artifacts

Interview with Klaus Scheffler on the use of artificial intelligence in medical imaging

July 23, 2024

Artificial intelligence (AI) in medical imaging has boomed over the past 15 years. We spoke to Prof. Dr. Klaus Scheffler, Head of the Department of High-Field Magnetic Resonance, about where and how AI is being used, what hurdles still need to be overcome and what improvements in imaging and diagnosis one can hope for.


How is AI being used in medical imaging?

There are two different areas of application. The first is image reconstruction: you shove people into a scanner and want to get images out. Of course, this should be as accurate and as fast as possible. All the major MRI machine manufacturers are already using AI for this. The second is evaluation: the finished images are sent to a radiologist – at least in a clinical setting. I recently read a statistic that a radiologist has, on average, five seconds to assess an image. AI helps by pointing out abnormalities. At present, however, the AI does not do the assessment on its own; it only makes suggestions for the radiologist to check.

Let’s go back to the first point. How exactly is AI used in image reconstruction?

In our research, we try to improve image acquisition. The aim is often to accelerate the process, but in many cases this means that some measurement steps are omitted. As a result, the problem of calculating the image no longer has a unique solution, because the data is insufficient. This issue can be tackled using mathematical methods. Alternatively, one can build a database of good, fully sampled images from which an AI can learn. The trained AI can then estimate which reconstruction of a given measurement is better or more likely than another.
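
To make the reconstruction problem concrete, here is a minimal sketch in Python (illustrative only, not the vendors' or the department's actual code). It simulates an accelerated scan by dropping k-space lines from a toy image: the naive reconstruction of the incomplete data is aliased and ambiguous, and a simple smoothness penalty – standing in for a trained AI prior – picks one plausible solution. All names and parameters are made up for the example.

```python
# Sketch: undersampled MRI reconstruction as an ill-posed inverse problem.
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Toy "ground truth" image: a few bright blobs on a dark background.
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
truth = (np.exp(-((x - 0.3) ** 2 + y ** 2) / 0.02)
         + 0.7 * np.exp(-((x + 0.4) ** 2 + (y + 0.3) ** 2) / 0.05))

# MRI measures the image in k-space (its Fourier transform). Accelerating the
# scan means skipping lines, so only a subset of them is kept.
kspace = np.fft.fft2(truth)
mask = rng.random(n) < 0.4          # keep ~40% of the phase-encode lines
mask[:4] = True                     # always keep the low-frequency center lines
mask[-4:] = True
kspace_under = kspace * mask[:, None]

# Naive reconstruction of the incomplete data: aliased, because infinitely
# many images are consistent with the measured lines.
zero_filled = np.abs(np.fft.ifft2(kspace_under))

# A prior resolves the ambiguity: gradient descent on
# ||measured - F(image)||^2 + lambda * roughness, where a trained network
# would replace the hand-made roughness term in an AI-based reconstruction.
def reconstruct(k_meas, mask, lam=0.05, steps=200, lr=0.5):
    img = np.abs(np.fft.ifft2(k_meas))          # start from the naive image
    for _ in range(steps):
        # Data consistency: residual only on the measured k-space lines.
        resid = (np.fft.fft2(img) - k_meas) * mask[:, None]
        grad_data = np.real(np.fft.ifft2(resid))
        # Prior: penalize rough images (a stand-in for a learned prior).
        grad_prior = img - (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                            + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4
        img -= lr * (grad_data + lam * grad_prior)
    return img

recon = reconstruct(kspace_under, mask)
print("error, zero-filled:", np.linalg.norm(zero_filled - truth))
print("error, with prior: ", np.linalg.norm(recon - truth))
```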

Talking about clinical validation: how do we know that AI-generated MRI images are good?

At times, they are not! Radiologists can sometimes immediately recognize that there are reconstruction artifacts. A few years ago, in a joint project with researchers from the MPI for Intelligent Systems, we calculated a probability map for each reconstruction. Those maps indeed showed that in some areas of the images something might not be quite right. So at least there are methods to determine how likely it is that a reconstruction is useful.
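
The interview does not spell out how those probability maps were computed. One generic way to obtain a per-pixel uncertainty map is to reconstruct several noise-perturbed copies of the same measurement and look at how much the results disagree; the sketch below assumes such a Monte Carlo approach and a reconstruction routine like the toy reconstruct above. The actual method used in the project may well differ.

```python
# Sketch: per-pixel uncertainty via Monte Carlo over perturbed measurements.
import numpy as np

def uncertainty_map(kspace_under, mask, reconstruct_fn,
                    n_samples=20, noise_std=0.01, seed=0):
    """Reconstruct several noisy copies of the measured data and return the
    mean image and the per-pixel standard deviation across reconstructions.
    A high standard deviation flags regions where the reconstruction is not
    well determined by the data, i.e. where artifacts are more likely."""
    rng = np.random.default_rng(seed)
    recons = []
    for _ in range(n_samples):
        noise = noise_std * (rng.standard_normal(kspace_under.shape)
                             + 1j * rng.standard_normal(kspace_under.shape))
        recons.append(reconstruct_fn(kspace_under + noise * mask[:, None], mask))
    recons = np.stack(recons)
    return recons.mean(axis=0), recons.std(axis=0)

# Usage with the toy 'reconstruct' from the previous sketch:
# mean_img, uncertainty = uncertainty_map(kspace_under, mask, reconstruct)
# Pixels with high 'uncertainty' are the ones a radiologist should treat with care.
```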

What data is used to train the AI? And is there enough training data?

Training needs extremely large amounts of data. There's a lot of data from 1.5 or 3 Tesla clinical scanners worldwide, but for us high-field MRI researchers the situation is trickier. Simply put, not many 9.4 Tesla scanners even exist, and of course we cannot scan thousands of patients ourselves. That's why we devised a trick: my collaborator Gabriele Lohmann and her team generated synthetic MRI images. To do this, they used an AI we had developed to generate high-resolution 9.4 Tesla images from 3 Tesla scans. We then used these synthetic images to teach the AI to reconstruct 9.4 Tesla images from the scanner data.
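
A hedged sketch of the training idea described here, assuming PyTorch and a placeholder tensor of synthetic 9.4 Tesla images: undersampled measurements are simulated from the synthetic images, and a small image-to-image network learns to recover the fully sampled target. The network, the sampling scheme and the data are stand-ins for illustration, not the group's actual setup.

```python
# Sketch: training a reconstruction network on synthetic high-field images.
import torch
import torch.nn as nn

def simulate_undersampling(images, keep_fraction=0.4):
    """Fourier-transform each image, drop a random subset of k-space lines,
    and return the (aliased) magnitude image an accelerated scan would give."""
    k = torch.fft.fft2(images)
    mask = (torch.rand(images.shape[-2], 1) < keep_fraction).float()
    return torch.fft.ifft2(k * mask).abs()

# A deliberately tiny image-to-image network standing in for the real model.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder for the synthetic training set (batch, 1, height, width);
# in the scenario described above these would be AI-generated 9.4 T images.
synthetic_9t4_images = torch.rand(8, 1, 64, 64)

for step in range(100):
    target = synthetic_9t4_images
    inputs = simulate_undersampling(target)      # what the fast scan yields
    loss = nn.functional.mse_loss(model(inputs), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```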

Could the use of AI trained on data from white people in Europe and North America exacerbate existing disparities in healthcare?

I don't think this is the case; after all, MRI scanners are used all over the world. Also, this technology is not entirely new, so one would certainly be aware of such effects and could compensate for them. And brains don't have a skin color; if there are any differences at all, they seem to be rather negligible.
However, there is a different issue: AI is usually trained on images of healthy people, since there are – fortunately! – many more of those images than of pathological ones. Hence, ill patients in the clinic are sometimes poorly assessed by the AI. In our department, Rahel Heule and her team are currently working on a DFG-funded study to test how well an AI trained on healthy subjects works on patients with multiple sclerosis or tumors.

What are the limitations, but also the opportunities, of using AI in MRI?

As far as its use in diagnostics is concerned, we need a bit of patience. I've heard from neuroradiologists who are testing the new software that they sometimes see spots in the brain which cannot actually exist – mere artifacts. Only a trained human eye can tell the difference.
However, AI is already very useful for estimating the thickness of the cortex, the grey matter. Segmentation into structural units is not easy in this case, because the meninges often cause problems. So we employed 20 students to segment the MRI images by hand and had neuroradiologists check their work. We used these images as ground truth, as a basis for training the AI. A spin-off company in Tübingen is already using our results clinically: they receive images from radiologists, and the algorithm estimates the loss of brain substance in the cortex. This can be a useful tool to diagnose certain forms of dementia.
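
How a "loss of brain substance" figure could be derived from such a segmentation is not detailed in the interview. The sketch below shows one plausible computation, assuming a binary grey-matter mask, a known voxel size and a hypothetical age-matched reference volume; the function names and numbers are illustrative only, not the spin-off's actual algorithm.

```python
# Sketch: estimating grey-matter loss from a segmentation mask.
import numpy as np

def grey_matter_volume(mask, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Volume of the segmented grey matter in millilitres."""
    voxel_volume_mm3 = float(np.prod(voxel_size_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0      # mm^3 -> ml

def atrophy_estimate(mask, reference_volume_ml, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Relative loss of cortical grey matter compared with a reference value
    (e.g. a hypothetical age- and sex-matched normative volume); positive = loss."""
    measured = grey_matter_volume(mask, voxel_size_mm)
    return (reference_volume_ml - measured) / reference_volume_ml

# Toy usage: a random "segmentation" and a made-up reference volume of 600 ml.
rng = np.random.default_rng(0)
toy_mask = rng.random((180, 220, 180)) < 0.07          # ~7% of voxels grey matter
print(f"estimated grey-matter loss: {atrophy_estimate(toy_mask, 600.0):.1%}")
```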
