
SEG-RobustEye: Understanding medical image segmentation models

Andreea Melania Popa, Vidya Prasad, Tim J.M. Jaspers, Fons van der Sommen, Anna Vilanova

This paper is relevant for deep learning model developers and researchers, particularly those working with medical image segmentation. The work can be applied to explore the robustness of an already trained model on a predefined set of relevant input corruptions. It is model-agnostic and can therefore be applied to any model; only the optimization model needs to be updated according to the evaluation metric and model loss that are used.
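As a rough illustration of this workflow (not the authors' implementation), the sketch below probes an already trained segmentation model against a small, predefined set of input corruptions and records an evaluation metric per corruption. The model, dataset, corruption functions, and the choice of Dice as the metric are hypothetical placeholders and would be replaced by the metric and loss relevant to the application at hand.

```python
# Minimal sketch, assuming `model(image)` returns a binary segmentation mask
# and `dataset` yields (image, ground-truth mask) pairs; all names below are
# illustrative placeholders, not part of SEG-RobustEye or ProactiV.
import numpy as np


def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks."""
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


def add_gaussian_noise(image: np.ndarray, sigma: float = 0.05) -> np.ndarray:
    """Example corruption: additive Gaussian noise, clipped to [0, 1]."""
    return np.clip(image + np.random.normal(0.0, sigma, image.shape), 0.0, 1.0)


def adjust_brightness(image: np.ndarray, factor: float = 1.3) -> np.ndarray:
    """Example corruption: global brightness change, clipped to [0, 1]."""
    return np.clip(image * factor, 0.0, 1.0)


# Predefined set of input corruptions to study; extend with the
# transformations that are realistic for the target imaging domain.
CORRUPTIONS = {
    "identity": lambda img: img,
    "gaussian_noise": add_gaussian_noise,
    "brightness": adjust_brightness,
}


def evaluate_robustness(model, dataset):
    """Collect a per-corruption evaluation metric for every instance."""
    scores = {name: [] for name in CORRUPTIONS}
    for image, mask in dataset:
        for name, corrupt in CORRUPTIONS.items():
            pred = model(corrupt(image))
            scores[name].append(dice_score(pred, mask))
    # Aggregated means support global-level views; the per-instance lists
    # remain available for instance- and subgroup-level inspection.
    return {name: float(np.mean(vals)) for name, vals in scores.items()}
```

Because the model is only queried through its predictions and a chosen metric, swapping in a different architecture or evaluation measure only changes the metric and corruption definitions, which mirrors the model-agnostic setup described above.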
Keywords

Visual analytics, explainable AI, medical imaging, input transformations, robustness analysis, model behavior, deep learning

Abstract

Deep learning (DL) models have proven suitable for a wide range of applications, achieving state-of-the-art performance. Despite this, they suffer notable performance drops when subjected to realistic transformations of the input data. Analyzing model behavior under input transformations is therefore essential to preemptively identify possible breaking points, and to understand which image characteristics might cause them, before the model fails in the real world. We introduce SEG-RobustEye, a visual analytics (VA) design developed to assist in evaluating the robustness of segmentation models for endoscopy images under realistic input transformations. SEG-RobustEye builds on ProactiV [13], a VA framework designed for understanding the behavior of image-to-image translation models. SEG-RobustEye is a tailored instance of the framework and an extension of ProactiV to medical images, specifically endoscopic images, which require visual designs that emphasize the features relevant to such medical applications and that differ from those of general natural scenes. SEG-RobustEye is designed to discover features that affect model behavior under specific transformations. It connects the perspectives provided by ProactiV, i.e., the global and instance levels, and extends them with subgroup-level patterns, which facilitate the discovery of selected subgroups of instances and the generalizability of the findings. The value of our approach was verified on real-world cases in endoscopy imaging by DL developers, as a proof of concept of the potential of SEG-RobustEye and, by extension, of ProactiV.

[13] V. Prasad, R. J. van Sloun, A. Vilanova, and N. Pezzotti. ProactiV: Studying deep learning model behavior under input transformations. IEEE Transactions on Visualization and Computer Graphics, 30(8):5651–5665, 2024.