IEEE VIS 2025

VisMoDAl: Visual Analytics for Evaluating and Improving Corruption Robustness of Vision-Language Models

Huanchen Wang, Wencheng Zhang, Zhiqiang Wang, Zhicong Lu, Yuxin Ma

VisMoDAl is intended for several audiences: data scientists working on multi-modal models, who can use the framework to evaluate the effects of various corruption types, uncover data patterns, and guide augmentation strategies that enhance the robustness of existing vision-language models; machine learning researchers, who can leverage its multi-level analysis capabilities to diagnose performance issues and refine models for deployment in real-world applications, ensuring resilience to corrupted or shifted data; and HCI specialists, who can explore its design principles to create user-centric tools that facilitate an intuitive understanding of model behavior and empower practitioners to make data-driven decisions.
Keywords

Visual analytics, multi-modal model, corruption robustness, image captioning

Abstract

Vision-language (VL) models have shown transformative potential across critical domains due to their ability to comprehend multi-modal information. However, their performance often degrades under distribution shifts, making it crucial to assess and improve robustness against the real-world data corruption encountered in practical applications. While advances in VL benchmark datasets and data augmentation (DA) have contributed to robustness evaluation and improvement, challenges remain because of a limited in-depth understanding of model behavior and the expertise and iterative effort required to explore data patterns. Given the success of visualization in explaining complex models and exploring large-scale data, understanding the impact of various types of data corruption on VL models aligns naturally with a visual analytics approach. To address these challenges, we introduce VisMoDAl, a visual analytics framework designed to evaluate VL model robustness against various corruption types and to identify underperforming samples that guide the development of effective DA strategies. Grounded in a literature review and expert discussions, VisMoDAl supports multi-level analysis, ranging from examining performance under specific corruptions to task-driven inspection of model behavior and the corresponding data slices. Unlike conventional approaches, VisMoDAl enables users to reason about the effects of corruption on VL models, facilitating both model behavior understanding and DA strategy formulation. The utility of our system is demonstrated through case studies and quantitative evaluations focused on corruption robustness in the image captioning task.
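
For readers unfamiliar with corruption-robustness evaluation, the sketch below is a minimal, hypothetical illustration of the kind of evaluation loop the abstract describes; it is not code from the paper or from VisMoDAl. It applies a single example corruption (Gaussian noise) at several severities, measures the per-sample drop in caption quality, and ranks samples by degradation so the worst-performing slices can inform data augmentation. The `generate_caption` and `caption_score` callables, and the `gaussian_noise` helper, are assumptions standing in for an actual VL captioning model, a metric such as CIDEr, and a full corruption suite.

```python
import numpy as np
from PIL import Image

def gaussian_noise(image: np.ndarray, severity: int = 1) -> np.ndarray:
    """One stand-in corruption type; real benchmarks cover many more (blur, weather, etc.)."""
    sigma = [8, 16, 24, 32, 40][severity - 1]
    noisy = image.astype(np.float32) + np.random.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def evaluate_corruption_robustness(generate_caption, caption_score, samples,
                                   severities=(1, 3, 5)):
    """Rank samples by how much their caption quality drops under corruption.

    generate_caption(image_array) -> str            # user-supplied VL model wrapper (hypothetical)
    caption_score(prediction, references) -> float  # user-supplied metric, e.g. CIDEr (hypothetical)
    samples: iterable of (image_path, reference_captions)
    """
    records = []
    for image_path, references in samples:
        clean = np.array(Image.open(image_path).convert("RGB"))
        clean_score = caption_score(generate_caption(clean), references)
        for severity in severities:
            corrupted = gaussian_noise(clean, severity)
            score = caption_score(generate_caption(corrupted), references)
            records.append({"image": image_path,
                            "severity": severity,
                            "drop": clean_score - score})
    # Samples with the largest score drops are candidates for targeted data augmentation.
    return sorted(records, key=lambda r: r["drop"], reverse=True)
```

In this sketch, the ranked output plays the role of the "underperforming samples" mentioned in the abstract; VisMoDAl itself surfaces such slices interactively rather than through a script like this.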