IEEE VIS 2025 Content: VISTA: A Visual Analytics Framework to Enhance Foundation Model-Generated Data Labels

Authors: Xiwei Xuan, Xiaoqi Wang, Wenbin He, Jorge Piazentin Ono, Liang Gou, Kwan-Liu Ma, Liu Ren

Room: Hall E1

Keywords

Data integrity, Data models, Frequency modulation, Pipelines, Visual analytics, Measurement, Computational modeling, Labeling, Analytical models, Human in the loop

Abstract

Advances in multi-modal foundation models (FMs), such as CLIP and LLaVA, have facilitated the auto-labeling of large-scale datasets, enhancing model performance on challenging downstream tasks such as open-vocabulary object detection and segmentation. However, the quality of FM-generated labels remains understudied, as existing approaches prioritize data quantity over quality; in practice, validating large volumes of data without ground truth is a considerable challenge. Existing methods typically either rely on limited metrics to identify problematic data, lacking a comprehensive perspective, or apply human validation to only a small fraction of the data, failing to address the full spectrum of potential issues. To overcome these challenges, we introduce VISTA, a visual analytics framework that improves data quality to enhance the performance of multi-modal models. Targeting the complex and demanding domain of open-vocabulary image segmentation, VISTA integrates multi-phased data validation strategies with human expertise, enabling humans to identify, understand, and correct hidden issues within FM-generated labels. Through detailed use cases on two benchmark datasets and expert reviews, we demonstrate VISTA's effectiveness from both quantitative and qualitative perspectives.