The Perils of Chart Deception: How Misleading Visualizations Affect Vision-Language Models
Ridwan Mahbub
Mohammed Saidul Islam
Md Tahmid Rahman Laskar
Mizanur Rahman
Mir Tafseer Nayeem
Enamul Hoque

Room: Hall E
Keywords
Misleading Visualizations, Large Language Models, Vision Language Models, Taxonomy, Evaluation
Abstract
Information visualizations are powerful tools that help users quickly identify patterns, trends, and outliers, facilitating informed decision-making. However, when visualizations incorporate deceptive design elements—such as truncated or inverted axes, unjustified 3D effects, or other violations of best practices—they can mislead viewers, distort understanding, and spread misinformation. While some deceptive tactics are obvious, others subtly manipulate perception while maintaining a façade of legitimacy. As Vision-Language Models (VLMs) are increasingly used to interpret visualizations, especially by non-expert users, it is critical to understand how susceptible these models are to deceptive visual designs. In this study, we conduct an in-depth evaluation of VLMs' ability to interpret misleading visualizations. By analyzing over 16,000 responses from ten different models across eight distinct types of misleading chart designs, we demonstrate that most VLMs are deceived by them, producing altered interpretations of charts even though the underlying data remain the same. Our findings highlight the need for robust safeguards in VLMs against visual misinformation.