IEEE VIS 2025

ParaView-MCP: An Autonomous Visualization Agent with Direct Tool Use

Shusen Liu

Haichao Miao

Peer-Timo Bremer


Room: Hall M1

Keywords

Agent, Tool Use, Model Context Protocol

Abstract

While powerful and well-established, tools like ParaView present a steep learning curve that can discourage many potential users. This work introduces ParaView-MCP, an autonomous agent that integrates modern multimodal large language models (MLLMs) with ParaView to not only lower the barrier to entry but also augment ParaView with intelligent decision support. By leveraging the state-of-the-art reasoning, command execution, and vision capabilities of MLLMs, ParaView-MCP enables users to interact with ParaView through natural language and visual inputs. Specifically, our system adopts the Model Context Protocol (MCP), a standardized interface for model-application communication, which facilitates direct interaction between MLLMs and ParaView's Python API, allowing seamless information exchange between the user, the language model, and the visualization tool itself. Furthermore, by implementing a visual feedback mechanism that allows the agent to observe the viewport, we unlock a range of new capabilities, including recreating visualizations from examples, closed-loop updates of visualization parameters based on user-defined goals, and even cross-application collaboration involving multiple tools.
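To make the architecture described in the abstract concrete, the sketch below shows how an MCP server could expose ParaView's Python API (paraview.simple) as tools, including a viewport-screenshot tool for visual feedback. This is a minimal illustration under stated assumptions, not the authors' actual ParaView-MCP implementation: the server name, the tool names run_paraview_python and capture_viewport, and the screenshot path are hypothetical, and it assumes the official MCP Python SDK (FastMCP) and a Python environment where paraview.simple is importable (e.g., pvpython or a connected ParaView client).

```python
# Hypothetical sketch of an MCP server bridging an MLLM and ParaView.
# Tool names and structure are illustrative, not the ParaView-MCP code.
import io
import contextlib

from mcp.server.fastmcp import FastMCP, Image  # official MCP Python SDK
from paraview import simple                    # ParaView's scripting API

mcp = FastMCP("paraview-sketch")


@mcp.tool()
def run_paraview_python(code: str) -> str:
    """Execute a small paraview.simple snippet generated by the MLLM."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        # Example snippet the model might send:
        #   src = simple.Sphere(); simple.Show(src); simple.Render()
        exec(code, {"simple": simple})
    return buffer.getvalue() or "ok"


@mcp.tool()
def capture_viewport() -> Image:
    """Render the active view and return a screenshot as visual feedback."""
    view = simple.GetActiveViewOrCreate("RenderView")
    simple.Render(view)
    path = "/tmp/paraview_mcp_view.png"  # illustrative location
    simple.SaveScreenshot(path, view)
    return Image(path=path)


if __name__ == "__main__":
    mcp.run()  # stdio transport by default; the MLLM client connects here
```

With a server along these lines, the closed-loop behavior described above follows naturally: the model issues run_paraview_python calls to modify the pipeline, calls capture_viewport to observe the result, and iterates until the rendered image matches the user's stated goal or example.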