VizGenie: Toward Self-Refining, Domain-Aware Workflows for Next-Generation Scientific Visualization

Ayan Biswas, Terece Turton, Nishath Ranasinghe, Shawn Jones, Bradley Love, William Jones, Aric Hagberg, Han-Wei Shen, Nathan Debardeleben, Earl Lawrence

Room: Hall E2

Keywords

Scientific data, Large language models, Agentic workflows, Natural language, Feature-based visualization.

Abstract

We present VizGenie, a self-improving, agentic framework that advances scientific visualization through large language models (LLMs) by orchestrating a collection of domain-specific and dynamically generated modules. Users initially access core functionalities—such as threshold-based filtering, slice extraction, and statistical analysis—through pre-existing tools. For tasks beyond this baseline, VizGenie autonomously employs LLMs to generate new visualization scripts (e.g., VTK Python code), expanding its capabilities on demand. Each generated script undergoes automated backend validation and is seamlessly integrated upon successful testing, continuously enhancing the system’s adaptability and robustness. A distinctive feature of VizGenie is its intuitive natural language interface, which allows users to issue high-level feature-based queries (e.g., “visualize the skull” or “highlight tissue boundaries”). The system leverages image-based analysis and visual question answering (VQA) via fine-tuned vision models to interpret these queries precisely, bridging domain expertise and technical implementation. Additionally, users can interactively query generated visualizations through VQA, facilitating deeper exploration. Reliability and reproducibility are further strengthened by Retrieval-Augmented Generation (RAG), which provides context-driven responses while maintaining comprehensive provenance records. Evaluations on complex volumetric datasets demonstrate significant reductions in cognitive overhead for iterative visualization tasks. By integrating curated domain-specific tools with LLM-driven flexibility, VizGenie not only accelerates insight generation but also establishes a sustainable, continuously evolving visualization practice. The resulting platform dynamically learns from user interactions, consistently enhancing support for feature-centric exploration and reproducible research in scientific visualization.
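
To make the generate–validate–integrate loop described above concrete, the sketch below is a minimal, hypothetical illustration rather than the authors' implementation: `generate_vtk_script` stands in for the LLM code-generation call, `TOOL_REGISTRY` is a placeholder store of validated capabilities, the input file `volume.vti` is a placeholder, and the VTK 9.x `vtkThreshold` API is assumed.

```python
# Minimal, hypothetical sketch of the generate -> validate -> integrate loop
# described in the abstract. Not the authors' implementation: the LLM call,
# the tool registry, and the input file name are placeholders.
import subprocess
import sys
import tempfile


def generate_vtk_script(task: str) -> str:
    """Stand-in for the LLM code-generation step (a prompted model call in VizGenie)."""
    # Fixed threshold-filtering snippet so the sketch stays self-contained;
    # assumes the VTK 9.x vtkThreshold API and a placeholder file 'volume.vti'.
    return "\n".join([
        "import vtk",
        "reader = vtk.vtkXMLImageDataReader()",
        "reader.SetFileName('volume.vti')",
        "reader.Update()",
        "thresh = vtk.vtkThreshold()",
        "thresh.SetInputConnection(reader.GetOutputPort())",
        "thresh.SetThresholdFunction(vtk.vtkThreshold.THRESHOLD_UPPER)",
        "thresh.SetUpperThreshold(100.0)",
        "thresh.Update()",
        "print('cells kept:', thresh.GetOutput().GetNumberOfCells())",
    ])


def validate_script(script: str, timeout_s: int = 60) -> bool:
    """Automated backend validation: run the generated script in a clean interpreter."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0


TOOL_REGISTRY = {}  # hypothetical store of validated, reusable capabilities


def handle_request(task: str) -> None:
    script = generate_vtk_script(task)
    if validate_script(script):
        TOOL_REGISTRY[task] = script  # integrate the new tool on success
    else:
        print(f"Validation failed; script for '{task}' was not integrated.")


if __name__ == "__main__":
    handle_request("threshold-based filtering above an isovalue")
```

In the full system, this loop would also record provenance (the user query, the generated code, and the validation output) for the RAG store; that bookkeeping is omitted here for brevity.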