Accepted Workshops

The following workshops were accepted through our submission and review process.


2nd GenAI, Agents, and the Future of VIS

Organizers: Zhu-Tian Chen, University of Minnesota-Twin Cities
Nam Wook Kim, Boston College
Saeed Boorboor, University of Illinois Chicago
Shivam Raval, Harvard University
Pan Hao, University of Minnesota-Twin Cities
Qianwen Wang, University of Minnesota-Twin Cities
Vidya Setlur, Tableau Research

Contact: Zhu-Tian Chen (ztchen@umn.edu)
Website: https://visxgenai.github.io/

Recent advances in agents (i.e., autonomous, goal-driven AI systems that iteratively observe, act, and learn from their environments) offer a fundamentally different approach from traditional AI models that passively respond to input. These agentic AI systems are rapidly reshaping how we approach data-intensive tasks and providing new opportunities for the VIS community. Imagine an agent autonomously generating visualizations to analyze complex data, discovering patterns collaboratively, testing hypotheses, and communicating visual insights at a speed and scale beyond human capability. Yet, the emergence of these powerful systems raises critical questions that the VIS community must address: Could autonomous agents eventually replace human data scientists, and if not, how might they best collaborate? Are current visualization techniques and interfaces, originally designed for human analysts, suitable for agent interactions? How can VIS designers effectively integrate agents into their workflows without compromising human agency? And to what extent should agents help shape and educate the next generation of visualization researchers? Through a mix of keynote talks, paper presentations, and an agentic VIS challenge, this workshop invites researchers and practitioners to share innovative ideas, explore these questions, and discuss strategies to transform the impact of VIS for a future where human and AI agents co-exist.


3rd Workshop on Accessible Data Visualization

Organizers: Brianna Wimer, University of Notre Dame
Pramod Chundury, Independent Researcher
Frank Elavsky, Carnegie Mellon University
Keke Wu, University of Maryland, College Park
Stacy Hsueh, University of Nottingham
Ji Hwan Park, Rochester Institute of Technology
Md Abu Bakar Siddique, Rochester Institute of Technology
Mona Alzahrani, Monash University
Shuqi He, Xi’an Jiaotong-Liverpool University
Gene S-H Kim, Massachusetts Institute of Technology
Naimul Hoque, University of Iowa
Danielle Albers Szafir, University of North Carolina at Chapel Hill
Dominik Moritz, Carnegie Mellon University

Contact: Brianna Wimer (bwimer@nd.edu)
Website: https://accessviz.github.io/

Data visualization is widely applied in fields such as data science, machine learning, healthcare, business, and education. Nevertheless, visual representations may create barriers for individuals with sensory, motor, cognitive, or neurological disabilities. Consequently, the accessibility and visualization research communities have increasingly prioritized the development of accessible data visualizations. Research efforts encompass user studies with people with disabilities to identify access barriers, the formulation of theoretical frameworks, and the creation of technical solutions, including autogenerated textual descriptions, sonification, and tactile or physical artifacts. Despite increased attention, these perspectives remain fragmented across venues and subcommunities, which limits sustained interdisciplinary dialogue. In response to the increasing interest and ongoing challenges at this intersection, the in-person Accessible Data Visualization (AccessViz) workshop aims to convene researchers, practitioners, and members of the disability community to foster collaboration, share innovative findings, and shape the future of accessible data visualization research at IEEE VIS. The workshop is expected to stimulate new contributions and support the development of a sustained research agenda focused on accessibility in visualization at IEEE VIS.


9th Workshop on Visualization for AI Explainability (VISxAI)

Organizers: Alex Bäuerle, Google DeepMind
Angie Boggust, Massachusetts Institute of Technology
Catherine Yeh, Harvard University
Fred Hohman, Apple
Mennatallah El-Assady, ETH Zürich
Hendrik Strobelt, IBM Research

Contact: Alex Bäuerle (bauerlealex@gmail.com)
Website: https://visxai.io/

The VISxAI workshop, throughout its past eight iterations, has been a platform for knowledge exchange between researchers with different backgrounds interested in explaining machine learning models through visualization. It focuses on explainables: submissions that visually and interactively explain machine learning concepts, ranging in complexity from clustering methods to algorithmic biases. These explainables have served as educational resources with impact beyond the academic community. The workshop has also consistently hosted keynote speakers who connect the domains of visualization and state-of-the-art machine learning and explore the impact visualization can have on explainability. Following the success of VISxAI’25, our goal for this upcoming iteration is to combine strong interactive explainables and compelling presentations with the interactivity of breakout sessions and live demos. Participants will be encouraged to exchange ideas about the future of visual explainability, interactive articles, and explorable explanations. Furthermore, we will provide a platform for new visualization and interaction ideas that explain machine learning models.


BELIV 2026: Learning What’s True, Doing What’s Right

Organizers: Sandra Bae, University of Arizona
Jürgen Bernard, University of Zurich
Michael Correll, Northeastern University’s Roux Institute
Mai Elshehaly, City St George’s, University of London
Takanori Fujiwara, University of Arizona
Daniel Keefe, University of Minnesota
Mahsan Nourani, Northeastern University’s Roux Institute

Contact: Mai Elshehaly (mai.elshehaly@city.ac.uk)
Website: https://beliv-workshop.github.io/

Twenty years after the first BELIV in 2006, the BELIV workshop invites contributions on emerging and under-examined methodological challenges in visualization research, and fosters open discussions on how we establish the validity and scope of knowledge acquired in our domain, including all forms of systematic and empirical methods used to acquire this knowledge. The goal is to create space for members of the visualization research community to engage with a process of reflection and meta-discussion on empirical research practices in our domain, for example, on what level of rigor to require of our methods, how to choose methods and methodologies, and how to best communicate the results of empirical research. This year’s focus will be on two pressing concerns in visualization research: 1) building towards truth in the midst of growing challenges to validity such as the pressures of the replication crisis and the ubiquitous presence of AI-augmented data and analytics as well as 2) building towards ethical research in a world in turmoil while maintaining our integrity as researchers and individuals.


Considering Context: Approaches for Responsible Data Practices

Organizers: Ester Scheck, TU Wien
Meghan Kelly, Syracuse University

Contact: Ester Scheck (ester.scheck@geo.tuwien.ac.at)

While various frameworks for documenting data production exist (e.g., metadata, data biographies, and datasheets for datasets), discussions of responsible and reflexive data practices too often center on data analysis processes, visualization decision-making, and ethics, leaving data production context underexamined. In this workshop, we address this gap by centering the perspectives of data and visualization practitioners to brainstorm and co-create wireframes, guidelines and strategies, and prototype tools that focus on understanding and incorporating data production context in the visualization workflow. Our workshop design is guided by data feminism, design justice, and feminist mapping, and prioritizes interactive exchange and the co-production of knowledge (and tools) to better support ethical data practices throughout visualization workflows.


EduVis: 4th IEEE VIS Workshop on Visualization Education, Literacy, and Activities

Organizers: Christina Stoiber, University of Applied Sciences St. Pölten
Magdalena Boucher, University of Applied Sciences St. Pölten
Fateme Rajabiyazdi, University of Calgary
Mandy Keck, University of Applied Sciences Upper Austria
Jonathan C Roberts, Bangor University
Lonni Besançon, Linköping University
Mathis Brossier, Linköping University
Yixuan Li, Georgia Institute of Technology

Contact: Christina Stoiber (christina.stoiber@ustp.at)
Website: https://ieee-eduvis.github.io/

Now in its 4th edition, this half-day workshop on visualization education, literacy, and activities takes place at the IEEE VIS 2026 conference in Boston, United States, with the option to join online. The EduVis workshop aims to become the primary forum to share and discuss advances, challenges, and methods at the intersection of visualization and education. It addresses an interdisciplinary audience from and beyond visualization, education, learning analytics, science communication, arts and design, psychology, and adjacent fields such as data science and HCI. This year’s spotlight topic is Equality, Diversity, and Inclusion (EDI) in education and data visualization. The workshop includes presentations of research papers published in the IEEE Xplore library, educator reports published in Nightingale Magazine, and poster-style discussions including vis activities.


Grand Unified Grammar of Graphics (GUGOG)

Organizers: Cynthia A Huang, LMU Munich, Munich, Germany & Munich Center for Machine Learning (MCML), Munich, Germany
Matthew Kay, Northwestern University, Evanston, Illinois, United States
Susan R Vanderplas, University of Nebraska, Lincoln, Lincoln, Nebraska, United States
Heike Hofmann, University of Nebraska - Lincoln, Lincoln, Nebraska, United States
Joyce Robbins, Columbia University, New York, New York, United States
Evangeline Reynolds, University of Denver, Denver, Colorado, USA

Contact: Cynthia Huang (cynthia.huang@lmu.de)
Website: https://gugog-vis.github.io/2026/

Following Wilkinson’s seminal “Grammar of Graphics” (2005), visualization communities in both statistics and computer science have developed various grammar-based approaches to visualization problems, workflows, and usage scenarios. While this diversity reflects the richness of visualization challenges, it also raises fundamental questions: Why do these grammars differ? What core principles unite them? What opportunities exist for synthesis? Which properties make a visualization system a ‘graphical grammar’? Despite scattered attempts to survey and understand the diversity of grammar-based systems, we lack systematic frameworks for understanding how these grammars relate, where they succeed or struggle, and what a more unified theoretical foundation might look like. The first workshop for a grand unified grammar of graphics (GUGOG) aims to facilitate interdisciplinary discussion and exploration of these open questions. We invite reflections on past work and recent developments in visualization grammars, syntheses of parallel and overlapping contributions across the statistical graphics and information visualization communities, and visions for the future of grammar-based visualization research.


SciFi-VIS: Way Out There — How SciFi and Visualization Influence Each Other

Organizers: Ulrik Günther, Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden, Germany
Julián Méndez, TUD Dresden University of Technology, Dresden, Germany
Gabriela Molina León, Aarhus University, Aarhus, Denmark
Samuel Pantze, Center for Advanced Systems Understanding (CASUS), Görlitz, Germany
Mario Romero, Linköping University, Norrköping, Sweden
Abdulhaq Adetunji Salako, University of Rostock, Rostock, Germany
Annalena Ulschmid, TU Vienna, Vienna, Austria

Contact: Mario Romero (mario.romero@liu.se)
Website: https://scifi-vis.github.io/

We propose a hybrid half-day workshop at IEEE VIS 2026, calling for participation from visualization researchers and science fiction creators in order to develop a systematic understanding of the two-way relationship these communities have long shared. We invite submissions of creative formats showcasing connections and inspiring future research. Our workshop plan includes a keynote, lightning talks, brainstorming, cross-community critique, affinity mapping, and discussion around identified themes.


TopoInVis Connect: Topology meets Artificial Intelligence

Organizers: Federico Iuricich, Clemson University
Yue Zhang, Oregon State University

Contact: Federico Iuricich (fiurici@clemson.edu)

Topological methods are playing an increasingly important role across visualization, machine learning, computational geometry, and other data-intensive disciplines. However, advances in these areas often evolve in parallel, with limited sustained cross-community interaction. TopoInVis Connect is a new workshop series designed to bridge these communities through the unifying lens of topology. The inaugural edition focuses on the intersection of Visualization (VIS) and Artificial Intelligence (AI), two fields that are increasingly leveraging topological techniques to analyze complex, high-dimensional data. The workshop fosters dialogue around structure-aware and interpretable approaches to machine learning, while emphasizing the role of visualization in making topological insights accessible and actionable. It combines peer-reviewed paper presentations with interactive, discussion-driven sessions centered on open challenges at the intersection of topology, visualization, and AI.


Uncertainty Visualization: How to Make it Interpretable, Integrable, and Accessible?

Organizers: Timbwaoga A. J. Ouermi (SCI Institute, University of Utah)
Tushar M. Athawale (Oak Ridge National Laboratory)
Chris R. Johnson (SCI Institute, University of Utah)
Kristi Potter (National Laboratory of the Rockies)
Paul Rosen (SCI Institute, University of Utah)
Dave Pugmire (Oak Ridge National Laboratory)
Antigoni Georgiadou (Oak Ridge National Laboratory)
Tim Gerrits (RWTH Aachen University)
Nadia Boukhelifa (INRAE, University Paris-Saclay)

Contact: Timbwaoga Ouermi (touermi@sci.utah.edu)
Website: https://tusharathawale.github.io/uncertainty-vis-workshop-2026/index.html

The 2024 and 2025 IEEE Uncertainty Visualization Workshops were highly successful, attracting over 75 attendees, including leading visualization researchers, and demonstrating strong community interest. Building on this momentum, we propose a 2026 edition of the Uncertainty Visualization Workshop that addresses key issues raised in the previous workshops. Discussions across those two workshops consistently highlighted three persistent bottlenecks in uncertainty visualization that cut across domains, tools, and user groups: interpretability, integrability, and accessibility. First, although many new uncertainty visualization techniques have been developed over the past decade, their growing complexity and diversity make them difficult to interpret for non-experts and even experienced researchers. Second, this interpretability gap in turn hinders integrability: scientists struggle to incorporate uncertainty visualization into their analysis pipelines, and the computational overhead of uncertainty propagation further limits its integration into existing workflows. Finally, the lack of uncertainty-aware capabilities in commonly used tools and software ecosystems reduces accessibility and prevents broader use. This workshop addresses these interconnected challenges by inviting contributions that advance interpretable representations, integrable computational methods, and accessible tools and frameworks. This year, we propose a more interactive structure featuring paper presentations, breakout discussions, and uncertainty-focused software demos (with backup plans in place) to directly tackle the identified bottlenecks. These formats are designed to stimulate interdisciplinary exchange among experts in visualization, AI, high-performance computing, and human-centered computing, enabling them to articulate open challenges and define a forward-looking research agenda for deploying practical uncertainty-aware systems.


vis4climate: Building a Transdisciplinary Climate Vis Community

Organizers: Christina Humer (ETH Zürich, Switzerland)
Andreas Hinterreiter (JKU Linz, Austria)
Aymeric Ferron (Inria and Université de Bordeaux, France)
Fanny Chevalier (University of Toronto, Canada)
Marc Streit (JKU Linz, Austria)
Menna El-Assady (ETH Zürich, Switzerland)
Luiz A. Morais (CIn-UFPE, Brazil)
Georgia Panagiotidou (King’s College London, UK)
Benjamin Bach (Inria, France)

Contact: Christina Humer (christina.humer@inf.ethz.ch)
Website: https://vis4climate.ivia.ch/

The need to understand, mitigate, and adapt to climate change and its resulting problems is greater than ever. Solutions can take many forms, ranging from understanding key factors in climate modeling, to monitoring forests and species distributions, to deciding how to model a sustainable energy or transportation grid, and finally, to communicating the implications to non-experts. This 4th IEEE VIS workshop on visualization and climate change aims to continue the discussion of the role visualization can play in mitigating climate change and to build a strong community of academics and practitioners. In contrast to the interdisciplinary role of visualization in other domains, climate change problems involve numerous and diverse stakeholders and therefore call for transdisciplinary collaborations among them. This workshop aims to elevate the role of visualization in combating climate change by creating a space for interactive discussions with invited guests and the scientific community. To this end, the workshop invites a diverse set of guests from policy, community engagement, and science.


Visual Analytics in the Age of Autonomous Scientific Discovery

Organizers: Shayan Monadjemi, Oak Ridge National Laboratory
Gabriel Appleby, National Laboratory of the Rockies
Quan Nguyen, Princeton University
Ayana Ghosh, Indian Institute of Technology, Madras
Christoph Heinzl, University of Passau
Remco Chang, Tufts University

Contact: Shayan Monadjemi (shayan.monadjemi@gmail.com)
Website: https://vaxautosci.org/

Artificial intelligence is rapidly transforming scientific workflows. In emerging self-driving laboratories (SDLs), autonomous agents design experiments, analyze results, and iteratively refine hypotheses within closed-loop pipelines, fundamentally shifting the role of the scientist. This transition creates new opportunities for visual analytics to enable oversight and steering of autonomous processes, facilitate the inspection and refinement of machine-generated hypotheses, and support effective human–AI collaboration in scientific discovery. This workshop positions visual analytics as a core enabler of autonomous scientific discovery and advances two complementary directions: (1) developing methods that support AI-accelerated science, and (2) leveraging AI-accelerated scientific platforms to advance visualization research into AI-driven workflows. We will encourage submissions at the intersection of visual analytics, self-driving labs, and scientific domains (e.g., materials science). The workshop will include an invited keynote presentation, paper presentations, demos, and group discussions that help us articulate a concrete research agenda for visual analytics in the age of autonomous science.


VisxVision: Workshop on Novel Directions in Vision Science and Visualization Research

Organizers: Arran Zeyu Wang, University of North Carolina-Chapel Hill, Chapel Hill, North Carolina, United States
Sheng Long, Northwestern University, Evanston, Illinois, United States
Ghulam Jilani Quadri, University of Oklahoma, Norman, Oklahoma, United States
Ouxun Jiang, Northwestern University, Evanston, Illinois, United States
Clementine Zimnicki, University of Wisconsin-Madison, Madison, Wisconsin, United States
Cindy Xiong Bearfield, Georgia Institute of Technology, Atlanta, Georgia, United States
Matthew Kay, Northwestern University, Evanston, Illinois, United States
Danielle Albers Szafir, University of North Carolina-Chapel Hill, Chapel Hill, North Carolina, United States

Contact: Arran Zeyu Wang (zeyuwang@cs.unc.edu)
Website: https://visxvision.com/

Visualization relies heavily on how people perceive and reason about data. While visualization research has drawn on low-level vision science principles, we often do not yet know how well these principles generalize to the more complex processes of viewing and interacting with visualizations. To address this, our Research Track primarily focuses on using vision science methods to better support VIS research. Further, there is an empirical bottleneck within the VIS community: many graphical perception studies remain under-replicated, potentially creating a gap between established theory and reproducible practice. Experimental norms in vision science offer a useful starting point for addressing this bottleneck, yet the VIS community still lacks a dedicated venue for sharing replication results and best practices. Beyond their empirical value, replication studies also offer researchers, particularly those newer to experimental work, a structured entry point for developing rigorous methodology. To this end, we introduce a Replication Track alongside the regular tracks, specifically designed to translate the methodological precision and practices of vision science into visualization. VisxVision provides a dedicated forum for researchers at the intersection of vision science, psychology, and data visualization, aiming to promote studies, tools, and discussions towards a more reliable theoretical foundation for visualization research.