IEEE VIS 2024

Honorable Mention

When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech

Samuel Reinders - Monash University, Melbourne, Australia

Matthew Butler - Monash University, Melbourne, Australia

Ingrid Zukerman - Monash University, Clayton, Australia

Bongshin Lee - Yonsei University, Seoul, Republic of Korea; Microsoft Research, Redmond, United States

Lizhen Qu - Monash University, Melbourne, Australia

Kim Marriott - Monash University, Melbourne, Australia

Screen-reader Accessible PDF

Room: Bayshore I

Session time: 2024-10-17T18:09:00Z
Exemplar figure, described by caption below
We explored how refreshable tactile displays (RTDs) can be combined with conversational agents to assist people who are blind or have low vision (BLV) in undertaking data analysis activities. We used a Wizard-of-Oz method, allowing participants to manipulate charts rendered on the RTD, perform touch gestures, and ask the conversational agent questions to aid their understanding. Pictured is an RTD with a stacked bar chart rendered on the screen. A user is reaching out with both hands, touching raised pins on the RTD that make up the different components of the bar chart.
Keywords

Accessible data visualization, refreshable tactile displays, conversational agents, interactive data exploration, Wizard of Oz study, people who are blind or have low vision

Abstract

Despite the recent surge of research efforts to make data visualizations accessible to people who are blind or have low vision (BLV), how to support BLV people's data analysis remains an important and challenging question. As refreshable tactile displays (RTDs) become cheaper and conversational agents continue to improve, their combination provides a promising approach to support BLV people's interactive data exploration and analysis. To understand how BLV people would use and react to a system combining an RTD with a conversational agent, we conducted a Wizard-of-Oz study with 11 BLV participants, where they interacted with line charts, bar charts, and isarithmic maps. Our analysis of participants' interactions led to the identification of nine distinct patterns. We also learned that the choice of modalities depended on the type of task and prior experience with tactile graphics, and that participants strongly preferred the combination of RTD and speech to a single modality. In addition, participants with more tactile experience described how tactile images facilitated a deeper engagement with the data and supported independent interpretation. Our findings will inform the design of interfaces for such interactive mixed-modality systems.