TrustME: A Context-Aware Explainability Model to Promote User Trust in Guidance
Maath Musleh, Renata Georgia Raidou, Davide Ceneda

DOI: 10.1109/TVCG.2025.3562929
Keywords
Artificial intelligence, Explainable AI, Decision making, Context modeling, Analytical models, Adaptation models, Visual analytics, Usability
Abstract
Guidance-enhanced approaches support users in making sense of their data and in overcoming challenging analytical scenarios. While recent literature underscores the value of guidance, a lack of clear explanations motivating system interventions may still undermine guidance effectiveness. Hence, guidance-enhanced visual analytics (VA) approaches require meticulous design, with contextual adjustments for developing appropriate explanations. Our article discusses the concept of explainable guidance and how it impacts the user–system relationship, specifically a user's trust in guidance within the VA process. We subsequently propose a model that supports the design of explainability strategies for guidance in VA. The model builds on the flourishing literature in explainable AI, available guidelines for developing effective guidance in VA systems, and accrued knowledge of user–system trust dynamics. Our model addresses challenges of guidance adoption and context effectiveness by fostering trust through appropriately designed explanations. To demonstrate the model's value, we employ it to design explanations within two existing VA scenarios. We also describe a design walk-through with a guidance expert to showcase how our model supports designers in clarifying the rationale behind system interventions and in designing explainable guidance.