IEEE VIS 2024

Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems

Yongsu Ahn - University of Pittsburgh, Pittsburgh, United States

Quinn K Wolter - School of Computing and Information, University of Pittsburgh, Pittsburgh, United States

Jonilyn Dick - Quest Diagnostics, Pittsburgh, United States

Janet Dick - Quest Diagnostics, Pittsburgh, United States

Yu-Ru Lin - University of Pittsburgh, Pittsburgh, United States

Room: Bayshore I

Session time: 2024-10-13T17:55:00Z
Abstract

Recommender systems have become integral to digital experiences, shaping user interactions and preferences across platforms. Despite their widespread use, these systems often suffer from algorithmic biases that can lead to unfair and unsatisfactory user experiences. This study introduces an interactive tool that helps users understand and explore the impacts of algorithmic harms in recommender systems. By combining visualizations, counterfactual explanations, and interactive modules, the tool lets users investigate how biases such as miscalibration, stereotypes, and filter bubbles affect their recommendations. The tool's design is informed by in-depth user interviews, so that both general users and researchers benefit from increased transparency and personalized impact assessments, fostering a better understanding of algorithmic biases and contributing to more equitable recommendation outcomes. This work offers insights for future research and practical applications aimed at mitigating bias and enhancing fairness in machine learning algorithms.
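
The abstract names miscalibration as one of the harms the tool surfaces and counterfactual explanations as the mechanism for exploring them. As a rough illustration only, not the authors' implementation, the sketch below measures miscalibration as the KL divergence between the genre distribution of a user's history and that of their recommendations, then probes a simple counterfactual by removing selected interactions and recomputing. The `recommend` function, the `genres` mapping, and the genre-level framing are all assumptions introduced for this example.

```python
import numpy as np

def genre_distribution(items, genres):
    # Empirical genre distribution for a list of item IDs.
    # `genres` maps item ID -> genre label (hypothetical data layout).
    labels = sorted(set(genres.values()))
    counts = np.array([sum(1 for i in items if genres[i] == g) for g in labels],
                      dtype=float)
    return counts / counts.sum()  # assumes `items` is non-empty

def miscalibration(history, recs, genres, eps=1e-9):
    # One common calibration measure: KL divergence between the genre
    # distribution of the user's history (p) and of their recommendations (q).
    p = genre_distribution(history, genres)
    q = np.clip(genre_distribution(recs, genres), eps, None)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def counterfactual_shift(history, removed, recommend, genres, k=10):
    # "What if I had never interacted with these items?"
    # `recommend(history, k)` is a stand-in for the underlying recommender.
    altered = [i for i in history if i not in removed]
    before = miscalibration(history, recommend(history, k), genres)
    after = miscalibration(altered, recommend(altered, k), genres)
    return before, after
```

An interactive tool could show the before and after scores side by side, so a user can see how much of the skew in their recommendations is attributable to the interactions they chose to remove.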