IEEE VIS 2025 Content: CFTree: Exploring Paths Through Counterfactuals

Fang Cao, Eli Brown

Room: Room 1.14

Keywords

counterfactuals, human-centered AI, interpretable machine learning, visual analytics

Abstract

Machine learning algorithms may be chosen for their effectiveness at prediction, but they lose impact when human decision makers do not understand the predictions well enough to trust them. Though machine learning algorithms that produce interpretable models are available, many factors influence model selection for a particular application, and many tools exist for building understanding post hoc. One way to develop insight is to probe through a concept humans use in making decisions -- an examination of alternate scenarios that might change the outcome, called counterfactuals. In this short paper, we build on work using counterfactuals for comprehending machine learning models by proposing a technique in which users explore counterfactuals as paths through a tree of possible data attribute changes. We extend the technique to groups of data points and, consequently, to groups of counterfactuals. We provide a prototype implementation and an evaluation with four users from different fields of expertise, who were able to apply CFTree to their own domain data and discover interesting attribute relationships.
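To make the core idea concrete, here is a minimal sketch of a counterfactual as a path of attribute edits, in the spirit the abstract describes. The toy loan-approval rule, the attributes (`income`, `debt`), and the fixed step sizes are all hypothetical illustrations, not the authors' CFTree method: the search simply expands a tree of single-attribute changes until the model's prediction flips.

```python
# Illustrative sketch only: a hypothetical model and attribute steps,
# not the CFTree technique from the paper.

def predict(x):
    """Toy loan-approval rule: approve when income is high and debt is low."""
    return "approved" if x["income"] >= 50 and x["debt"] <= 20 else "denied"

def counterfactual_path(x, steps, max_depth=5):
    """Breadth-first search over a tree of single-attribute edits.

    Each tree node is an edited copy of the instance; an edge applies one
    attribute change. Returns the first edit path whose endpoint changes
    the model's prediction, or None if none is found within max_depth.
    """
    original = predict(x)
    frontier = [(dict(x), [])]  # (candidate instance, path of edits so far)
    for _ in range(max_depth):
        next_frontier = []
        for cand, path in frontier:
            for attr, delta in steps.items():
                child = dict(cand)
                child[attr] += delta
                edit_path = path + [(attr, delta)]
                if predict(child) != original:
                    return edit_path  # counterfactual found
                next_frontier.append((child, edit_path))
        frontier = next_frontier
    return None

instance = {"income": 40, "debt": 30}
path = counterfactual_path(instance, steps={"income": 5, "debt": -5})
print(path)  # a sequence of (attribute, change) edits that flips the outcome
```

A path such as `[("income", 5), ("income", 5), ("debt", -5), ("debt", -5)]` reads as an actionable story for the decision maker: raise income by 10 and reduce debt by 10 to change the outcome. Exhaustive expansion grows exponentially with depth, so a practical tool would prune or rank branches rather than enumerate them all.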