IEEE VIS 2024 Content: KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts


Adam Coscia, Alex Endert


Room: Bayshore II

Session time: 2024-10-16T15:03:00Z
Exemplar figure, described by caption below
Evaluating generative LLMs for stereotypes and biases is hard. Fill-in-the-blank sentences used as prompts can reveal biases, yet many fill-in-the-blank analysis methods are limited to one sentence at a time. Our solution, KnowledgeVIS, makes it easy to create multiple sentence prompts and then visually compare LLM predictions across sentences. We studied how KnowledgeVIS helps developers close the loop of LLM evaluation, and we contribute guidelines for improving human-in-the-loop NLP. KnowledgeVIS is open-source and live at: https://github.com/AdamCoscia/KnowledgeVIS. For the full story, please read our paper!
Keywords

Visual analytics, language models, prompting, interpretability, machine learning.

Abstract

Recent growth in the popularity of large language models has led to their increased usage for summarizing, predicting, and generating text, making it vital to help researchers and engineers understand how and why they work. We present KnowledgeVIS, a human-in-the-loop visual analytics system for interpreting language models using fill-in-the-blank sentences as prompts. By comparing predictions between sentences, KnowledgeVIS reveals learned associations that intuitively connect what language models learn during training to natural language tasks downstream, helping users create and test multiple prompt variations, analyze predicted words using a novel semantic clustering technique, and discover insights using interactive visualizations. Collectively, these visualizations help users identify the likelihood and uniqueness of individual predictions, compare sets of predictions between prompts, and summarize patterns and relationships between predictions across all prompts. We demonstrate the capabilities of KnowledgeVIS with feedback from six NLP experts as well as three different use cases: (1) probing biomedical knowledge in two domain-adapted models; (2) evaluating harmful identity stereotypes across three general-purpose models; and (3) discovering facts and relationships across those same models.
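The core comparison the abstract describes — identifying which predicted fill-in-the-blank words are shared across prompt variants and which are unique to a single prompt — can be sketched in a few lines of plain Python. This is a minimal illustration with hypothetical prediction scores, not the authors' implementation; in practice the per-prompt word distributions would come from a masked language model.

```python
# Minimal sketch of cross-prompt prediction comparison, in the spirit of
# KnowledgeVIS. The prediction scores below are hypothetical; a real system
# would obtain them from a masked language model's top-k fill-in-the-blank
# predictions for each prompt.

def compare_predictions(preds):
    """Given {prompt: {word: probability}}, return (shared, unique):
    the set of words predicted for every prompt, and a map from each
    prompt to the words predicted only for that prompt."""
    vocabs = {prompt: set(words) for prompt, words in preds.items()}
    shared = set.intersection(*vocabs.values())
    unique = {
        prompt: words - set().union(*(v for q, v in vocabs.items() if q != prompt))
        for prompt, words in vocabs.items()
    }
    return shared, unique

# Hypothetical top-k predictions for two prompt variants.
preds = {
    "A doctor is [MASK].": {"busy": 0.21, "kind": 0.10, "male": 0.05},
    "A nurse is [MASK].":  {"busy": 0.18, "kind": 0.12, "female": 0.07},
}
shared, unique = compare_predictions(preds)
# Words unique to one prompt (here, the gendered predictions) are exactly
# the kind of contrast the system surfaces when probing for stereotypes.
```

A visual analytics system would render these sets with likelihood encodings rather than print them, but the set arithmetic captures the "uniqueness of individual predictions" comparison the abstract names.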