IEEE VIS 2024

Honorable Mention

AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow

Dazhen Deng - Zhejiang University, Ningbo, China

Chuhan Zhang - Zhejiang University, Hangzhou, China

Huawei Zheng - Zhejiang University, Hangzhou, China

Yuwen Pu - Zhejiang University, Hangzhou, China

Shouling Ji - Zhejiang University, Hangzhou, China

Yingcai Wu - Zhejiang University, Hangzhou, China


Room: Bayshore V

Session time: 2024-10-18T12:30:00Z
Exemplar figure: The interface of AdversaFlow includes a Control Panel (A) to configure model parameters and adjust data sampling, an Embedding View (B) showing the projection of prompts, a Metric Monitor (C) displaying the key performance indicators of the model, an Adversarial Flow (D) facilitating multi-level exploration of models, an Instance List (E) showing prompt details, and a Fluctuation View (F) for investigating token-level uncertainty.
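To make the Fluctuation View's "token-level uncertainty" concrete, the following is a minimal sketch (not the authors' implementation) of one way such a quantity could be computed: the per-token entropy of the model's next-token distribution. The choice of model, tokenizer, and the helper function name are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def token_entropies(model, tokenizer, text: str) -> list[tuple[str, float]]:
    """Return (token, entropy) pairs; higher entropy = a more uncertain token."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits          # shape: (1, seq_len, vocab)
    probs = torch.softmax(logits[0, :-1], dim=-1)             # predicts tokens 2..n
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1) # per-position entropy
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0][1:].tolist())
    return list(zip(tokens, entropy.tolist()))

# Illustrative usage with a small open model (hypothetical choice):
# model = AutoModelForCausalLM.from_pretrained("gpt2")
# tokenizer = AutoTokenizer.from_pretrained("gpt2")
# for tok, h in token_entropies(model, tokenizer, "Adversarial prompts probe model safety."):
#     print(f"{tok:>12}  {h:.3f}")
```

A view like (F) could then plot these entropies along the token sequence to surface spans where the model's generation is unstable.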
Keywords

Visual Analytics for Machine Learning, Artificial Intelligence Security, Large Language Models, Text Visualization

Abstract

Large Language Models (LLMs) are powerful but also raise significant security concerns, particularly regarding the harm they can cause, such as generating fake news that manipulates public opinion on social media and providing responses that facilitate unethical activities. Traditional red teaming approaches for identifying AI vulnerabilities rely on manual prompt construction and expert knowledge. This paper introduces AdversaFlow, a novel visual analytics system designed to enhance LLM security against adversarial attacks through human-AI collaboration. AdversaFlow involves adversarial training between a target model and a red model, featuring unique multi-level adversarial flow and fluctuation path visualizations. These features provide insights into adversarial dynamics and LLM robustness, enabling experts to identify and mitigate vulnerabilities effectively. We present quantitative evaluations and case studies validating our system's utility and offering insights for future AI security solutions. Our method can enhance LLM security, supporting downstream scenarios like social media regulation by enabling more effective detection, monitoring, and mitigation of harmful content and behaviors.
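As a rough illustration of the red-teaming loop the abstract describes, the sketch below pairs a red model that proposes adversarial prompts with a target model under test and a harmfulness classifier that flags responses for expert review. All three callables (`red_model`, `target_model`, `safety_score`) and the threshold are hypothetical stand-ins, not AdversaFlow's actual API or training procedure.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RedTeamRecord:
    prompt: str        # adversarial prompt produced by the red model
    response: str      # target model's reply
    harm_score: float  # e.g., estimated probability the reply is unsafe, in [0, 1]

def red_team_round(
    seed_prompts: List[str],
    red_model: Callable[[str], str],       # rewrites a seed into an adversarial prompt
    target_model: Callable[[str], str],    # the LLM under test
    safety_score: Callable[[str], float],  # harmfulness classifier
    threshold: float = 0.5,
) -> List[RedTeamRecord]:
    """Run one adversarial round and keep the cases that cross the harm threshold."""
    flagged = []
    for seed in seed_prompts:
        adversarial = red_model(seed)
        response = target_model(adversarial)
        score = safety_score(response)
        if score >= threshold:
            flagged.append(RedTeamRecord(adversarial, response, score))
    return flagged
```

In a system like the one described, the records surviving each round would feed the visual views (embedding projection, adversarial flow, fluctuation paths) so experts can inspect where and how the target model fails.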