Semantic Pathway: An Interactive Visualization of Hidden States and Token Influence in LLMs
Mithilesh Singh
Klaus Mueller

Keywords
Large Language Models, Interpretability, Hidden States, Attention Weights, Semantic Pathway Visualization, Token Influence, Interactive Visualization, Transformer Models
Abstract
Transformer-based language models have demonstrated remarkable capabilities across various tasks, yet their internal mechanisms—such as layered representations, distributed attention, and evolving token semantics—remain challenging to interpret. We present Semantic Pathway, an interactive visual analytics tool designed to reveal how token representations evolve across layers in autoregressive Transformer models such as GPT-2. The system integrates layerwise semantic trajectories, attention overlays, and output probability views into a unified interface, enabling users to trace how meaning accumulates and decisions emerge during generation. To reduce visual and interaction complexity, Semantic Pathway incorporates attention-based influence filtering, optional nearest-token projections, and a Compare Mode for analyzing divergence across alternate outputs. The design prioritizes interpretability and usability, supporting both fine-grained inspection and high-level exploration of sequence modeling behavior. This work contributes to ongoing efforts to make language models more interpretable, educationally accessible, and open to diagnostic insight.
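To make the abstract's "attention-based influence filtering" concrete, here is a minimal sketch of one plausible reading of that idea: average a layer's attention weights over heads, then keep only token-to-token edges whose averaged weight clears a threshold, as a pathway view might before drawing links. The function name, threshold value, and data layout are illustrative assumptions, not the tool's actual API.

```python
# Hypothetical sketch of attention-based influence filtering: average
# attention over heads, then keep only edges above a threshold.
# Names and layout are illustrative, not Semantic Pathway's actual API.

def influence_edges(attn, threshold=0.25):
    """attn: list of per-head seq x seq matrices (row i attends to col j).
    Returns (src, dst, weight) triples whose head-averaged attention
    weight passes the threshold -- the edges a pathway view would draw."""
    n_heads = len(attn)
    seq = len(attn[0])
    edges = []
    for i in range(seq):
        for j in range(seq):
            w = sum(head[i][j] for head in attn) / n_heads
            if w >= threshold:
                edges.append((i, j, w))
    return edges

# Two heads over a 3-token sequence (each row sums to 1, as softmaxed
# attention does in a causal model: tokens attend only to earlier positions).
attn = [
    [[1.0, 0.0, 0.0],
     [0.6, 0.4, 0.0],
     [0.1, 0.2, 0.7]],
    [[1.0, 0.0, 0.0],
     [0.2, 0.8, 0.0],
     [0.3, 0.4, 0.3]],
]
print(influence_edges(attn))
```

Filtering at the head-averaged level, rather than per head, trades head-specific detail for a sparser, more legible graph, which matches the abstract's stated goal of reducing visual and interaction complexity.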