ViT-Explainer: An Interactive Walkthrough of the Vision Transformer Pipeline

Juan Manuel Hernandez, Mariana Fernandez-Espinosa, Denis Parra, Diego Gomez-Zara

Abstract

Transformer-based architectures have become the shared backbone of natural language processing and computer vision. However, understanding how these models operate remains challenging, particularly in vision settings, where images are processed as sequences of patch tokens. Existing interpretability tools often focus on isolated components or expert-oriented analysis, leaving a gap in guided, end-to-end understanding of the full inference pipeline. To bridge this gap, we present ViT-Explainer, a web-based interactive system that provides an integrated visualization of Vision Transformer inference, from patch tokenization to final classification. The system combines animated walkthroughs, patch-level attention overlays, and a vision-adapted Logit Lens within both guided and free exploration modes. A user study with six participants suggests that ViT-Explainer is easy to learn and use, helping users interpret and understand Vision Transformer behavior.
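As a concrete illustration of the first stage of the pipeline described above — dividing the image into patches and projecting them into an embedding space — here is a minimal NumPy sketch. The dimensions follow the common ViT-Base/16 configuration (224×224 RGB input, 16×16 patches, embedding dimension 768); the weights are random placeholders, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# ViT-Base/16 sizes: 224x224 RGB image, 16x16 patches, embedding dim 768.
H = W = 224
P = 16                    # patch side length
C = 3                     # RGB channels
D = 768                   # embedding dimension
N = (H // P) * (W // P)   # number of patches: 14 * 14 = 196

image = rng.standard_normal((H, W, C))  # placeholder input image

# 1. Divide the image into non-overlapping P x P patches.
patches = image.reshape(H // P, P, W // P, P, C).transpose(0, 2, 1, 3, 4)

# 2. Flatten each patch into a single vector of length P*P*C.
patches = patches.reshape(N, P * P * C)

# 3. Project each flattened patch into the embedding space via a
#    linear transformation (learned in a real model; random here).
W_embed = rng.standard_normal((P * P * C, D))
tokens = patches @ W_embed

# 4. Prepend a learnable [CLS] token and add positional embeddings,
#    yielding the token sequence the Transformer encoder consumes.
cls = rng.standard_normal((1, D))
pos = rng.standard_normal((N + 1, D))
sequence = np.concatenate([cls, tokens]) + pos

print(sequence.shape)  # (197, 768)
```

From here, the 197-token sequence passes through the stack of encoder blocks, and the classification head reads the final [CLS] token to produce logits.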

Paper Structure

This paper contains 21 sections and 4 figures.

Figures (4)

  • Figure 1: ViT-Explainer interface. Top: The input image is divided into non-overlapping patches, decomposed into RGB channels, flattened into vectors, and projected into an embedding space via a learned linear transformation. Bottom: A Transformer encoder block is visualized step by step, including layer normalization, Multi-Head Self-Attention, residual connections, and the MLP sublayer. The classification head produces logits from the class token, and users can navigate the processing stages through guided controls (top-left).
  • Figure 2: Multi-Head Self-Attention: ViT-Explainer animates the full attention computation to show how each head redistributes token information.
  • Figure 3: Attention Mapping: Visualizes how much attention a selected patch (or the [CLS] token) pays to other patches.
  • Figure 4: Logit-Lens: Layer-by-layer logit chart (colored curves for the top predicted classes) and a side panel to select and reshuffle classes for comparison.
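The attention computation animated in Figure 2, and the patch-level attention map of Figure 3, reduce to the following single-head sketch. All sizes are hypothetical ViT-Base/16 values (197 tokens: one [CLS] plus a 14×14 patch grid) and the projection weights are random, not taken from a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_head(X, Wq, Wk, Wv):
    """One attention head: project to queries/keys/values,
    score every token pair, then mix the values."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # token-to-token similarity
    A = softmax(scores)                      # each row sums to 1
    return A @ V, A                          # mixed values, attention weights

# Hypothetical sizes: 197 tokens ([CLS] + 196 patches), model dim 768, head dim 64.
N, D, d = 197, 768, 64
X = rng.standard_normal((N, D))
Wq, Wk, Wv = (rng.standard_normal((D, d)) for _ in range(3))

out, A = self_attention_head(X, Wq, Wk, Wv)

# Row i of A says how much attention token i pays to every other token.
# The [CLS] row (dropping its self-attention entry) reshaped onto the
# 14x14 patch grid gives the kind of overlay shown in Figure 3.
cls_map = A[0, 1:].reshape(14, 14)
print(out.shape, cls_map.shape)  # (197, 64) (14, 14)
```

A multi-head layer runs several such heads in parallel and concatenates their outputs; the tool animates each of these steps in sequence.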