
Understanding Transformers and Attention Mechanisms: An Introduction for Applied Mathematicians

Michel Fabrice Serret

Abstract

This document provides a brief introduction to the attention mechanism used in modern language models based on the Transformer architecture. We first illustrate how text is encoded as vectors and how the attention mechanism processes these vectors to encode semantic information. We then describe Multi-Headed Attention, examine how the Transformer architecture is built and look at some of its variants. Finally, we provide a glimpse at modern methods to reduce the computational and memory cost of attention, namely KV caching, Grouped Query Attention and Latent Attention. This material is aimed at the applied mathematics community and was written as an introductory presentation in the context of the IPAM Research Collaboration Workshop entitled "Randomized Numerical Linear Algebra" (RNLA), for the project "Randomization in Transformer models".
