Understanding and Mitigating Tokenization Bias in Language Models
Buu Phan, Marton Havasi, Matthew Muckley, Karen Ullrich
TL;DR
The paper addresses tokenization-induced bias in autoregressive language models, showing that encoding schemes such as maximum prefix encoding (MPE) and byte-pair encoding (BPE) bias next-token and character-level predictions, and that these biases persist no matter how much data the model is trained on. It introduces bias-correction methods, Maximum Prefix Correction (MPC) for MPE and an analogous correction for BPE, that compute unbiased character-level probabilities $P(x_{n+1}^{N}|x_{1}^{n})$ from tokenized models without finetuning, with MPC's complexity scaling linearly in sequence length. The approach is validated in a Markov-chain setup, where baseline token-conditioned probabilities $P(x_{n+1}|t_{1}^{i})$ fail to recover the true transition dynamics, while the proposed methods accurately recover $P(x_{n+1}^{N}|x_{1}^{n})$ and can simulate token-free behavior. This work provides a theoretical and practical framework for unbiased evaluation and cross-vocabulary inference in tokenized LMs, potentially enabling seamless transfer between tokenized and token-free representations without retraining.
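To make the corrected quantity concrete: a naive way to obtain an unbiased character-level probability from a token-level model is to marginalize over all token sequences consistent with the character prefix. The sketch below does this by brute-force enumeration; `lm_next_token_probs`, the toy vocabulary, and the uniform token distribution are hypothetical stand-ins, and the enumeration is exponential in the worst case, so this illustrates the quantity being computed rather than the paper's linear-time MPC algorithm.

```python
VOCAB = ["a", "b", "ab", "ba"]  # toy subword vocabulary (hypothetical)

def lm_next_token_probs(token_prefix: tuple) -> dict:
    """Hypothetical stand-in for an autoregressive token LM: here uniform."""
    p = 1.0 / len(VOCAB)
    return {tok: p for tok in VOCAB}

def string_prefix_prob(s: str) -> float:
    """P(generated text starts with s): sum over minimal token covers of s."""
    def rec(covered: str, token_prefix: tuple) -> float:
        if len(covered) >= len(s):
            return 1.0  # the last token reached (or overshot) the target
        remaining = s[len(covered):]
        total = 0.0
        for tok, p in lm_next_token_probs(token_prefix).items():
            # A token is consistent if it continues along s: either it is a
            # prefix of the remaining characters, or (for the final token)
            # the remaining characters are a prefix of it.
            if remaining.startswith(tok) or tok.startswith(remaining):
                total += p * rec(covered + tok, token_prefix + (tok,))
        return total
    return rec("", ())

def char_conditional(s: str, c: str) -> float:
    """Unbiased P(next character = c | characters so far = s)."""
    return string_prefix_prob(s + c) / string_prefix_prob(s)

print(char_conditional("ab", "a"))  # differs from any single next-token probability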
Abstract
State-of-the-art language models are autoregressive and operate on subword units known as tokens. Specifically, one must encode the conditioning string into a list of tokens before passing it to the language model for next-token prediction. We show that popular encoding schemes, such as maximum prefix encoding (MPE) and byte-pair encoding (BPE), induce a sampling bias that cannot be mitigated with more training or data. To counter this universal problem, we propose a novel algorithm for each encoding scheme above to obtain unbiased estimates from any language model trained on tokenized data. Our methods do not require finetuning the model, and their complexity, defined as the number of model runs, scales linearly with the sequence length in the case of MPE. As a result, we show that one can simulate token-free behavior from a tokenized language model. We empirically verify the correctness of our method through a Markov-chain setup, where it accurately recovers the transition probabilities, as opposed to the conventional method of directly prompting tokens into the language model.
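For intuition on the Markov-chain result, the following minimal sketch (a binary alphabet, a toy vocabulary, and greedy longest-prefix-match encoding standing in for MPE; not the paper's exact experimental setup) shows how directly conditioning on tokens yields a biased next-character estimate that no amount of data corrects:

```python
import random

# True character-level Markov chain (assumed for illustration).
P = {"a": {"a": 0.7, "b": 0.3}, "b": {"a": 0.4, "b": 0.6}}
VOCAB = ["a", "b", "ab"]  # "ab" merges two characters into one token

def sample_chain(n: int, rng: random.Random) -> str:
    """Draw a length-n character string from the Markov chain."""
    s = rng.choice("ab")
    while len(s) < n:
        r, cum = rng.random(), 0.0
        for c, p in P[s[-1]].items():
            cum += p
            if r < cum:
                s += c
                break
    return s

def mpe_encode(s: str) -> list:
    """Greedy longest-prefix-match encoding (MPE)."""
    toks, i = [], 0
    while i < len(s):
        tok = max((t for t in VOCAB if s.startswith(t, i)), key=len)
        toks.append(tok)
        i += len(tok)
    return toks

# Estimate P(next char = "b" | previous token is exactly "a") from encodings.
# Under MPE, a standalone token "a" is only emitted when NOT followed by "b"
# (otherwise "ab" would have been merged), so the estimate collapses to ~0
# even though the true P(b|a) is 0.3 -- a bias that more data cannot fix.
rng = random.Random(0)
num, den = 0, 0
for _ in range(2000):
    s = sample_chain(50, rng)
    pos = 0
    for t in mpe_encode(s)[:-1]:
        pos += len(t)
        if t == "a":
            den += 1
            num += (s[pos] == "b")
print(f"token-conditioned estimate of P(b|a): {num/den:.3f} (true: 0.3)")
```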
