Haiku to Opus in Just 10 bits: LLMs Unlock Massive Compression Gains

Roy Rinberg, Annabelle Michael Carrell, Simon Henniger, Nicholas Carlini, Keri Warr

Abstract

We study the compression of LLM-generated text across lossless and lossy regimes, characterizing a compression-compute frontier where more compression is possible at the cost of more compute. For lossless compression, domain-adapted LoRA adapters can improve LLM-based arithmetic coding by 2x over compression with the base LLM alone. For lossy compression, prompting a model for a succinct rewrite then applying arithmetic coding can achieve compression ratios of approximately 0.03, a 2x improvement over compressing the original response. We further introduce Question-Asking compression (QA), an interactive lossy protocol inspired by the game 'Twenty Questions'. A small model iteratively refines its response by asking yes/no questions to a stronger model, transferring exactly one bit per answer. On 8 benchmarks spanning math, science, and code, 10 binary questions recover 23% to 72% of the capability gap between a small and large model on standard benchmarks and 7% to 38% on harder benchmarks, achieving compression ratios of 0.0006 to 0.004. This is over 100x smaller than prior LLM-based compression (Deletang et al., 2024), suggesting that interactive protocols can transfer knowledge far more efficiently than transmitting full responses.
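
To make the interactive protocol concrete, the sketch below gives one plausible reading of Question-Asking (QA) compression as described above. It is an illustration only, not the authors' implementation: the small_model and large_model_yes_no callables, the prompt wording, and the single-draft refinement loop are all assumptions.

```python
from typing import Callable

def qa_compress(
    task: str,
    small_model: Callable[[str], str],          # weak model: prompt -> text (assumed interface)
    large_model_yes_no: Callable[[str], bool],  # strong model: prompt -> one yes/no bit (assumed interface)
    num_questions: int = 10,
) -> tuple[str, list[bool]]:
    """Refine a small model's draft using yes/no answers from a stronger model;
    each answer transfers exactly one bit."""
    draft = small_model(f"Solve the following task:\n{task}")
    bits: list[bool] = []  # the only information received from the strong model

    for _ in range(num_questions):
        # The small model picks the single yes/no question it expects to be
        # most informative about its current draft.
        question = small_model(
            f"Task:\n{task}\n\nCurrent draft:\n{draft}\n\n"
            "Ask one yes/no question whose answer would most improve the draft."
        )
        # The strong model replies with a single bit.
        answer = large_model_yes_no(f"Task:\n{task}\n\nQuestion:\n{question}")
        bits.append(answer)

        # The small model revises its draft in light of that one bit.
        draft = small_model(
            f"Task:\n{task}\n\nDraft:\n{draft}\n\n"
            f"Question: {question}\nAnswer: {'yes' if answer else 'no'}\n"
            "Revise the draft using this answer."
        )

    return draft, bits
```

Whatever the exact prompts, the information received from the strong model is just num_questions bits (10 in the experiments above), independent of the length of the final response, which is what drives the very small compression ratios reported.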

Paper Structure

This paper contains 92 sections, 2 equations, 24 figures, 26 tables, 3 algorithms.

Figures (24)

  • Figure 1: Compression ratio vs. number of candidates $N$ on AIME (left) and MBPP (right) problems, using Opus as both generator and compressor. Just Ask (blue) achieves the best compression by explicitly requesting succinct rewrites, roughly halving the compression ratio compared to Shortest-of-N methods. Among Shortest-of-N variants, Temperature Sampling (green) outperforms Single Prompt (red), likely because single-prompt solutions share structural patterns. Lower is better.
  • Figure 2: Overview of the compression mechanism and its use in an interactive protocol between an SLM and an LLM.
  • Figure 3: Accuracy of random selection (dashed) versus best-compression selection (solid) on 90 AIME problems using Opus. Random selection reports the expected accuracy of picking uniformly among $N$ candidates. Across all three generation strategies, selecting the most compressible candidate yields accuracy within a few percentage points of random selection.
  • Figure 4: Absolute compression ratio (top) and relative compression ratio normalized to Temperature Sampling at $N{=}1$ (bottom) on AIME (left) and MBPP (right) using Opus. The dashed gray line at 1.0 indicates the baseline: values below 1.0 represent improved compression, while values above 1.0 indicate worse compression than the baseline. Lower is better.
  • Figure 5: Absolute compression ratio (top) and relative compression ratio normalized to Temperature Sampling at $N{=}1$ (bottom) on AIME (left) and MBPP (right) using Haiku. The dashed gray line at 1.0 indicates the baseline: values below 1.0 represent improved compression, while values above 1.0 indicate worse compression than the baseline. Lower is better.
  • ...and 19 more figures