From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models

Sean Welleck, Amanda Bertsch, Matthew Finlayson, Hailey Schoelkopf, Alex Xie, Graham Neubig, Ilia Kulikov, Zaid Harchaoui

TL;DR

The paper investigates how larger compute budgets at inference time can yield substantial performance gains for LLMs, complementing the traditional emphasis on training-time scaling. It organizes inference-time approaches into three families: token-level generation algorithms that sample tokens sequentially or search at the token level with access to logits; meta-generation algorithms that orchestrate black-box LLM calls in larger generation programs, enable backtracking, and incorporate external data; and efficient generation methods that reduce token costs and latency. Meta-generation is highlighted as a practical route to improve task performance (e.g., problem solving) and steer model outputs, leveraging multiple calls and external tools while potentially mitigating error accumulation. The survey bridges traditional NLP, modern LLM research, and ML-systems perspectives, emphasizing design choices and trade-offs for scalable inference.
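
To make the token-level family concrete, the sketch below (not taken from the paper) shows temperature scaling followed by top-k filtering over a model's next-token logits, which is the kind of logit-level access these decoding algorithms assume. The `logits` array, vocabulary size, and hyperparameter values are illustrative placeholders.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=50, rng=None):
    """Sample one token id from next-token logits via temperature + top-k filtering."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature   # sharpen or flatten the distribution
    k = min(top_k, scaled.size)
    kth_value = np.sort(scaled)[-k]                          # k-th largest score
    scaled = np.where(scaled >= kth_value, scaled, -np.inf)  # mask everything outside the top-k
    probs = np.exp(scaled - scaled.max())                    # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(scaled.size, p=probs))

# Toy usage over a 10-token vocabulary with made-up logits.
fake_logits = np.random.default_rng(0).normal(size=10)
print(sample_next_token(fake_logits, temperature=0.8, top_k=5))
```

Greedy decoding, nucleus (top-p) sampling, and beam search share the same interface; they differ mainly in how the next-token distribution is truncated or searched.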

Abstract

One of the most striking findings in modern research on large language models (LLMs) is that scaling up compute during training leads to better results. However, less attention has been given to the benefits of scaling compute during inference. This survey focuses on these inference-time approaches. We explore three areas under a unified mathematical formalism: token-level generation algorithms, meta-generation algorithms, and efficient generation. Token-level generation algorithms, often called decoding algorithms, operate by sampling a single token at a time or constructing a token-level search space and then selecting an output. These methods typically assume access to a language model's logits, next-token distributions, or probability scores. Meta-generation algorithms work on partial or full sequences, incorporating domain knowledge, enabling backtracking, and integrating external information. Efficient generation methods aim to reduce token costs and improve the speed of generation. Our survey unifies perspectives from three research communities: traditional natural language processing, modern LLMs, and machine learning systems.
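
As a minimal illustration of a meta-generation strategy, the sketch below implements best-of-N reranking over black-box LLM calls; `generate` and `score` are hypothetical stand-ins for an API call and an external scorer (e.g., a verifier or reward model), neither of which is prescribed by the survey.

```python
import random

def generate(prompt: str, temperature: float = 1.0) -> str:
    """Hypothetical black-box LLM call (in practice, an API request)."""
    return f"candidate answer to {prompt!r} (sample id {random.random():.3f})"

def score(prompt: str, candidate: str) -> float:
    """Hypothetical external scorer: a verifier, reward model, or test suite."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """Best-of-N: draw N independent samples, keep the highest-scoring one.
    Only full sequences are used, so no access to the model's logits is needed."""
    candidates = [generate(prompt, temperature=1.0) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

print(best_of_n("What is 17 * 24?", n=4))
```

Because the meta-generator operates on complete sequences rather than logits, the same pattern extends to more elaborate generation programs that refine, backtrack over, or combine multiple candidates.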