Researchers waste 80% of LLM annotation costs by classifying one text at a time

Christian Pipal, Eva-Maria Vogel, Morgan Wack, Frank Esser

Abstract

Large language models (LLMs) are increasingly used for text classification across the social sciences, yet researchers overwhelmingly classify one text per variable per prompt. Coding 100,000 texts on four variables requires 400,000 API calls. Batching 25 items and stacking all four variables into a single prompt reduces this to 4,000 calls, cutting token costs by over 80%. Whether this degrades coding quality is unknown. We tested eight production LLMs from four providers on 3,962 expert-coded tweets across four tasks, varying batch size from 1 to 1,000 items and stacking up to 25 coding dimensions per prompt. Six of eight models maintained accuracy within 2 percentage points (pp) of the single-item baseline through batch sizes of 100. Variable stacking with up to 10 dimensions produced results comparable to single-variable coding, with degradation driven by task complexity rather than prompt length. Within this safe operating range, the measurement error from batching and stacking is smaller than typical inter-coder disagreement in the ground-truth data.
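
To make the arithmetic behind these call counts concrete, the sketch below assembles one batched, variable-stacked classification prompt. It is a minimal illustration under assumed conventions, not the authors' pipeline: the function name, the rubric format, and the JSON output contract are all hypothetical.

    import math

    N_TEXTS = 100_000  # corpus size from the abstract
    N_VARS = 4         # coding dimensions (variables)
    BATCH = 25         # items per prompt

    # One text, one variable per prompt: 100,000 * 4 = 400,000 API calls.
    calls_single = N_TEXTS * N_VARS

    # Batch 25 items and stack all four variables into each prompt:
    # ceil(100,000 / 25) = 4,000 API calls.
    calls_stacked = math.ceil(N_TEXTS / BATCH)

    def build_prompt(texts, variables):
        """Assemble one prompt coding every text on every variable (hypothetical format)."""
        var_lines = "\n".join(f"- {name}: {rubric}" for name, rubric in variables.items())
        item_lines = "\n".join(f"[{i}] {t}" for i, t in enumerate(texts, start=1))
        return (
            "Code each numbered text on every variable below.\n\n"
            f"Variables:\n{var_lines}\n\n"
            f"Texts:\n{item_lines}\n\n"
            "Return a JSON list of objects with key 'id' plus one key per variable."
        )

    print(calls_single, calls_stacked)  # 400000 4000

The instructions, rubrics, and output schema are paid for once per call and amortized across the 25 batched items, which is where the bulk of the token savings cited above comes from.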

Table of Contents

  1. Supporting Information

Figures (2)

  • Figure 1: Batching and stacking are safe within bounds. (A) Cost savings (%) vs. accuracy change (pp) relative to $b = 1$ for each model at each batch size. The dashed line traces the mean trajectory across all eight models; the shaded region marks the min--max range. Batch size labels mark progression along this trajectory. Six models remain near zero accuracy loss through $b = 100$ (${\sim}84\%$ cost savings). Two OpenAI reasoning models (green) collapse at $b \geq 250$, driving the shaded region downward. (B) Overall accuracy (mean across four variables) under stacking conditions at $b = 25$. $k = 1$: single-variable baseline from Study 1. $k = 4, 10, 25$: number of simultaneous coding dimensions in Study 2. Control (orange): same prompt length as $k = 25$ but only four variables plus filler text, showing that degradation reflects task complexity, not prompt length. (A toy cost-amortization sketch follows this figure list.)
  • Figure 2: Accuracy by model, variable, and batch size (Study 1). Each panel shows one of the four variables. Bars represent batch sizes from $b = 1$ (darkest) to $b = 1000$ (lightest). Six of eight models maintain stable accuracy through $b = 100$ across all four variables. GPT-5 Nano shows progressive degradation starting at $b = 5$. Topic accuracy increases with batch size for several models, possibly reflecting an auto-demonstration effect in which co-presented tweets provide implicit distributional information.
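
The cost-savings axis in Figure 1(A) can be approximated with a simple amortization model: instruction overhead (rubrics, output schema) is paid once per call and shared across the batch. The sketch below uses invented token counts and does not claim to reproduce the paper's own token accounting behind the ${\sim}84\%$ figure at $b = 100$.

    def per_item_tokens(batch_size, overhead=500, per_item=60):
        """Tokens attributable to one item when call overhead is shared across the batch.
        Both token counts are invented for illustration."""
        return overhead / batch_size + per_item

    def savings_pct(batch_size):
        """Token savings relative to single-item prompts (batch_size = 1)."""
        return 100 * (1 - per_item_tokens(batch_size) / per_item_tokens(1))

    for b in (1, 5, 25, 100, 250, 1000):
        print(f"b={b:>4}: {savings_pct(b):5.1f}% token savings")

Under any such model the savings saturate quickly: moving from $b = 100$ to $b = 1000$ adds only a few points, while Figure 1(A) shows two reasoning models collapsing at $b \geq 250$, so very large batches carry risk with little reward.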