The Necessity of Setting Temperature in LLM-as-a-Judge

Lujun Li, Lama Sleem, Yangjie Xu, Yewei Song, Aolin Jia, Jerome Francois, Radu State

Abstract

LLM-as-a-Judge has emerged as an effective and low-cost paradigm for evaluating text quality and factual correctness. Prior studies have shown substantial agreement between LLM judges and human experts, even on tasks that are difficult to assess automatically. In practice, researchers commonly employ fixed temperature configurations during evaluation, with values of 0.1 and 1.0 being the most prevalent choices, a convention that is largely empirical rather than principled. However, recent research suggests that LLM performance exhibits non-trivial sensitivity to temperature settings, that lower temperatures do not universally yield optimal outcomes, and that such effects are highly task-dependent. This raises a critical research question: does temperature influence judge performance in LLM-centric evaluation? To address it, we systematically investigate the relationship between temperature and judge performance through a series of controlled experiments, and we further adopt a causal inference framework within our empirical statistical analysis to rigorously examine the direct causal effect of temperature on judge behavior, offering actionable engineering insights for the design of LLM-centric evaluation pipelines.
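
As a concrete illustration of the convention the abstract describes, the sketch below pins the judge's decoding temperature explicitly instead of relying on a provider default. This is a minimal example assuming the OpenAI Python client; the judge model, prompt, and 1-5 rubric are hypothetical and are not taken from the paper.

    # Minimal LLM-as-a-judge call with an explicitly pinned decoding temperature.
    # The client, model name, and rubric below are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    JUDGE_PROMPT = (
        "You are an impartial judge. Rate the factual correctness of the answer "
        "below on a 1-5 scale. Reply with the number only.\n\n"
        "Question: {question}\nAnswer: {answer}"
    )

    def judge(question: str, answer: str, temperature: float = 0.1) -> str:
        # temperature=0.1 mirrors one of the two prevalent defaults (0.1 and 1.0);
        # sweeping this argument is exactly the manipulation the paper studies.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical judge model
            temperature=temperature,
            messages=[{
                "role": "user",
                "content": JUDGE_PROMPT.format(question=question, answer=answer),
            }],
        )
        return response.choices[0].message.content.strip()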

Paper Structure

This paper contains 22 sections, 6 equations, 3 figures, 4 tables.

Figures (3)

  • Figure 1: LLM-as-a-Judge paradigm
  • Figure 2: Pearson correlation matrices between decoding temperature (T) and the evaluation metrics agreement (A), consistency (C), and error rate (E), across four LLMs (see the sketch after this list).
  • Figure 3: Comprehensive ATE Analysis Across Moderators, Temperature, and Feature Importance. Row 1: Temperature trends of different models (shared legend), showing mean tendencies across temperature ranges (lines indicate averages, not ATE values). Row 2: Boxplots of Average Treatment Effect (ATE) distributions across three moderators for each outcome metric (dashed line denotes ATE = 0). Row 3: SHAP beeswarm plots ranking feature importance by impact magnitude for each outcome metric. All panels share consistent bold styling and bright color palettes for clarity.
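
To give a flavor of the analyses behind these figures, the sketch below computes a temperature-vs-metrics Pearson correlation matrix in the spirit of Figure 2, followed by a deliberately naive difference-of-means stand-in for the ATE of Figure 3. The column names and toy values are illustrative assumptions, not the paper's data, and the naive contrast is not the paper's causal estimator.

    # Toy Pearson correlation matrix between temperature (T) and the metrics
    # agreement (A), consistency (C), and error rate (E). All values are
    # invented for illustration only; they are not results from the paper.
    import pandas as pd

    runs = pd.DataFrame({
        "T": [0.0, 0.1, 0.5, 1.0, 1.5],       # decoding temperature per run
        "A": [0.82, 0.81, 0.79, 0.74, 0.66],  # agreement with human labels
        "C": [0.95, 0.93, 0.88, 0.80, 0.70],  # consistency across repeated runs
        "E": [0.01, 0.01, 0.02, 0.05, 0.11],  # error rate
    })

    print(runs.corr(method="pearson").round(2))  # 4x4 matrix, as in Figure 2

    # Naive ATE stand-in: mean agreement at high temperature minus mean agreement
    # at low temperature, ignoring moderators (the paper's causal framework does more).
    low = runs[runs["T"] <= 0.1]
    high = runs[runs["T"] >= 1.0]
    print(f"naive ATE on agreement: {high['A'].mean() - low['A'].mean():+.3f}")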