
Robust Batch-Level Query Routing for Large Language Models under Cost and Capacity Constraints

Jelena Markovic-Voronov, Kayhan Behdin, Yuanda Xu, Zhengze Zhou, Zhipeng Wang, Rahul Mazumder

Abstract

We study the problem of routing queries to large language models (LLMs) under cost, GPU resource, and concurrency constraints. Prior per-query routing methods often fail to control batch-level cost, especially under non-uniform or adversarial batching. To address this, we propose a batch-level, resource-aware routing framework that jointly optimizes model assignment for each batch while respecting cost and model capacity limits. We further introduce a robust variant that accounts for uncertainty in predicted LLM performance, along with an offline instance allocation procedure that balances quality and throughput across multiple models. Experiments on two multi-task LLM benchmarks show that robustness improves accuracy by 1-14% over non-robust counterparts (depending on the performance estimator), batch-level routing outperforms per-query methods by up to 24% under adversarial batching, and optimized instance allocation yields additional gains of up to 3% compared to a non-optimized allocation, all while strictly satisfying cost and GPU resource constraints.
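The robust variant referenced above replaces point performance estimates with lower prediction-interval bounds (the experiments use the 10% quantile, $Q=10$; see Figure 3). As a minimal sketch of one way such bounds could be computed, assuming an ensemble or bootstrap of performance predictions is available (the function name and array layout are illustrative assumptions, not the paper's method):

```python
import numpy as np

def robust_lower_bounds(pred_samples: np.ndarray, q: float = 10.0) -> np.ndarray:
    """Quantile-based lower bounds on predicted performance.

    pred_samples: shape (S, N, M) -- S sampled predictions (e.g., from a
    bootstrap or an ensemble of estimators) for N queries and M LLMs.
    Returns an (N, M) array: the q-th percentile across the S samples,
    used in place of the point estimates a_{i,j} in the robust objective.
    """
    return np.percentile(pred_samples, q, axis=0)
```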


Figures (14)

  • Figure 1: The batch-level LLM routing framework assigns an LLM to each query within a batch by solving a constrained optimization problem that maximizes average per-query performance. The formulation enforces both a global monetary cost budget on total inference cost and individual capacity constraints for each LLM. Each query is assigned to exactly one LLM, so among $x_{i,1}, \ldots, x_{i,M}$ only one equals 1 and the rest are 0. Since the true performance of an LLM on a new query is unknown at decision time, it must be estimated via $a_{i,j}$. To account for estimation uncertainty in $a_{i,j}$, the robust variant replaces the point performance estimates in the objective with the lower bounds $\underline{a}_{i,j}$ of their corresponding prediction intervals, thereby optimizing for worst-case performance within the estimated uncertainty range. (A minimal solver sketch of this formulation follows the figure list.)
  • Figure 2: Comparison of inference cost for each batch under per-query routing (Eq. \ref{eq:perquery}), for different values of $\lambda$. Routing is performed using MIRT, with batches constructed either randomly to yield approximately uniform difficulty (left) or adversarially to create difficult and easy batches (right).
  • Figure 3: Average test set performance as a function of log total cost under per-query optimization for the two datasets. Each curve corresponds to a different performance estimator and shows the trade-off as the $\lambda$ parameter varies. The robust estimators are calculated based on the 10% quantile ($Q=10$). We also include individual LLMs in the plots; these points correspond to benchmark-reported performance and cost values used as baselines in the simulation. For plot readability, we restrict these to models whose average test performance exceeds 70% on Dataset 1 and 60% on Dataset 2.
  • Figure 4: Average test performance versus maximal batch-level average per-query cost across batches, comparing per-query optimization and batch-level optimization for both datasets. Results are shown for randomly constructed batches (left) and adversarial batches (right).
  • Figure 5: Average test performance versus GPU budget, comparing data-dependent (optimized) allocation with fixed (pre-specified) numbers of open-source model instances. Colors indicate different bounds on the average cost per query $C$.
  • ...and 9 more figures
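To make the Figure 1 formulation concrete, below is a minimal sketch of the batch-level routing problem as a mixed-integer program, written with the open-source PuLP modeling library. The function and argument names (route_batch, a, c, budget, capacity, robust_lower) are illustrative assumptions; the paper does not prescribe an implementation, and this sketch encodes only the stated objective and constraints.

```python
import pulp

def route_batch(a, c, budget, capacity, robust_lower=None):
    """Assign each of N queries in a batch to exactly one of M LLMs.

    a[i][j]      -- predicted performance of LLM j on query i
    c[i][j]      -- monetary cost of serving query i with LLM j
    budget       -- total cost budget for the batch
    capacity[j]  -- max number of queries LLM j can serve in this batch
    robust_lower -- optional lower prediction-interval bounds; when given,
                    they replace the point estimates (robust variant)
    """
    scores = robust_lower if robust_lower is not None else a
    n, m = len(a), len(a[0])

    prob = pulp.LpProblem("batch_routing", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", (range(n), range(m)), cat="Binary")

    # Objective: average per-query (estimated) performance.
    prob += pulp.lpSum(scores[i][j] * x[i][j]
                       for i in range(n) for j in range(m)) / n

    # Each query is assigned to exactly one LLM.
    for i in range(n):
        prob += pulp.lpSum(x[i][j] for j in range(m)) == 1

    # Global monetary budget over the whole batch.
    prob += pulp.lpSum(c[i][j] * x[i][j]
                       for i in range(n) for j in range(m)) <= budget

    # Per-model capacity (concurrency) limits.
    for j in range(m):
        prob += pulp.lpSum(x[i][j] for i in range(n)) <= capacity[j]

    # Assumes a feasible budget/capacity combination; in practice, check
    # pulp.LpStatus[prob.status] before reading the solution.
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [[int(x[i][j].value()) for j in range(m)] for i in range(n)]
```

Passing the prediction-interval lower bounds $\underline{a}_{i,j}$ via robust_lower yields the robust variant: the feasible set is unchanged, and only the objective switches to worst-case estimated performance.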