
Reward-Based Online LLM Routing via NeuralUCB

Ming-Hua Tsai, Phat Tran

Abstract

This study investigates the use of NeuralUCB for cost-aware large language model (LLM) routing. Existing routing approaches can be broadly grouped into supervised routing methods and partial-feedback methods, each with different tradeoffs in efficiency and adaptivity. We implement a NeuralUCB-based routing policy and evaluate it on RouterBench under a simulated online setting. Experimental results show that the proposed method consistently outperforms random and min-cost baselines in utility reward. Compared with the max-quality reference, our method achieves substantially lower inference cost while maintaining competitive reward. These findings suggest that NeuralUCB is a promising approach for cost-aware LLM routing, while also highlighting remaining challenges in action discrimination and exploration.
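The abstract describes selecting an LLM by balancing a predicted utility reward against an exploration bonus. As a rough illustration of how a NeuralUCB-style policy makes that choice, the sketch below uses a tiny one-hidden-layer scorer per model and the network gradient as the feature vector for the UCB bonus. All names, shapes, and hyperparameters here are invented for illustration; this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


class NeuralUCBRouter:
    """Minimal NeuralUCB-style routing sketch (illustrative only).

    A small network scores each (context, model) pair; the UCB bonus uses
    the network's parameter gradient as a feature vector, as in NeuralUCB.
    """

    def __init__(self, n_models, ctx_dim, hidden=16, lam=1.0, beta=1.0, lr=1e-2):
        self.n_models = n_models
        self.beta = beta  # exploration strength (assumed hyperparameter)
        self.lr = lr
        # One tiny network per candidate model, to keep the sketch simple.
        self.W1 = [0.1 * rng.standard_normal((hidden, ctx_dim)) for _ in range(n_models)]
        self.w2 = [0.1 * rng.standard_normal(hidden) for _ in range(n_models)]
        d = hidden * ctx_dim + hidden  # flattened parameter count per model
        self.Zinv = [np.eye(d) / lam for _ in range(n_models)]  # inverse design matrix

    def _forward(self, a, x):
        h = np.tanh(self.W1[a] @ x)
        return float(self.w2[a] @ h), h

    def _grad(self, a, x, h):
        # Gradient of the scalar score w.r.t. all parameters, flattened.
        dh = (1.0 - h ** 2) * self.w2[a]  # backprop through tanh
        return np.concatenate([np.outer(dh, x).ravel(), h])

    def select(self, x):
        # Pick the model maximizing predicted reward + UCB bonus.
        scores = []
        for a in range(self.n_models):
            mu, h = self._forward(a, x)
            g = self._grad(a, x, h)
            bonus = self.beta * np.sqrt(g @ self.Zinv[a] @ g)
            scores.append(mu + bonus)
        return int(np.argmax(scores))

    def update(self, a, x, reward):
        mu, h = self._forward(a, x)
        g = self._grad(a, x, h)
        # Sherman-Morrison rank-1 update of the inverse design matrix.
        Zg = self.Zinv[a] @ g
        self.Zinv[a] -= np.outer(Zg, Zg) / (1.0 + g @ Zg)
        # One SGD step on squared error against the observed utility reward.
        err = mu - reward
        dh = (1.0 - h ** 2) * self.w2[a]
        self.W1[a] -= self.lr * err * np.outer(dh, x)
        self.w2[a] -= self.lr * err * h
```

In an online loop, `select` would be called on each query's context embedding, the chosen model's utility reward (quality minus cost, under the paper's setting) observed, and `update` applied before the next query.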


Paper Structure

This paper contains 9 sections, 19 equations, 4 figures, and 1 algorithm.

Figures (4)

  • Figure 1: Architecture of UtilityNet. The utility branch predicts the utility reward for each context-action pair, while the gating branch predicts whether UCB-bonus-based action selection should be activated.
  • Figure 2: Reward comparison of NeuralUCB, RouteLLM-BERT, and two simple baselines, random and min-cost, under the simulated online routing setting.
  • Figure 3: Encoder ablation under the simulated online routing setting. We compare four text encoders, multilingual-e5-large-instruct, all-mpnet-base-v2, Qwen3-Embedding-0.6B, and all-MiniLM-L6-v2, using both average reward and cumulative reward.
  • Figure 4: Comparison between the proposed NeuralUCB policy and the max-quality reference in terms of cost and selected quality under the simulated online routing setting.