Transformers in the Dark: Navigating Unknown Search Spaces via Bandit Feedback

Jungtaek Kim, Thomas Zeng, Ziqian Lin, Minjae Lee, Chungpa Lee, Jy-yong Sohn, Hyung Il Koo, Kangwook Lee

Abstract

Problem solving with Large Language Models (LLMs) can be made more effective when they are paired with external search algorithms. By viewing the space of diverse ideas and their follow-up possibilities as a tree, a search algorithm can navigate this space and guide the LLM toward better solutions more efficiently. While the search algorithm enables an effective balance between exploration and exploitation of the tree-structured space, the need for an external component can complicate the overall problem-solving process. We therefore pose the following question: can LLMs, or their underlying Transformer architectures, approximate a search algorithm? To answer this question, we first introduce a simplified framework in which tree extensions and feedback signals are externally specified, allowing for a controlled evaluation of search capabilities. We call this setting unknown tree search with bandit feedback. Within this setting, we show that Transformers are theoretically expressive enough to implement distinct search strategies and can be trained from scratch to approximate those strategies. Our Transformer models also show signs of generalizing to unseen conditions such as longer horizons and deeper trees. Furthermore, we demonstrate that continued task-focused training, i.e., fine-tuning a pretrained LLM on search trajectories, unlocks its full search capabilities.
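
To make the setting concrete, here is a minimal sketch, in Python, of how one episode of unknown tree search with bandit feedback might proceed under this reading of the abstract; `select_leaf`, `expand`, and `reward` are hypothetical names standing in for the policy under study, the externally specified tree extension, and the feedback signal, respectively.

```python
import random

def search_episode(select_leaf, expand, reward, budget=50):
    """One episode of unknown tree search with bandit feedback (a sketch,
    assuming this reading of the setting): the policy observes only the
    leaves it has visited and the scalar rewards it received; the rest of
    the tree stays hidden."""
    leaves = [()]      # frontier of known leaves, encoded as index paths
    history = []       # (leaf, reward) pairs observed so far
    for _ in range(budget):
        leaf = select_leaf(leaves, history)  # the search policy under study
        r = reward(leaf)                     # bandit feedback: one scalar only
        history.append((leaf, r))
        children = expand(leaf)              # externally specified extension
        if children:                         # internal node: swap in children
            leaves.remove(leaf)
            leaves.extend(children)
        # terminal leaves stay in the frontier and can be re-queried
    return max(r for _, r in history)

# Toy instance: a binary tree of depth 6 whose reward is the fraction of
# right turns, probed by uniform leaf sampling (the naive baseline).
expand = lambda p: [p + (0,), p + (1,)] if len(p) < 6 else []
reward = lambda p: sum(p) / max(len(p), 1)
uniform = lambda leaves, history: random.choice(leaves)
print(search_episode(uniform, expand, reward, budget=50))
```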

Paper Structure

This paper contains 74 sections, 2 theorems, 48 equations, 20 figures, 4 tables, and 6 algorithms.

Key Result

Theorem 1

There exist 3-layer Transformers with embedding dimension $d = 10 + TB$ that exactly implement the uniform and greedy leaf sampling policies when the trajectory is encoded sequentially using Leaf-Based Tokenization.
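
To make the two target policies concrete, here is a hedged sketch of uniform and greedy leaf sampling as drop-in `select_leaf` functions for the `search_episode` sketch above. The greedy rule shown is one plausible reading (prefer leaves extending the highest-reward path observed so far), not necessarily the paper's exact definition.

```python
import random

def uniform(leaves, history):
    # Uniform leaf sampling: ignore all feedback and pick a known leaf
    # uniformly at random.
    return random.choice(leaves)

def greedy(leaves, history):
    # Greedy leaf sampling (one plausible reading, not the paper's exact
    # definition): prefer the leaf that extends the highest-reward path
    # observed so far, falling back to uniform before any feedback arrives.
    if not history:
        return random.choice(leaves)
    def score(leaf):
        # highest reward among observed ancestors (path prefixes) of leaf
        return max((r for p, r in history if leaf[:len(p)] == p),
                   default=float("-inf"))
    best = max(score(leaf) for leaf in leaves)
    return random.choice([leaf for leaf in leaves if score(leaf) == best])
```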

Figures (20)

  • Figure 1: Our perspective on effective problem solving as an iterative process of three phases. Given a prompt that describes a simple problem, e.g., the Game of 24, an LLM generates several possible steps, selects the next step, and finally evaluates its potential. Repeating this cycle constructs a tree-structured search space, where each branch represents a potential path of problem solving; a minimal sketch of this cycle follows the figure list. This example was generated by GPT-5 and paraphrased to clearly illustrate our definition of effective problem solving. Under this perspective, existing problem formulations and our problem formulation (highlighted in gray) are summarized in Table \ref{tab:comparisons}. Specifically, our formulation assumes that next-step selection is carried out by an LLM or a Transformer, while state expansion and state evaluation are externally given. Figure \ref{fig:llms_mab} shows the results on multi-reward tree search with binary trees of depth 6 and 8 different goal states; refer to Sections \ref{sec:empirical_analysis} and \ref{app:pretexperiment-setup} for the details of the metric and the experiment, respectively. Existing LLMs are inferior to some of the established algorithms, and Qwen3-8B Thinking performs even worse than uniform leaf sampling, a naïve strategy.
  • Figure 2: Two environments investigated in this work, where darker cells represent higher reward values and red cells denote cells that are impassable.
  • Figure 3: Behavior cloning results on the multi-reward tree search problem, where each binary tree of depth 6 has 8 different goals and the search step budget is 50.
  • Figure 4: Behavior cloning results on the multi-reward navigation problem, where each problem is of size 4 $\times$ 4 with a wall density of 0.4, and the search step budget is 50.
  • Figure 5: Comparison of the metric values obtained by reference algorithms and Transformers, using the results shown in Figures \ref{fig:mab_2_6_8} and \ref{fig:maze_4_4_04_04}. The $\ell^2$ distance between the two metric vectors is reported. The shortest and second-shortest distances in each row are marked 1 and 2, respectively.
  • ...and 15 more figures
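
The three-phase cycle described in Figure 1's caption can be sketched directly. Below is a minimal, hypothetical rendering in Python; `expand`, `select`, and `evaluate` are stand-in names, and, per the paper's formulation, only `select` would be played by an LLM or Transformer while the other two phases are externally given.

```python
def solve(prompt, expand, select, evaluate, n_iters=10):
    # Sketch of the cycle from Figure 1 (all helper names hypothetical):
    # phase 1 generates candidate next steps, phase 2 selects where to
    # extend, and phase 3 scores the resulting partial solutions.
    # Assumes `expand` always proposes at least one step.
    frontier = [(prompt, 0.0)]                    # (partial solution, value)
    for _ in range(n_iters):
        node = select(frontier)                   # phase 2: next-step selection
        frontier.remove(node)
        state, _value = node
        for step in expand(state):                # phase 1: state expansion
            child = state + "\n" + step
            frontier.append((child, evaluate(child)))  # phase 3: state evaluation
    return max(frontier, key=lambda sv: sv[1])[0]
```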

Theorems & Definitions (2)

  • Theorem 1: Leaf-Based Search Policies
  • Theorem 2: Path-Based Search Policies