SEED: Accelerating Reasoning Tree Construction via Scheduled Speculative Decoding

Zhenglin Wang, Jialong Wu, Yilong Lai, Congzhi Zhang, Deyu Zhou

TL;DR

The paper addresses the latency of reasoning-tree construction in large language models by accelerating the tree-building phase of reasoning tasks. It introduces SeeD, an inference framework that fuses Speculative Scheduled Execution with a rounds-scheduled FCFS policy to run multiple draft generators in parallel while a single target model verifies outputs, preserving the original output distribution. The authors present a two-phase approach (Parallel Drafting and Sequential Verification), an OS-inspired technical principle, and an algorithm to coordinate multiple drafts and verifications. Empirical results on GSM8K, Creative Writing, and Blocksworld show SeeD achieving up to about $1.5\times$ speedups and around $30$ additional tokens per second, with effective GPU utilization and memory management, all without additional training.
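The claim that verification "preserves the original output distribution" rests on the standard speculative-sampling accept/reject rule (accept a draft token $x$ with probability $\min(1, p(x)/q(x))$; on rejection, resample from the renormalized residual $\max(0, p - q)$). A minimal sketch of that rule, not taken from the paper's code (the names `verify_draft`, `q_probs`, `p_probs` are placeholders):

```python
import random

def verify_draft(draft_tokens, q_probs, p_probs, vocab):
    """Accept draft token x with probability min(1, p(x)/q(x)), where
    q is the draft model's distribution and p the target model's.
    On the first rejection, resample once from the residual
    max(0, p - q), renormalized, and stop. This rule makes the
    verified output distributed exactly as the target model's."""
    accepted = []
    for tok, q, p in zip(draft_tokens, q_probs, p_probs):
        if random.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)  # token survives verification
        else:
            # Rejection: sample from the renormalized residual p - q.
            residual = {t: max(0.0, p[t] - q[t]) for t in vocab}
            z = sum(residual.values()) or 1.0
            r, acc = random.random() * z, 0.0
            for t in vocab:
                acc += residual[t]
                if acc >= r:
                    accepted.append(t)
                    break
            break  # stop at the first rejected draft token
    return accepted
```

With degenerate distributions the behavior is deterministic: if the target puts all mass on the drafted token, every draft token is accepted; if it puts none, the draft is rejected and the residual sample falls on the target's token.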

Abstract

Large Language Models (LLMs) demonstrate remarkable emergent abilities across various tasks, yet fall short of complex reasoning and planning tasks. The tree-search-based reasoning methods address this by surpassing the capabilities of chain-of-thought prompting, encouraging exploration of intermediate steps. However, such methods introduce significant inference latency due to the systematic exploration and evaluation of multiple thought paths. This paper introduces SeeD, a novel and efficient inference framework to optimize runtime speed and GPU memory management concurrently. By employing a scheduled speculative execution, SeeD efficiently handles multiple iterations for the thought generation and the state evaluation, leveraging a rounds-scheduled strategy to manage draft model dispatching. Extensive experimental evaluations on three reasoning datasets demonstrate superior speedup performance of SeeD, providing a viable path for batched inference in training-free speculative decoding.
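The rounds-scheduled dispatch described above can be pictured as a FIFO service loop: each reasoning-tree branch has its own draft worker, finished drafts queue up, and the single target model serves them first-come-first-served, one per round. A toy sketch under those assumptions (not the paper's implementation; `draft_fn` and `verify_fn` stand in for the draft and target models):

```python
from collections import deque

def rounds_scheduled_fcfs(branches, num_rounds, draft_fn, verify_fn):
    """Toy rounds-scheduled FCFS loop: branch ids wait in a FIFO
    queue; each round the single target model serves the oldest
    request, verifies its draft, appends the verified tokens, and
    re-enqueues the branch for its next drafting round."""
    queue = deque(range(len(branches)))   # branch ids awaiting service
    for _ in range(num_rounds):
        if not queue:
            break
        bid = queue.popleft()             # FCFS: oldest request first
        draft = draft_fn(branches[bid])   # drafting phase for this branch
        branches[bid] += verify_fn(draft) # verification appends accepted text
        queue.append(bid)                 # back of the line for next round
    return branches
```

In the real system the drafting of the other branches overlaps with the target model's verification, which is where the speedup over serial speculative decoding comes from; the sketch only shows the FCFS ordering.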

Paper Structure

This paper contains 42 sections, 8 figures, 5 tables, 2 algorithms.

Figures (8)

  • Figure 1: Illustration of four LLM execution strategies for generating 3 sequences in Reasoning Tree construction: (a) Serial, where executions are operated one after another, simplifying resource management but increasing overall execution time; (b) Serial SD, where speculative decoding is used for each execution; (c) Scheduled SD, which involves several parallel draft models and one target model; (d) Parallel, where multiple executions run concurrently, reducing completion time but increasing GPU HBM usage. Symbols in the figure denote the large target model, the smaller draft model, and a unit length of execution time, respectively.
  • Figure 2: Two main components in reasoning tree construction, which are Thought Generator and State Evaluator, respectively.
  • Figure 3: (a) The scenario where the target model manages the verification of draft models at the beginning; (b) Overall scheduling diagram for one target model and three draft models. Symbols in the figure denote Draft Models 1, 2, and 3; the drafting execution time of each corresponding draft model; the Target Model; the execution time of the verification phase; and the resampling time in cases of rejection.
  • Figure 4: Analogy between the Operating System scheduler and the proposed SeeD.
  • Figure 5: The variation of speedup performance across three datasets at different acceptance rates $\alpha$.
  • ...and 3 more figures