SEED: Accelerating Reasoning Tree Construction via Scheduled Speculative Decoding
Zhenglin Wang, Jialong Wu, Yilong Lai, Congzhi Zhang, Deyu Zhou
TL;DR
The paper addresses the inference latency of reasoning-tree construction in large language models. It introduces SeeD, an inference framework that combines scheduled speculative execution with a rounds-scheduled first-come-first-served (FCFS) policy, running multiple draft models in parallel while a single target model verifies their outputs, thereby preserving the target model's output distribution. The authors present a two-phase approach (Parallel Drafting and Sequential Verification), an OS-inspired scheduling principle, and an algorithm that coordinates multiple drafts and verifications. Empirical results on GSM8K, Creative Writing, and Blocksworld show SeeD achieving speedups of up to about $1.5\times$ and roughly $30$ additional tokens per second, with effective GPU utilization and memory management, all without additional training.
Abstract
Large Language Models (LLMs) demonstrate remarkable emergent abilities across various tasks, yet fall short on complex reasoning and planning tasks. Tree-search-based reasoning methods address this by surpassing the capabilities of chain-of-thought prompting and encouraging exploration of intermediate steps. However, such methods introduce significant inference latency due to the systematic exploration and evaluation of multiple thought paths. This paper introduces SeeD, a novel and efficient inference framework that jointly optimizes runtime speed and GPU memory management. By employing scheduled speculative execution, SeeD efficiently handles multiple iterations of thought generation and state evaluation, leveraging a rounds-scheduled strategy to manage draft-model dispatching. Extensive experimental evaluations on three reasoning datasets demonstrate the superior speedup of SeeD, providing a viable path toward batched inference in training-free speculative decoding.
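
To make the scheduling idea concrete, below is a minimal Python sketch of one round of SeeD-style scheduled speculative execution: parallel drafting followed by sequential first-come-first-served verification. All names here (`draft_step`, `verify`, `sample_next`, `accept_or_resample`, `seed_round`) are hypothetical placeholders rather than the authors' API; the sketch only illustrates the two-phase structure under the assumption of standard speculative-decoding acceptance.

```python
# A minimal sketch of one round of scheduled speculative execution.
# Model objects and their methods are hypothetical stand-ins, not the
# authors' implementation.
from collections import deque
from concurrent.futures import ThreadPoolExecutor, as_completed

def draft_step(draft_model, prefix, gamma=4):
    """Hypothetical helper: one draft model speculates `gamma` tokens
    beyond `prefix` (a list of token ids) for a single tree branch."""
    tokens = []
    for _ in range(gamma):
        tokens.append(draft_model.sample_next(prefix + tokens))
    return tokens

def verify(target_model, prefix, draft_tokens):
    """Hypothetical helper: standard speculative-decoding verification.
    The target model accepts a prefix of the draft and resamples the
    first rejected position, which preserves its output distribution."""
    return target_model.accept_or_resample(prefix, draft_tokens)

def seed_round(branches, draft_models, target_model):
    """One scheduling round: Parallel Drafting, then Sequential
    Verification by the single target model in FCFS order."""
    queue = deque()
    # Phase 1 (Parallel Drafting): each draft model speculates on its
    # assigned tree branch concurrently.
    with ThreadPoolExecutor(max_workers=len(draft_models)) as pool:
        futures = {
            pool.submit(draft_step, dm, br): br
            for dm, br in zip(draft_models, branches)
        }
        # Enqueue drafts in completion order: first come, first served.
        for fut in as_completed(futures):
            queue.append((futures[fut], fut.result()))
    # Phase 2 (Sequential Verification): the single large target model
    # verifies the queued drafts one by one.
    verified = []
    while queue:
        prefix, draft = queue.popleft()
        verified.append(prefix + verify(target_model, prefix, draft))
    return verified
```

Under these assumptions, only the small draft models run concurrently while one copy of the large target model verifies sequentially, which is consistent with the GPU memory-management benefit the abstract describes.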
