SPICE: Self-Play In Corpus Environments Improves Reasoning
Bo Liu, Chuanyang Jin, Seungone Kim, Weizhe Yuan, Wenting Zhao, Ilia Kulikov, Xian Li, Sainbayar Sukhbaatar, Jack Lanchantin, Jason Weston
TL;DR
SPICE introduces a corpus-grounded self-play framework in which a single model alternates as Challenger and Reasoner to generate and solve document-grounded tasks. The external document corpus provides verifiable signals, preventing hallucination and enabling an automatic curriculum that challenges the Reasoner at the frontier of its capability. Empirical results show consistent improvements on mathematical and general reasoning benchmarks across multiple base models, outperforming ungrounded self-play baselines. The approach represents a shift toward sustained, environment-driven self-improvement for large language models with broad transfer across domains.
Abstract
Self-improving systems require environmental interaction for continuous adaptation. We introduce SPICE (Self-Play In Corpus Environments), a reinforcement learning framework where a single model acts in two roles: a Challenger that mines documents from a large corpus to generate diverse reasoning tasks, and a Reasoner that solves them. Through adversarial dynamics, the Challenger creates an automatic curriculum at the frontier of the Reasoner's capability, while corpus grounding provides the rich, near-inexhaustible external signal necessary for sustained improvement. Unlike existing ungrounded self-play methods that offer more limited benefits, SPICE achieves consistent gains across mathematical (+8.9%) and general reasoning (+9.8%) benchmarks on multiple model families. Our analysis reveals how document grounding is a key ingredient in SPICE to continuously generate its own increasingly challenging goals and achieve them, enabling sustained self-improvement.
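The Challenger/Reasoner loop described in the abstract can be sketched in miniature. The code below is an illustrative toy, not the paper's implementation: the function names (`challenger_generate`, `reasoner_solve`, `spice_step`), the fill-in-the-blank task, and the difficulty counter are all assumptions made for exposition. The real system uses an LLM in both roles and reinforcement learning updates; here the point is only the structure — a task is mined from a corpus document, the document supplies a verifiable answer, and the curriculum adapts to the Reasoner's success rate.

```python
import random

def challenger_generate(document, difficulty):
    """Toy Challenger: derive a verifiable task from a corpus document.
    The 'task' here is recovering a word hidden from the document; the
    corpus thus supplies the ground-truth answer, as in SPICE's grounding."""
    words = document.split()
    idx = min(difficulty, len(words) - 1)
    answer = words[idx]
    question = " ".join("___" if i == idx else w for i, w in enumerate(words))
    return question, answer

def reasoner_solve(question, document):
    """Toy Reasoner: fill the blank by aligning the question with the source.
    A real Reasoner would be the same LLM reasoning over the task."""
    for q_word, d_word in zip(question.split(), document.split()):
        if q_word == "___":
            return d_word
    return ""

def spice_step(corpus, difficulty):
    """One self-play step: mine a task, attempt it, score it verifiably,
    and adjust difficulty to stay at the frontier of capability."""
    doc = random.choice(corpus)
    question, answer = challenger_generate(doc, difficulty)
    prediction = reasoner_solve(question, doc)
    reward = 1.0 if prediction == answer else 0.0
    # Automatic curriculum: raise difficulty on success, lower it on failure.
    difficulty = difficulty + 1 if reward == 1.0 else max(0, difficulty - 1)
    return reward, difficulty
```

In the actual method both roles are played by a single model trained with RL, and the reward shaping keeps generated tasks neither trivially easy nor unsolvable; this sketch only mirrors that control loop.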
