Cog-DRIFT: Exploration on Adaptively Reformulated Instances Enables Learning from Hard Reasoning Problems

Justin Chih-Yao Chen, Archiki Prasad, Zaid Khan, Joykirat Singh, Runchu Tian, Elias Stengel-Eskin, Mohit Bansal

Abstract

Reinforcement learning from verifiable rewards (RLVR) has improved the reasoning abilities of LLMs, yet a fundamental limitation remains: models cannot learn from problems that are too difficult to solve under their current policy, as these yield no meaningful reward signal. We propose a simple yet effective solution based on task reformulation. We transform challenging open-ended problems into cognitively simpler variants -- such as multiple-choice and cloze formats -- that preserve the original answer while reducing the effective search space and providing denser learning signals. These reformulations span a spectrum from discriminative to generative tasks, which we exploit to bootstrap learning: models first learn from structured, easier formats, and this knowledge transfers back to improve performance on the original open-ended problems. Building on this insight, we introduce Cog-DRIFT, a framework that constructs reformulated variants and organizes them into an adaptive curriculum based on difficulty. Training progresses from easier to harder formats, enabling the model to learn from problems that previously yielded zero signal under standard RL post-training. Cog-DRIFT not only improves on the originally unsolvable hard problems (absolute +10.11% for Qwen and +8.64% for Llama) but also generalizes well to other held-out datasets. Across 2 models and 6 reasoning benchmarks, our method consistently outperforms standard GRPO and strong guided-exploration baselines. On average, Cog-DRIFT shows +4.72% (Qwen) and +3.23% (Llama) improvements over the second-best baseline. We further show that Cog-DRIFT improves pass@k at test time, and the curriculum improves sample efficiency. Overall, our results highlight task reformulation and curriculum learning as an effective paradigm for overcoming the exploration barrier in LLM post-training.
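
To make the reformulation concrete, the sketch below shows one way an open-ended problem with a known gold answer could be recast into answer-preserving MCQ and cloze variants. This is a minimal illustration, not the paper's implementation: the helper names (make_mcq, make_cloze) and the distractor interface are our own assumptions.

```python
import random
import string

def make_mcq(question: str, gold: str, distractors: list[str], seed: int = 0) -> dict:
    """Recast an open-ended question as a multiple-choice question. The gold
    answer is preserved but hidden among distractors, shrinking the effective
    search space from free-form generation to selection."""
    rng = random.Random(seed)
    options = distractors + [gold]
    rng.shuffle(options)
    letters = string.ascii_uppercase[: len(options)]
    body = "\n".join(f"({l}) {opt}" for l, opt in zip(letters, options))
    return {"prompt": f"{question}\n{body}", "answer": letters[options.index(gold)]}

def make_cloze(solution: str, gold: str) -> dict:
    """Recast a worked solution as a cloze task: blank out the gold answer so
    the model fills in the final step instead of solving from scratch."""
    return {"prompt": solution.replace(gold, "____", 1), "answer": gold}
```

For example, make_mcq("What is 7 * 8?", "56", ["48", "54", "63"]) yields a four-option question whose verifiable answer is the letter assigned to the shuffled gold option.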

Figures (4)

  • Figure 1: (A) If a problem is too hard (e.g., pass@64=0), the model cannot learn from it. Reformulating it as an MCQ or a cloze task effectively reduces both the cognitive load and the difficulty for the model. (B) We find that learning from these reduced-cognitive-load tasks transfers back to the original hard questions (i.e., training on MCQ/cloze variants also improves performance on the open-ended originals). (C) These improvements on the original hard questions also generalize to held-out datasets. Results in (B) and (C) are based on Llama3.2-3B-Instruct.
  • Figure 2: (a) Reformulating open-ended math problems into alternative formats consistently increases accuracy by easing structural constraints. (b) After Rejection Fine-Tuning (RFT; a minimal sketch of the recipe follows this list), these performance gains transfer back to the original open-ended problems used for training. (c) On the unseen MATH 500 benchmark, the model trained with an easier format incurs only a modest drop.
  • Figure 3: When training on hard open-ended problems and evaluating on AIME24, AIME25, and GPQA with Qwen, Cog-DRIFT generally achieves higher pass@k than both the base Qwen model and the GRPO-trained model, particularly as k increases (a reference pass@k estimator is sketched after this list).
  • Figure 4: Left: The instance-level curriculum adaptively reallocates samples from easier (MCQ) to harder (OEQ) reformulations based on per-instance accuracy, improving sample efficiency and sustaining performance gains (an illustrative reallocation rule is sketched after this list). Right: A static uniform mixture (always 25% per format) plateaus. Test accuracy is reported on open-ended questions from OmniMATH-Hard.
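
Figure 2(b) uses Rejection Fine-Tuning (RFT). Below is a minimal sketch of the standard RFT recipe, i.e., sample several completions per problem, verify them, and fine-tune only on the correct ones; the sample_fn and is_correct interfaces here are our own assumptions, not the paper's API.

```python
from typing import Callable

def rft_dataset(
    problems: list[dict],                        # each: {"prompt": str, "answer": str}
    sample_fn: Callable[[str, int], list[str]],  # (prompt, n) -> n completions
    is_correct: Callable[[str, str], bool],      # (completion, gold) -> verdict
    n_samples: int = 8,
) -> list[tuple[str, str]]:
    """Collect (prompt, completion) pairs for supervised fine-tuning by
    rejection sampling: keep only completions that verify against the gold."""
    kept = []
    for p in problems:
        for completion in sample_fn(p["prompt"], n_samples):
            if is_correct(completion, p["answer"]):
                kept.append((p["prompt"], completion))
    return kept
```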
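
Figure 3's metric, pass@k, is typically computed with the unbiased combinatorial estimator over n sampled completions of which c are correct; we assume the paper follows this standard definition.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k completions,
    drawn without replacement from n samples (c of them correct), is correct."""
    if n - c < k:  # fewer than k incorrect samples: every k-subset contains a hit
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with n=64 samples and c=1 correct, pass@1 = 1/64 while pass@64 = 1.0, which is why larger k surfaces rare successes.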
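
The left panel of Figure 4 describes the instance-level curriculum: each instance migrates from easier to harder reformulations as it is mastered. The paper's exact reallocation rule is not reproduced here; the sketch below is an illustrative thresholded version, where the format ladder and the promote/demote thresholds are assumptions.

```python
# Easier -> harder reformulations. The static baseline in Figure 4 uses four
# formats (25% each); only the three named in the captions are listed here.
FORMATS = ["mcq", "cloze", "oeq"]

def next_format(current: str, rolling_acc: float,
                promote_at: float = 0.75, demote_at: float = 0.25) -> str:
    """Illustrative per-instance curriculum step (thresholds are assumptions):
    promote an instance to a harder reformulation once it is solved reliably,
    demote it when reward becomes too sparse to provide a learning signal."""
    i = FORMATS.index(current)
    if rolling_acc >= promote_at and i + 1 < len(FORMATS):
        return FORMATS[i + 1]
    if rolling_acc <= demote_at and i > 0:
        return FORMATS[i - 1]
    return current
```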