Beyond Semantic Manipulation: Token-Space Attacks on Reward Models

Yuheng Zhang, Mingyue Huo, Minghao Zhu, Mengxue Zhang, Nan Jiang

Abstract

Reward models (RMs) are widely used as optimization targets in reinforcement learning from human feedback (RLHF), yet they remain vulnerable to reward hacking. Existing attacks mainly operate within the semantic space, constructing human-readable adversarial outputs that exploit RM biases. In this work, we introduce a fundamentally different paradigm: Token Mapping Perturbation Attack (TOMPA), a framework that performs adversarial optimization directly in token space. By bypassing the standard decode-re-tokenize interface between the policy and the reward model, TOMPA enables the attack policy to optimize over raw token sequences rather than coherent natural language. Using only black-box scalar feedback, TOMPA automatically discovers non-linguistic token patterns that elicit extremely high rewards across multiple state-of-the-art RMs. Specifically, when targeting Skywork-Reward-V2-Llama-3.1-8B, TOMPA nearly doubles the reward of GPT-5 reference answers and outperforms them on 98.0% of prompts. Despite these high scores, the generated outputs degenerate into nonsensical text, revealing that RMs can be systematically exploited beyond the semantic regime and exposing a critical vulnerability in current RLHF pipelines.
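
To make the bypassed interface concrete, the sketch below illustrates the core idea rather than the authors' implementation: the policy's token IDs are never decoded to text and re-tokenized for the reward model; instead, a token-mapping perturbation is applied to the raw IDs, which are scored directly for a scalar reward. The Hugging Face repo id, the perturb and score_tokens helpers, and the random candidate sequence are all illustrative assumptions, as is the premise that the RM accepts the policy's token IDs without re-tokenization.

```python
# Minimal sketch (not the authors' code): score raw token IDs with a
# sequence-classification reward model, skipping the decode/re-tokenize step.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

rm_name = "Skywork/Skywork-Reward-V2-Llama-3.1-8B"  # target RM; repo id assumed
tokenizer = AutoTokenizer.from_pretrained(rm_name)
reward_model = AutoModelForSequenceClassification.from_pretrained(
    rm_name, torch_dtype=torch.bfloat16, device_map="auto"
)

def score_tokens(prompt_ids: torch.Tensor, response_ids: torch.Tensor) -> float:
    """Feed token IDs straight into the RM and return its scalar reward."""
    input_ids = torch.cat([prompt_ids, response_ids]).unsqueeze(0).to(reward_model.device)
    with torch.no_grad():
        return reward_model(input_ids=input_ids).logits[0, 0].item()

def perturb(response_ids: torch.Tensor, token_mapping: dict[int, int]) -> torch.Tensor:
    """Hypothetical token-mapping perturbation: remap selected token IDs."""
    return torch.tensor([token_mapping.get(int(t), int(t)) for t in response_ids])

# The attack policy only ever observes the scalar returned by score_tokens,
# so any token sequence that scores highly is reinforced, coherent or not.
prompt_ids = torch.tensor(tokenizer.encode("How do I sort a list in Python?"))
candidate = perturb(torch.randint(0, tokenizer.vocab_size, (64,)), token_mapping={})
print(score_tokens(prompt_ids, candidate))
```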

Paper Structure

This paper contains 19 sections, 4 equations, 4 figures, 2 tables, and 1 algorithm.

Figures (4)

  • Figure 1: The TOMPA attack pipeline. The attack bypasses the standard decode–re-tokenize interface by applying a perturbation mapping, directly feeding transformed token sequences into the reward model. Trained via reinforcement learning using only scalar reward feedback, the policy automatically discovers adversarial token patterns that receive anomalously high rewards. Despite outperforming GPT-5 reference answers, the resulting outputs collapse into non-linguistic sequences — revealing a critical vulnerability in reward models that lies beyond the semantic regime.
  • Figure 2: Qualitative examples of generated sequences under our attack, presented exactly as decoded by the respective reward model's tokenizer. Despite receiving anomalously high rewards of +21.38 for the Qwen3-8B RM and +36.75 for the Llama-3.1-8B RM, which far exceed their respective GPT-5 gold scores of +9.44 and +15.06, the outputs degenerate into cross-lingual gibberish, code snippets, and reserved tokenizer artifacts. This demonstrates that the reward models assign extremely high scores to token sequences completely devoid of semantic meaning.
  • Figure 3: Impact of response length on reward scores. Responses are truncated at various intervals and evaluated with the target reward models (see the sketch after this list). Although the adversarial responses consist of highly repetitive patterns, the reward does not scale linearly with length: both models assign low or negative scores to shorter truncations, then exhibit anomalous reward surges as the sequences approach the maximum length of 2048 tokens.
  • Figure 4: Training curves of attack policy optimization under token mapping perturbation. Mean and maximum rewards are plotted over training steps. The attack policy initially receives strongly negative rewards, but progressively identifies high-reward patterns and eventually surpasses the GPT-5 reference answers (the horizontal dashed line).
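
For reference, the length ablation summarized in Figure 3 amounts to a short scoring loop. The sketch below reuses the score_tokens helper and prompt_ids from the earlier sketch and assumes an adversarial response tensor adv_response_ids; the 128-token truncation interval is chosen for illustration, not taken from the paper.

```python
# Illustrative sketch of the Figure 3 length ablation: score the adversarial
# response truncated at increasing cut points up to the 2048-token maximum.
truncation_points = range(128, 2048 + 1, 128)  # interval chosen for illustration
length_vs_reward = [
    (cut, score_tokens(prompt_ids, adv_response_ids[:cut]))
    for cut in truncation_points
]
for cut, reward in length_vs_reward:
    print(f"truncated at {cut:4d} tokens -> reward {reward:+.2f}")
```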