
CREBench: Evaluating Large Language Models in Cryptographic Binary Reverse Engineering

Baicheng Chen, Yu Wang, Ziheng Zhou, Xiangru Liu, Juanru Li, Yilei Chen, Tianxing He

Abstract

Reverse engineering (RE) is central to software security, particularly for cryptographic programs, which handle sensitive data and are highly prone to vulnerabilities. It supports critical tasks such as vulnerability discovery and malware analysis. Despite its importance, RE remains labor-intensive and requires substantial expertise, making large language models (LLMs) a potential solution for automating the process. However, their capabilities for RE remain systematically underexplored. To address this gap, we study the cryptographic binary RE capabilities of LLMs and introduce \textbf{CREBench}, a benchmark comprising 432 challenges built from 48 standard cryptographic algorithms, 3 insecure crypto key usage scenarios, and 3 difficulty levels. Each challenge is posed as a Capture-the-Flag (CTF)-style RE task, requiring the model to analyze the underlying cryptographic logic and recover the correct input. We design an evaluation framework spanning four sub-tasks, from algorithm identification to correct flag recovery. We evaluate eight frontier LLMs on CREBench. GPT-5.4, the best-performing model, achieves 64.03 out of 100 and recovers the flag in 59\% of challenges. We also establish a strong human expert baseline of 92.19 points, showing that humans maintain an advantage in cryptographic RE tasks. Our code and dataset are available at https://github.com/wangyu-ovo/CREBench.

Paper Structure

This paper contains 64 sections, 2 equations, 11 figures, 7 tables.

Figures (11)

  • Figure 1: Overview of CREBench, which contains 432 challenges based on 48 standard encryption algorithms, three types of insecure key usage, and three levels of reverse-engineering difficulty. We also design an evaluation framework covering four sub-tasks, enabling LLMs to operate as agents that solve these challenges in a sandboxed environment.
  • Figure 2: Comparison of LLMs' pass@3 performance on CREBench. Stacked bars show sub-task scores; models are ordered left to right by total score.
  • Figure 3: A successful case: GPT-5.4 solves the AES-128-CBC challenge in 9 rounds. The difficulty is O0, and the key usage strategy is hardcoded. More details are explained in Appendix \ref{sec:detailed_breakdown_of_figure_3}.
  • Figure 4: Average pass@3 performance across models under different difficulty settings and Phi correlation among four sub-tasks. Performance drops steadily as difficulty increases from O0 to O3 and further to Const-XOR.
  • Figure 5: Pass@3 perfect rate across eight evaluated models on CREBench. A challenge is counted as perfect only if the model obtains the full score of 100/100, i.e., successfully completes all four tasks within three attempts. GPT-5.4 achieves the highest perfect rate at 41.0% (177/432), followed by GPT-5.2 at 30.1% and Claude-Sonnet-4.6 at 28.9%, while the remaining models achieve substantially lower rates.
  • ...and 6 more figures
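The "perfect rate" metric in Figure 5 counts a challenge only if the model reaches the full score of 100/100 (all four sub-tasks solved) within three attempts. A minimal sketch of that computation is shown below; the function name and the toy data are illustrative, not taken from the CREBench codebase.

```python
def perfect_rate(attempt_scores: list[list[int]]) -> float:
    """Fraction of challenges solved perfectly under pass@3.

    attempt_scores[i] holds up to three per-attempt scores (0-100)
    for challenge i; a challenge is perfect if any attempt hits 100.
    """
    perfect = sum(1 for attempts in attempt_scores if max(attempts) == 100)
    return perfect / len(attempt_scores)

# Toy example with three hypothetical challenges:
scores = [
    [100, 40, 0],   # perfect on the first attempt
    [60, 80, 95],   # never reaches 100 -> not perfect
    [0, 100, 100],  # perfect on a later attempt
]
print(f"{perfect_rate(scores):.1%}")  # -> 66.7%
```

Under this definition, GPT-5.4's reported rate of 41.0% corresponds to 177 of the 432 challenges being solved perfectly.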