
$\texttt{YC-Bench}$: Benchmarking AI Agents for Long-Term Planning and Consistent Execution

Muyu He, Adit Jain, Anand Kumar, Vincent Tu, Soumyadeep Bakshi, Sachin Patro, Nazneen Rajani

Abstract

As LLM agents tackle increasingly complex tasks, a critical question is whether they can maintain strategic coherence over long horizons: planning under uncertainty, learning from delayed feedback, and adapting when early mistakes compound. We introduce $\texttt{YC-Bench}$, a benchmark that evaluates these capabilities by tasking an agent with running a simulated startup over a one-year horizon spanning hundreds of turns. The agent must manage employees, select task contracts, and maintain profitability in a partially observable environment where adversarial clients and growing payroll create compounding consequences for poor decisions. We evaluate 12 models, both proprietary and open source, across 3 seeds each. Only three models consistently surpass the starting capital of \$200K, with Claude Opus 4.6 achieving the highest average final funds at \$1.27 M, followed by GLM-5 at \$1.21 M at 11$\times$ lower inference cost. Scratchpad usage, the sole mechanism for persisting information across context truncation, is the strongest predictor of success, and adversarial client detection is the primary failure mode, accounting for $47\%$ of bankruptcies. Our analysis reveals that frontier models still fail through distinct failure modes such as over-parallelization, demonstrating the capability gaps for long-horizon performance. $\texttt{YC-Bench}$ is open-source, reproducible, and configurable.

Paper Structure

This paper contains 29 sections, 10 figures, and 4 tables.

Figures (10)

  • Figure 1: Overview of YC-Bench. The agent interacts with the environment through CLI commands (blue) and receives structured observations (green). The environment tracks observable state (tasks, employees, finance, prestige, client trust) and one hidden element: adversarial clients whose work inflation must be inferred from repeated task failures.
  • Figure 2: Out of the $12$ models that we benchmark on YC-Bench, $5$ models are profitable and only $3$ turn a substantial profit ($5\times$ profit). The figure plots the funds across time averaged across three seeds for each model. Comprehensive results can be found in Appendix \ref{app:main_result_stats}.
  • Figure 3: We observe that better models build client trust over time by strategically selecting clients. Surprisingly, the smaller distilled model (Sonnet-4.6) does worse than its contemporary (Gemini-3-Flash), unlike on VB.
  • Figure 4: How models deal with adversarial clients, whose tasks offer appealing rewards at acceptance but require far more work than claimed (scope creep).
  • Figure 5: Beyond failing to identify adversarial clients, there are two other primary task failure modes, most notably wrong employee assignment. We observe that Kimi-K.5 is the most cost-efficient model, and the second-highest-ranking model, GLM-5, is substantially ($10\times$) more cost-efficient than Opus-4.6.
  • ...and 5 more figures