AI Cosplaying as Astrophysicists: A Controlled Synthetic-Agent Study of AI-Assisted Astrophysical Research Workflows

Chun Huang

Abstract

Large Language Models (LLMs) are now widely used in astrophysics, but do they actually make our lives easier, or do they merely invent new physics with enough confidence to hide a minus sign? In a specialized field where checking fluent hallucinations is itself labor-intensive, AI assistance can demand as much work as the task it claims to simplify. To evaluate where AI genuinely improves scientific workflows, we bypassed human trials and instead forced AI agents to cosplay as astrophysicists. We simulated 144 synthetic researchers, varying in career stage, AI awareness, and willingness to verify outputs, across 2,592 daily astrophysics research assignments. Comparing solo work against four styles of AI assistance produced 12,960 scored episodes. No assisted policy universally outperformed unassisted work in the primary Qwen production run. Instead, performance depends strongly on the task, the style of AI use, and the identity of the actor. While cautious assistance helps on creative, extractive, and critique-oriented tasks, it can fail catastrophically on derivation-heavy physics. A full actor-swap DeepSeek rerun changes that picture materially: verification-heavy use becomes the strongest assisted policy, two assisted modes enter the higher-utility/lower-risk quadrant, and the derivation-heavy fragility that dominates the Qwen production run largely disappears. In its current form, AI is useful only conditionally: its value is uneven, task-specific, and shaped jointly by workflow, usage policy, and which LLM is in use.

Paper Structure

This paper contains 3 sections, 2 equations, 1 figure.

Figures (1)

  • Figure 1: Overall workflow of the synthetic-agent experiment. A balanced population of AI agent astrophysicists is combined with a broad astrophysics task reservoir and a precomputed assignment table before any model calls are made. Each assignment is then executed under one solo condition and four assisted usage styles, scored under a common judging framework, and aggregated into matched assisted-versus-solo contrasts for the core outcome summary, heterogeneity analyses, usage-style frontier, and cross-model validation.
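The episode accounting described in the abstract can be sketched as a short calculation. This is an illustrative reconstruction, not code from the paper: the counts (144 researchers, 2,592 assignments, one solo plus four assisted conditions, 12,960 episodes) come from the abstract, while the per-researcher assignment count and the condition labels are assumptions inferred from those totals.

```python
# Hedged sketch of the experimental design's episode accounting.
# All totals are taken from the abstract; the per-researcher assignment
# count (18) is inferred as 2,592 / 144, and the condition names are
# placeholders -- the paper's four assisted usage styles are not named here.
N_RESEARCHERS = 144
N_ASSIGNMENTS = 2_592
CONDITIONS = ["solo",
              "assisted_style_1", "assisted_style_2",
              "assisted_style_3", "assisted_style_4"]

# Assignments are precomputed into a table before any model calls,
# then each assignment is executed once under every condition.
per_researcher = N_ASSIGNMENTS // N_RESEARCHERS   # assignments per synthetic researcher
episodes = N_ASSIGNMENTS * len(CONDITIONS)        # total scored episodes

print(per_researcher)  # 18
print(episodes)        # 12960
```

Each assisted episode can then be paired with its matched solo episode on the same assignment, which is what the abstract's "matched assisted-versus-solo contrasts" refers to.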