MatClaw: An Autonomous Code-First LLM Agent for End-to-End Materials Exploration

Chenmu Zhang, Boris I. Yakobson

Abstract

Existing LLM agents for computational materials science are constrained by pipeline-bounded architectures tied to specific simulation codes and by dependence on manually written tool functions that grow with task scope. We present MatClaw, a code-first agent that writes and executes Python directly, composing any installed domain library to orchestrate multi-code workflows on remote HPC clusters without predefined tool functions. To sustain coherent execution across multi-day workflows, MatClaw uses a four-layer memory architecture that prevents progressive context loss, and retrieval-augmented generation over domain source code that raises per-step API-call accuracy to ${\sim}$99 %. Three end-to-end demonstrations on ferroelectric CuInP2S6 (machine-learning force field training via active learning, Curie temperature prediction, and heuristic parameter-space search) reveal that the agent handles code generation reliably but struggles with tacit domain knowledge. The missing knowledge, such as appropriate simulation timescales, equilibration protocols, and sampling strategies, is the kind that researchers accumulate through experience but rarely formalize. Two lightweight interventions, literature self-learning and expert-specified constraints, bridge these gaps, defining a guided autonomy model in which the researcher provides high-level domain knowledge while the agent handles workflow execution. Our results demonstrate that the gap between guided and fully autonomous computational materials research is narrower than ever before: LLMs already handle code generation and scientific interpretation reliably, and the rapid improvement in their capabilities will accelerate materials discovery beyond what manual workflows can achieve. All code and benchmarks are open-source.

Paper Structure

This paper contains 27 sections, 7 figures, and 6 tables.

Figures (7)

  • Figure 1: MatClaw architecture. The researcher provides a task description in natural language. The LLM-driven agent generates Python code that composes domain libraries (pymatgen, atomate2, jobflow, etc.), which in turn submit jobs to remote HPC backends (VASP, DeePMD-kit, LAMMPS, etc.) and return computational results. The agent does not directly interact with the backends. File-based long-term memory and a database of computational results provide persistent state across steps and sessions.
  • Figure 2: Ferroelectric order parameter $Q(T) = \langle |\eta(t)| \rangle$ of monolayer CIPS from DeePMD MD, generated autonomously by the MatClaw agent. Inset: side view of the CuInP2S6 monolayer structure. Open squares show the initial 60 ps sweep (last 30 ps averaged); filled circles show the final data after extending near-transition temperatures to 100 ps. The dashed line marks the estimated $T_\mathrm{c} = 261$ K. Error bars are block-averaged standard errors. The 6$\times$6$\times$1 supercell (360 atoms) was used for all simulations.
  • Figure 3: Agent-driven heuristic search through $(E, T)$ parameter space. Each point represents one E-field MD simulation on a 1$\times$25$\times$1 CIPS supercell (500 atoms). Color indicates the domino metric (slope of $\langle |{\Delta}t(d)| \rangle$ vs. site separation $d$). Gray crosses mark conditions where fewer than 30% of Cu sites flipped. The blue-circled point ($E_z = -0.16$ V/Å, $T = 50$ K, slope = 0.32 ps/site) is the best condition found. The dotted line marks the revised $T_\mathrm{c} \approx 261$ K from Task 2. This figure was generated autonomously by the MatClaw agent.
  • Figure 4: Domain wall propagation at the optimal condition ($E_z = -0.16$ V/Å, $T = 50$ K). Left: space-time heatmap of Cu displacement from the host midplane, with Cu sites sorted by $b$-axis position. The diagonal pattern indicates sequential flipping propagating along the chain. Stars mark the first-flip time for each site. Right: first-flip time vs. site index, showing the approximately linear relationship that defines domain wall propagation (slope = 0.32 ps/site). Both panels were generated autonomously by the MatClaw agent.
  • Figure 5: Chunking method comparison on pymatgen code QA (300 questions, Gemini 3.0 Flash, BM25 retrieval). Code-chunk achieves the highest accuracy (97.0%) at both chunk sizes, outperforming fixed-width and cAST by 1--3 percentage points.
  • ...and 2 more figures
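The error bars in Figure 2 are described as block-averaged standard errors of the order parameter $Q(T) = \langle |\eta(t)| \rangle$. The paper does not give the exact estimator used by the agent; the following is a minimal sketch of one standard way to compute such a quantity for a correlated MD time series. The function names (`block_standard_error`, `order_parameter`) and the choice of five blocks are illustrative assumptions, not part of MatClaw's implementation.

```python
import numpy as np

def block_standard_error(x, n_blocks=5):
    """Block-averaged standard error of the mean of a time series.

    Splits the series into n_blocks contiguous blocks and estimates the
    standard error from the scatter of the block means. For correlated MD
    data this mitigates the underestimation bias of the naive
    std / sqrt(N) estimate. (Illustrative sketch, not MatClaw's code.)
    """
    x = np.asarray(x, dtype=float)
    usable = (len(x) // n_blocks) * n_blocks  # truncate to equal blocks
    block_means = x[:usable].reshape(n_blocks, -1).mean(axis=1)
    return block_means.std(ddof=1) / np.sqrt(n_blocks)

def order_parameter(eta_t, n_blocks=5):
    """Q = <|eta(t)|> with its block-averaged standard error."""
    a = np.abs(np.asarray(eta_t, dtype=float))
    return a.mean(), block_standard_error(a, n_blocks)
```

Applied to the per-frame polarization order parameter $\eta(t)$ from a single-temperature trajectory, this returns one $(Q, \sigma_Q)$ pair, i.e. one point with its error bar in Figure 2.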