
Externalization in LLM Agents: A Unified Review of Memory, Skills, Protocols and Harness Engineering

Chenyu Zhou, Huacan Chai, Wenteng Chen, Zihan Guo, Rong Shan, Yuanyi Song, Tianyi Xu, Yingxuan Yang, Aofan Yu, Weiming Zhang, Congming Zheng, Jiachen Zhu, Zeyu Zheng, Zhuosheng Zhang, Xingyu Lou, Changwang Zhang, Zhihui Fu, Jun Wang, Weiwen Liu, Jianghao Lin, Weinan Zhang

Abstract

Large language model (LLM) agents are increasingly built less by changing model weights than by reorganizing the runtime around them. Capabilities that earlier systems expected the model to recover internally are now externalized into memory stores, reusable skills, interaction protocols, and the surrounding harness that makes these modules reliable in practice. This paper reviews that shift through the lens of externalization. Drawing on the idea of cognitive artifacts, we argue that agent infrastructure matters not merely because it adds auxiliary components, but because it transforms hard cognitive burdens into forms the model can handle more reliably. Under this view, memory externalizes state across time, skills externalize procedural expertise, protocols externalize interaction structure, and harness engineering serves as the unification layer that coordinates them into governed execution. We trace a historical progression from weights to context to harness, analyze memory, skills, and protocols as three distinct but coupled forms of externalization, and examine how they interact inside a larger agent system. We further discuss the trade-off between parametric and externalized capability, identify emerging directions such as self-evolving harnesses and shared agent infrastructure, and examine open challenges in evaluation, governance, and the long-term co-evolution of models and external infrastructure. The result is a systems-level framework for explaining why practical agent progress increasingly depends not only on stronger models, but on better external cognitive infrastructure.


Paper Structure

This paper contains 129 sections and 8 figures.

Figures (8)

  • Figure 1: Externalization as the organizing principle of LLM agent design. Upper panel: The arc of human cognitive externalization from thought through language, writing, printing, to digital computation. Middle panel: The corresponding externalization arc for LLM agents, from weights through three externalization dimensions---Memory (externalized state), Skills (externalized expertise), and Protocols (externalized interaction)---to the Harness that unifies them. Lower panel: A literature landscape mapping representative works onto three capability layers---Weights, Context, and Harness---illustrating how research threads have progressively migrated outward. The parallel between the two arcs encodes a recursive claim: LLM agents achieve reliable agency by externalizing cognitive burdens along the same representational dimensions that have driven human cognitive history.
  • Figure 2: Community theme evolution across three capability layers. The stacked layers---Weights, Context, and Harness---show how the center of gravity in the LLM agent community has shifted outward over time, from parametric knowledge and prompting toward harness-level infrastructure such as tool ecosystems, protocols, skills, and multi-agent orchestration.
  • Figure 3: Externalization architecture of a harnessed LLM agent. The Harness sits at the center; three externalization dimensions---Memory (working context, semantic knowledge, episodic experience, personalized memory), Skills (operational procedures, decision heuristics, normative constraints), and Protocols (agent--user, agent--agent, agent--tools)---orbit around it. Operational elements such as sandboxing, observability, compression, evaluation, approval loops, and sub-agent orchestration mediate the interaction between the harness core and the externalized modules.
  • Figure 4: Memory as externalized state. Raw context from the ephemeral context window and environment feedback is converted into four persistent memory dimensions---working context, episodic experience, semantic knowledge, and personalized memory. These dimensions are organized through progressively more managed architectures: monolithic context, retrieval stores, hierarchical orchestration (with extraction, consolidation, forgetting, and OS-style hot/cold swapping), and adaptive memory systems (with dynamic modules and feedback-based strategy optimization via MOE, RL, etc.). On the harness side, execution traces from skills and protocols flow into externalized memory, which in turn supplies task-relevant content back to the agent core through direct recall and curated snapshots.
  • Figure 5: Skills as externalized expertise. The figure traces the full lifecycle of a skill through three phases---invocation, selection, and procedure. Skill Acquisition shows four pathways by which procedural know-how enters the system: authored by experts, distilled from episodic memory and trajectories, discovered through environment exploration and self-induction, or composed from existing units. Skill Artifact packages that know-how into operational procedures, decision heuristics, and normative constraints, accompanied by a manifest declaring capabilities, preconditions, and scope. Activation Pipeline handles registry-based discovery via semantic abstraction, progressive disclosure from abstract summaries to full guides, and composition that binds skills to tools, APIs, files, agents, and protocols. Runtime shows how the active context and the LLM execute the selected skill, while boundary conditions---staleness, portability limits, context-dependent degradation, and unsafe composition---constrain reliability.
  • ...and 3 more figures
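The skill manifest described in the Figure 5 caption (a declaration of capabilities, preconditions, and scope, disclosed progressively from an abstract summary to a full guide) can be pictured as a small structured record. The sketch below is purely illustrative: the `SkillManifest` dataclass, its field names, and the `disclose` method are hypothetical, since the paper does not prescribe a concrete schema.

```python
from dataclasses import dataclass, field

@dataclass
class SkillManifest:
    """Illustrative manifest for one skill: what it can do (capabilities),
    when it applies (preconditions), and where it may act (scope)."""
    name: str
    summary: str  # abstract summary, shown first under progressive disclosure
    capabilities: list[str] = field(default_factory=list)
    preconditions: list[str] = field(default_factory=list)
    scope: list[str] = field(default_factory=list)  # tools/files/agents it may bind to

    def disclose(self, level: str = "summary") -> str:
        # Progressive disclosure: expose only the one-line summary during
        # registry-based discovery; expand to the full declaration once the
        # agent actually selects this skill for execution.
        if level == "summary":
            return f"{self.name}: {self.summary}"
        return (
            f"{self.name}\n"
            f"  capabilities: {self.capabilities}\n"
            f"  preconditions: {self.preconditions}\n"
            f"  scope: {self.scope}"
        )

# Hypothetical usage: a skill that cleans CSV files inside a sandboxed workspace.
manifest = SkillManifest(
    name="csv_cleanup",
    summary="Normalize and deduplicate CSV files",
    capabilities=["parse csv", "dedupe rows"],
    preconditions=["file exists", "utf-8 encoded"],
    scope=["filesystem:/workspace"],
)
print(manifest.disclose())        # compact form for the skill registry
print(manifest.disclose("full"))  # full form once the skill is activated
```

The two-level `disclose` call mirrors the activation pipeline in the caption: discovery operates over semantic abstractions (summaries), and full procedural detail enters the context only on selection.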