
Context-Mediated Domain Adaptation in Multi-Agent Sensemaking Systems

Anton Wolter, Leon Haag, Vaishali Dhanoa, Niklas Elmqvist

Abstract

Domain experts possess tacit knowledge that they cannot easily articulate through explicit specifications. When experts modify AI-generated artifacts by correcting terminology, restructuring arguments, and adjusting emphasis, these edits reveal domain understanding that remains latent in traditional prompt-based interactions. Current systems treat such modifications as endpoint corrections rather than as implicit specifications that could reshape subsequent reasoning. We propose context-mediated domain adaptation, a paradigm in which user modifications to system-generated artifacts serve as implicit domain specifications that reshape LLM-powered multi-agent reasoning behavior. Through our system Seedentia, a web-based multi-agent framework for sensemaking, we demonstrate bidirectional semantic links between generated artifacts and system reasoning. Our approach enables specification bootstrapping, where vague initial prompts evolve into precise domain specifications through iterative human-AI collaboration; implicit knowledge transfer, where user edits are reverse-engineered into domain knowledge; and in-context learning, where agent behavior adapts based on observed correction patterns. We present results from an evaluation with domain experts who generated and modified research questions from academic papers. Our system extracted 46 domain knowledge entries from user modifications, demonstrating the feasibility of capturing implicit expertise through edit patterns, though the limited sample size constrains conclusions about systematic quality improvements.


Paper Structure

This paper contains 46 sections, 6 figures, and 6 tables.

Figures (6)

  • Figure 1: Interaction modalities. Implementation of interaction modes defined in our Context-Mediated Domain Adaptation framework. These interfaces demonstrate how user modifications are captured and transformed into domain knowledge through bidirectional semantic links, enabling the system to learn from expert corrections and improve subsequent artifact generation.
  • Figure 2: Prompt-based generation. Complete workflow for prompt-based artifact regeneration showing the input dialog for natural language instructions and the asynchronous generation process. The interface maintains application responsiveness during AI processing, demonstrating the fire-and-forget architecture that decouples user interactions from computational workloads.
  • Figure 3: Edit history visualization. The AIContentWrapper component provides integrated edit history functionality that powers context-mediated domain adaptation.
  • Figure 4: Agentic task processing graph. The backend workflow graph is centered on the planner router node, which conditionally dispatches tasks to specialized nodes for paper retrieval, context-based research question generation, and edit-driven knowledge extraction. Node outputs are merged back into a unified state and persisted via the agent tasks infrastructure, enabling asynchronous execution while maintaining traceable bidirectional links between user edits, extracted domain insights, and subsequent generations.
  • Figure 5: Context-mediated domain adaptation workflow tracing. Langfuse tracing demonstrates how the bidirectional learning cycle operates: user modifications flow through the extract_implicit_knowledge node (shown processing three user interactions), with extracted knowledge subsequently injected into the generate_evaluation_questions node's system prompt. Notice how the interface makes the complete knowledge transfer visible, from hierarchical execution flow (center) to detailed prompts and performance metrics (right), enabling validation of our CMDA framework's core claim that user edits systematically enhance AI reasoning.
  • ...and 1 more figure
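
The Figure 4 caption describes a planner-router node that conditionally dispatches tasks to specialized nodes (paper retrieval, question generation, knowledge extraction) and merges node outputs back into a unified state. A minimal sketch of that dispatch-and-merge pattern, assuming a dictionary-based state; all names and handler bodies here are illustrative stand-ins, not the actual Seedentia implementation:

```python
# Illustrative sketch of a planner-router graph node: dispatch by task
# type, then merge the specialized node's output into a unified state.
# Handler names and task keys are hypothetical, not from the paper.
from typing import Callable, Dict

State = Dict[str, object]

def retrieve_paper(state: State) -> State:
    # Stand-in for the paper-retrieval node.
    return {"paper": f"retrieved:{state['query']}"}

def generate_questions(state: State) -> State:
    # Stand-in for context-based research-question generation.
    return {"questions": [f"How does {state['query']} shape outcomes?"]}

def extract_knowledge(state: State) -> State:
    # Stand-in for edit-driven knowledge extraction.
    return {"knowledge": [f"correction pattern observed for {state['query']}"]}

ROUTES: Dict[str, Callable[[State], State]] = {
    "retrieve": retrieve_paper,
    "generate": generate_questions,
    "extract": extract_knowledge,
}

def planner_router(state: State) -> State:
    """Dispatch to the node matching the task type, then merge that
    node's output back into the unified state, as the caption describes."""
    handler = ROUTES[state["task"]]
    merged = dict(state)          # keep prior state (traceability)
    merged.update(handler(state)) # merge the node's output
    return merged
```

In an asynchronous deployment such as the one the caption describes, each merged state would additionally be persisted so that user edits, extracted insights, and subsequent generations remain linked.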

Theorems & Definitions (3)

  • Definition 1: Context-Mediated Domain Adaptation
  • Definition 2: Bidirectional Domain-Adaptive Representation
  • Definition 3: Adaptive Context Object