
Enhancing Structural Mapping with LLM-derived Abstractions for Analogical Reasoning in Narratives

Mohammadhossein Khojasteh, Yifan Jiang, Stefano De Giorgis, Frank van Harmelen, Filip Ilievski

Abstract

Analogical reasoning is a key driver of human generalization in problem-solving and argumentation. Yet, analogies between narrative structures remain challenging for machines. Cognitive engines for structural mapping are not directly applicable, as they assume pre-extracted entities, whereas LLMs' performance is sensitive to prompt format and the degree of surface similarity between narratives. This gap motivates a key question: What is the impact of enhancing structural mapping with LLM-derived abstractions on their analogical reasoning ability in narratives? To that end, we propose a modular framework named YARN (Yielding Abstractions for Reasoning in Narratives), which uses LLMs to decompose narratives into units, abstract these units, and then passes them to a mapping component that aligns elements across stories to perform analogical reasoning. We define and operationalize four levels of abstraction that capture both the general meaning of units and their roles in the story, grounded in prior work on framing. Our experiments reveal that abstractions consistently improve model performance, resulting in performance that is competitive with or better than end-to-end LLM baselines. Closer error analysis reveals remaining challenges in choosing the right level of abstraction and in incorporating implicit causality, as well as an emerging categorization of analogical patterns in narratives. YARN enables systematic variation of experimental settings to analyze component contributions, and to support future work, we make the code for YARN openly available.

Paper Structure

This paper contains 24 sections, 5 equations, 11 figures, and 7 tables.

Figures (11)

  • Figure 1: Structural mapping example derived from narratives in the ARN dataset (sourati-etal-2024-arn). The narratives are decomposed into units, which are then converted into abstractions to capture their roles and general meaning, thus facilitating a structural mapping. $S_B$ and $S_{T2}$ form an analogy, whereas $S_B$ and $S_{T1}$ are disanalogous because their final events are opposed to each other. In this example, the system needs to prioritize a far analogy over a near disanalogy, i.e., relational over surface similarity.
  • Figure 2: A high-level overview of the YARN pipeline: we first use LLMs to extract information by identifying units in the two narratives and converting them into abstractions, and then generate a structural one-to-one mapping between the units and abstractions of the two narratives.
  • Figure 3: Story Unit Abstraction. Story events are transformed into abstract representations that capture their underlying functional roles and semantic meaning. By moving beyond surface-level details, these abstractions establish the basis for structural mapping of stories.
  • Figure 4: Structural Mapping. For each pair of stories, all candidate mappings between units (or their abstractions) are generated and assigned similarity scores. A greedy algorithm is then used to generate a one-to-one mapping, producing a final score that reflects the overall structural correspondence between the stories.
  • Figure 5: Different levels of conceptual abstraction affect performance depending on the degree of surface similarity. This figure shows the effect of hierarchical conceptual abstraction levels for Qwen, with and without modifiers. Solid bars denote $A^{con, 0}$, hatched bars $A^{con, 1}$; pale indigo bars use only the root, and indigo bars use both the modifier and the root. The first pale indigo solid bar in each group corresponds to the setting in \ref{tab:main_table}.
  • ...and 6 more figures