Compositional Neuro-Symbolic Reasoning

Anugyan Das, Omkar Ghugarkar, Vishvesh Bhat, Asad Aali

Abstract

We study structured abstraction-based reasoning for the Abstraction and Reasoning Corpus (ARC) and compare its generalization to test-time approaches. Purely neural architectures lack reliable combinatorial generalization, while strictly symbolic systems struggle with perceptual grounding. We therefore propose a neuro-symbolic architecture that extracts object-level structure from grids, uses neural priors to propose candidate transformations from a fixed domain-specific language (DSL) of atomic patterns, and filters hypotheses using cross-example consistency. Instantiated as a compositional reasoning framework based on unit patterns inspired by human visual abstraction, the system augments large language models (LLMs) with object representations and transformation proposals. On ARC-AGI-2, it improves base LLM performance from 16% to 24.4% on the public evaluation set, and to 30.8% when combined with ARC Lang Solver via a meta-classifier. These results demonstrate that separating perception, neural-guided transformation proposal, and symbolic consistency filtering improves generalization without task-specific finetuning or reinforcement learning, while reducing reliance on brute-force search and sampling-based test-time scaling. We open-source the ARC-AGI-2 Reasoner code (https://github.com/CoreThink-AI/arc-agi-2-reasoner).
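The three stages named in the abstract (object-level perception, neural-guided transformation proposal, and cross-example consistency filtering) can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's implementation: the four-operation DSL, the exhaustive `propose` stand-in for the neural prior, and all function names are assumptions for illustration.

```python
# Hypothetical sketch of the pipeline: propose candidate transformations
# from a tiny DSL, then keep only those consistent across all training pairs.

# Toy stand-in DSL of atomic grid transformations (not the paper's DSL).
DSL = {
    "identity":  lambda g: g,
    "flip_h":    lambda g: [row[::-1] for row in g],
    "flip_v":    lambda g: g[::-1],
    "transpose": lambda g: [list(r) for r in zip(*g)],
}

def propose(train_pairs):
    """Stage 2 stand-in: a neural prior would rank likely candidates;
    here we simply enumerate every primitive in the DSL."""
    return list(DSL)

def consistent(name, train_pairs):
    """Stage 3: a rule survives only if it maps every training input
    to its corresponding training output."""
    return all(DSL[name](x) == y for x, y in train_pairs)

def solve(train_pairs, test_input):
    """Apply the first cross-example-consistent rule to the test input."""
    for name in propose(train_pairs):
        if consistent(name, train_pairs):
            return DSL[name](test_input)
    return None  # the full system would fall back to broader search

# One training pair whose transformation is a horizontal flip.
train = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]
print(solve(train, [[5, 6]]))  # -> [[6, 5]]
```

The key property this sketch preserves is that consistency filtering is symbolic and exact: any proposed rule that fails on even one training pair is rejected, regardless of how strongly the neural prior favored it.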


Paper Structure

This paper contains 31 sections, 31 equations, 2 figures, 4 tables.

Figures (2)

  • Figure 1: Neuro-symbolic reasoning pipeline for ARC. The system first extracts object-level representations from input grids, including connected components and structured attributes. A neural prior proposes candidate transformations from a constrained DSL. Candidate rules are then filtered for cross-example consistency before being forwarded to test-time solution generation.
  • Figure 2: Hierarchy of atomic visual reasoning patterns used in compositional ARC solving. Each pattern corresponds to a primitive operation within a constrained DSL, parameterized by object attributes such as color, position, or connectivity. The hierarchy illustrates how complex transformations are composed from a small set of reusable units, enabling systematic generalization while restricting combinatorial search.
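The compositionality described in the Figure 2 caption, where complex transformations are built from a small set of reusable atomic units, can be sketched as simple function composition. The primitives and the `compose` helper below are illustrative assumptions, not the paper's actual pattern hierarchy.

```python
from functools import reduce

# Two hypothetical atomic patterns (illustrative stand-ins).
flip_h = lambda g: [row[::-1] for row in g]
flip_v = lambda g: g[::-1]

def compose(*fns):
    """Chain atomic patterns left to right into a single transformation."""
    return lambda g: reduce(lambda acc, f: f(acc), fns, g)

# Two units compose into a behavior neither has alone: a 180-degree rotation.
rotate_180 = compose(flip_h, flip_v)
print(rotate_180([[1, 2], [3, 4]]))  # -> [[4, 3], [2, 1]]
```

Because every composite is built from a fixed, finite set of units, the space of candidate programs stays enumerable, which is what lets the system restrict combinatorial search while still generalizing systematically.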