LLM-ARC: Enhancing LLMs with an Automated Reasoning Critic

Aditya Kalyanpur, Kailash Karthik Saravanakumar, Victor Barres, Jennifer Chu-Carroll, David Melville, David Ferrucci

TL;DR

This work introduces LLM-ARC, a neuro-symbolic framework that couples a large language model (as Actor) with an Automated Reasoning Critic to tackle complex natural-language logical reasoning. The Actor generates declarative ASP code and semantic tests, while the Critic (an ASP solver) executes the code, validates the tests, and provides detailed explanations to guide iterative refinement. A novel test-generation regime for declarative logic, combined with end-to-end self-supervised training on dialog traces, yields a new state-of-the-art 88.32% accuracy on the FOLIO benchmark, significantly outperforming LLM-only baselines. The approach demonstrates robust, explainable reasoning and outlines concrete avenues for enhancements, such as improved critic grounding and scalable input handling, paving the way for production-ready neuro-symbolic reasoning systems.

Abstract

We introduce LLM-ARC, a neuro-symbolic framework designed to enhance the logical reasoning capabilities of Large Language Models (LLMs) by combining them with an Automated Reasoning Critic (ARC). LLM-ARC employs an Actor-Critic method where the LLM Actor generates declarative logic programs along with tests for semantic correctness, while the Automated Reasoning Critic evaluates the code, runs the tests, and provides feedback on test failures for iterative refinement. Implemented using Answer Set Programming (ASP), LLM-ARC achieves a new state-of-the-art accuracy of 88.32% on the FOLIO benchmark, which tests complex logical reasoning capabilities. Our experiments demonstrate significant improvements over LLM-only baselines, highlighting the importance of logic test generation and iterative self-refinement. We achieve our best result using a fully automated self-supervised training loop where the Actor is trained on end-to-end dialog traces with Critic feedback. We discuss potential enhancements and provide a detailed error analysis, showcasing the robustness and efficacy of LLM-ARC for complex natural language reasoning tasks.
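The Actor-Critic refinement loop described above can be sketched in pure Python. Note that `actor_generate` and `critic_evaluate` are hypothetical stand-ins, not the paper's implementation: a real system would call an LLM API for the Actor and an ASP solver (e.g. clingo) for the Critic.

```python
# Minimal sketch of the LLM-ARC self-correction loop.
# actor_generate and critic_evaluate are hypothetical stubs standing in for
# the LLM Actor (NL -> ASP code + tests) and the ASP-solver Critic.

def actor_generate(segment, code, tests, feedback=None):
    """Hypothetical Actor: extend the ASP code and tests for the next NL
    segment, optionally repairing earlier output using Critic feedback."""
    code = code + [f"% ASP encoding of: {segment}"]
    tests = tests + [f"% semantic test for: {segment}"]
    return code, tests

def critic_evaluate(code, tests):
    """Hypothetical Critic: run the ASP program, execute the tests, and
    return failure explanations (an empty list means all tests pass)."""
    return []  # stub: pretend every test passes

def llm_arc(segments, max_iterations=3):
    code, tests = [], []
    # Incrementally translate each NL segment into ASP code plus tests.
    for segment in segments:
        code, tests = actor_generate(segment, code, tests)
    # Self-correction loop: feed Critic failures back to the Actor until
    # all tests pass or the iteration budget is exhausted.
    for _ in range(max_iterations):
        failures = critic_evaluate(code, tests)
        if not failures:
            break
        code, tests = actor_generate(segments[-1], code, tests,
                                     feedback=failures)
    return code, tests

code, tests = llm_arc(["All birds fly.", "Tweety is a bird."])
```

The key design point mirrored here is that generation is incremental (one segment at a time) while repair is global: the Critic sees the whole program, so its feedback can trigger fixes anywhere in the accumulated code.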

Paper Structure

This paper contains 23 sections, 13 figures, and 2 tables.

Figures (13)

  • Figure 1: LLM-ARC Implementation based on Answer Set Programming (ASP): Given a problem description (a collection of natural language statements), the Actor (LLM) generates ASP code and tests in an iterative manner. At each step, the Actor takes as input the next segment (problem intents) to convert to ASP, along with the existing ASP code and tests generated so far, and outputs the updated code and tests based on the latest segment. The code is then run by the Critic (ASP Solver) and any test failures with explanations are fed back to the Actor. This self-correction loop runs until all tests pass or the maximum number of iterations is reached. The Actor is eventually trained on end-to-end dialog traces with the Critic feedback in this self-correction loop.
  • Figure 2: Logic Stratification of NL Statements in FOLIO
  • Figure 3: Training Example for NL to ASP used in the prompt (In-Context Learning)
  • Figure 4: Test Guidelines for the various logic classes with examples
  • Figure 5: Impact of Iterative Self-Correction: Over multiple iterations, overall LLM-ARC system accuracy goes up as more ASP programs fully compile and pass more tests. The chart on the left shows system accuracy over multiple iterations for the various LLM-ARC system variants. The two tables on the right show additional statistics around generated tests, code compilation, and test passing for the LLM-8-shot and Trained versions.
  • ...and 8 more figures
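To make the Actor's artifacts concrete, here is an invented illustration (not taken from the paper's prompts or figures) of the kind of declarative ASP program and accompanying semantic test the Actor might emit for a simple premise set; all predicate names are made up for this example:

```python
# Illustrative (invented) example of the two artifacts the Actor produces:
# a declarative ASP program and a semantic test. The Critic (an ASP solver)
# would run them together to validate the encoding.

asp_program = """
% Premises: "All birds fly. Tweety is a bird."
flies(X) :- bird(X).
bird(tweety).
"""

# Semantic test: the encoding should entail flies(tweety). Expressed as an
# ASP integrity constraint, the combined program becomes unsatisfiable if
# the expected fact is NOT derivable, which the Critic reports as a failure.
asp_test = """
:- not flies(tweety).
"""

print(asp_program + asp_test)
```

The test encodes an expectation about meaning (Tweety should be provably able to fly), not syntax, which is what distinguishes the paper's semantic test generation from mere compilation checks.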