Multi-LogiEval: Towards Evaluating Multi-Step Logical Reasoning Ability of Large Language Models
Nisarg Patel, Mohith Kulkarni, Mihir Parmar, Aashna Budhiraja, Mutsumi Nakamura, Neeraj Varshney, Chitta Baral
TL;DR
Multi-LogiEval introduces a comprehensive, multi-type logical-reasoning benchmark for large language models, spanning propositional logic, first-order logic, and non-monotonic reasoning with over 60 rule combinations across depths 1–5. The dataset comprises roughly 1.6k natural-language instances generated via a two-stage pipeline (rule combination followed by NL data generation), with extensive human validation. Zero-shot chain-of-thought evaluation shows that model accuracy generally declines as reasoning depth increases, though non-monotonic reasoning follows distinct trends with depth. A neuro-symbolic case study and detailed qualitative analysis pinpoint where LLMs struggle, providing a foundation for future improvements in reasoning, prompting, and alignment strategies; data and resources are released for broader research use.
Abstract
As Large Language Models (LLMs) continue to exhibit remarkable performance on natural language understanding tasks, there is a crucial need to measure their ability to perform human-like multi-step logical reasoning. Existing logical reasoning benchmarks often focus primarily on simplistic single-step reasoning, or on multi-step reasoning with a limited set of inference rules. Furthermore, the lack of datasets for evaluating non-monotonic reasoning represents a crucial gap, since such reasoning aligns more closely with human reasoning. To address these limitations, we propose Multi-LogiEval, a comprehensive evaluation dataset encompassing multi-step logical reasoning with various inference rules and depths. Multi-LogiEval covers three logic types (propositional, first-order, and non-monotonic), consisting of more than 30 inference rules and more than 60 of their combinations at various depths. Leveraging this dataset, we evaluate a range of LLMs, including GPT-4, ChatGPT, Gemini-Pro, Yi, Orca, and Mistral, employing zero-shot chain-of-thought prompting. Experimental results show a significant drop in LLM performance as the number of reasoning steps/depth increases (from an average accuracy of ~68% at depth-1 to ~43% at depth-5). We further conduct a thorough investigation of the reasoning chains generated by LLMs, which reveals several important findings. We believe Multi-LogiEval facilitates future research on evaluating and enhancing the logical reasoning ability of LLMs. Data is available at https://github.com/Mihir3009/Multi-LogiEval.
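To make the notion of reasoning "depth" concrete, the sketch below forward-chains Modus Ponens steps, so that answering requires as many rule applications as the depth of the instance. This is a hypothetical illustration of the underlying idea only: the function name, the symbolic encoding, and the example facts are ours, while Multi-LogiEval's actual instances are natural-language stories validated by humans, not symbolic tuples.

```python
# Hypothetical sketch: chaining Modus Ponens applications to mimic a
# depth-d reasoning instance. Not the dataset's actual format or pipeline.

def chain_modus_ponens(facts, rules):
    """Forward-chain implications (p -> q) from an initial set of facts.

    Each newly derived fact corresponds to one Modus Ponens step, so the
    number of chained applications needed to reach a conclusion is the
    reasoning depth of that conclusion.
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)  # one Modus Ponens application
                changed = True
    return known

# Depth-2 example: reaching "wet_roads" requires two chained steps.
facts = {"rains"}
rules = [("rains", "wet_streets"), ("wet_streets", "wet_roads")]
derived = chain_modus_ponens(facts, rules)
print("wet_roads" in derived)  # -> True
```

A depth-5 instance would analogously require five such chained applications, which is where the paper reports the largest accuracy drop.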
