GraphEval36K: Benchmarking Coding and Reasoning Capabilities of Large Language Models on Graph Datasets

Qiming Wu, Zichen Chen, Will Corcoran, Misha Sra, Ambuj K. Singh

TL;DR

GraphEval36K presents the first graph-focused coding benchmark, combining 40 problems and 36,900 test cases to evaluate large language models on graph-solving tasks. It introduces Structured Symbolic Decomposition (SSD), an instruction-based decomposition method that boosts LLM reasoning by splitting problems into cognitive and action steps, and demonstrates gains across multiple models, especially on challenging graph types. The study shows private LLMs generally outperform open-source ones, with SSD further narrowing gaps and delivering substantial improvements on complex graphs. Together, GraphEval36K and SSD offer a rigorous framework for analyzing LLM graph cognition, guiding future research in graph problem solving and program synthesis for graphs.

Abstract

Large language models (LLMs) have achieved remarkable success in natural language processing (NLP), demonstrating significant capabilities in processing and understanding text data. However, recent studies have identified limitations in LLMs' ability to manipulate, program, and reason about structured data, especially graphs. We introduce GraphEval36K, the first comprehensive graph dataset, comprising 40 graph coding problems and 36,900 test cases to evaluate the ability of LLMs on graph problem-solving. Our dataset is categorized into eight primary and four sub-categories to ensure a thorough evaluation across different types of graphs. We benchmark ten LLMs, finding that private models outperform open-source ones, though the gap is narrowing. We also analyze the performance of LLMs across directed vs undirected graphs, different kinds of graph concepts, and network models. Furthermore, to improve the usability of our evaluation framework, we propose Structured Symbolic Decomposition (SSD), an instruction-based method designed to enhance LLM performance on complex graph tasks. Results show that SSD improves the average passing rate of GPT-4, GPT-4o, Gemini-Pro and Claude-3-Sonnet by 8.38%, 6.78%, 29.28% and 25.28%, respectively.
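
The framework's core loop is simple: run each LLM-generated solution against a problem's test cases and report the fraction that pass. Below is a minimal, illustrative sketch of such a passing-rate harness; the TestCase type, the evaluate_candidate helper, and the toy problem are assumptions made for illustration, not the paper's actual evaluation code.

```python
# Minimal sketch of a passing-rate harness (illustrative only; TestCase,
# evaluate_candidate, and the toy problem are assumptions, not the paper's code).
from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class TestCase:
    args: tuple      # inputs for one graph problem instance
    expected: Any    # ground-truth answer


def evaluate_candidate(solution: Callable[..., Any], tests: List[TestCase]) -> float:
    """Run an LLM-generated solution against test cases and return the passing rate."""
    passed = 0
    for case in tests:
        try:
            if solution(*case.args) == case.expected:
                passed += 1
        except Exception:
            # Runtime errors in generated code count as failed test cases.
            pass
    return passed / len(tests) if tests else 0.0


# Example: a trivial "count the nodes of a graph given as an edge list" problem.
def count_nodes(edges):
    return len({v for e in edges for v in e})


tests = [TestCase(args=([(0, 1), (1, 2)],), expected=3)]
print(evaluate_candidate(count_nodes, tests))  # 1.0
```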

Paper Structure

This paper contains 45 sections, 8 equations, 16 figures, and 7 tables.

Figures (16)

  • Figure 1: Overview of the Evaluation Framework. For each problem, we input the problem statement, data examples, and code framework to the LLMs. The LLMs generate the corresponding code and provide explanations. Finally, we evaluate the code on GraphEval36K and return the score details.
  • Figure 2: The GraphEval36K dataset is constructed through a pipeline that begins with data collection from code contests (LeetCode). Next, problems are randomly sampled according to their difficulty levels, and corresponding graphs are generated using NetworkX. These graphs are then clustered and labeled based on whether they are connected (c), disconnected (dc), cyclic (cy), or acyclic (acy); a minimal sketch of this generation-and-labeling step follows the figure list. Verification steps ensure labeling accuracy, and the applicable labels vary with each graph's characteristics.
  • Figure 3: Distribution of graph problems across concepts and difficulty levels.
  • Figure 4: Structure of GraphEval36K. "U" denotes undirected graphs, "D" denotes directed graphs, with numbers indicating the count of cases in each category. The graphs are classified into eight main categories: sparse, planar, regular, dense, complete, Small-world, Erdos-Renyi, and Power-law. Some are further divided into four sub-categories: connected, disconnected, cyclic, and acyclic. Sub-categories may vary based on the characteristics of the main categories. Detailed dataset analysis is shown in the Appendix.
  • Figure 5: Average passing rate on sparse and planar graphs. We mark the absolute difference between the results on directed and undirected graphs.
  • ...and 11 more figures
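
The generation-and-labeling step of the dataset pipeline (Figure 2) and the connectivity/cyclicity labels in Figure 4 can be pictured with a short NetworkX sketch. This is a minimal illustration under assumed parameters (graph sizes, edge probabilities, seeds); the label_graph helper is hypothetical and is not the paper's actual pipeline.

```python
# Minimal sketch of generating graphs with NetworkX and labeling them as
# connected/disconnected and cyclic/acyclic (illustrative; all parameters and
# the label_graph helper are assumptions, not the paper's pipeline).
import networkx as nx


def label_graph(G: nx.Graph) -> dict:
    """Attach the connectivity and cyclicity labels used to cluster test graphs."""
    if G.is_directed():
        connected = nx.is_weakly_connected(G)
        acyclic = nx.is_directed_acyclic_graph(G)
    else:
        connected = nx.is_connected(G)
        acyclic = nx.is_forest(G)  # an undirected graph is acyclic iff it is a forest
    return {
        "connectivity": "c" if connected else "dc",
        "cycles": "acy" if acyclic else "cy",
    }


# A few network models matching the benchmark's main categories.
graphs = {
    "Erdos-Renyi": nx.erdos_renyi_graph(n=30, p=0.1, seed=0),
    "Small-world": nx.watts_strogatz_graph(n=30, k=4, p=0.2, seed=0),
    "Power-law": nx.barabasi_albert_graph(n=30, m=2, seed=0),
    "Directed sparse": nx.gnp_random_graph(n=30, p=0.05, seed=0, directed=True),
}

for name, G in graphs.items():
    print(name, label_graph(G))
```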