GEAKG: Generative Executable Algorithm Knowledge Graphs

Camilo Chacón Sartori, José H. García, Andrei Voicu Tomut, Christian Blum

Abstract

In the context of algorithms for problem solving, procedural knowledge -- the know-how of algorithm design and operator composition -- remains implicit in code, lost between runs, and must be re-engineered for each new domain. Knowledge graphs (KGs) have proven effective for organizing declarative knowledge, yet current KG paradigms provide limited support for representing procedural knowledge as executable, learnable graph structures. We introduce Generative Executable Algorithm Knowledge Graphs (GEAKG), a class of KGs whose nodes store executable operators, whose edges encode learned composition patterns, and whose traversal generates solutions. A GEAKG is generative (topology and operators are synthesized by a Large Language Model), executable (every node is runnable code), and transferable (learned patterns generalize zero-shot across domains). The framework is domain-agnostic at the engine level: the same three-layer architecture and Ant Colony Optimization (ACO)-based learning engine can be instantiated across domains, parameterized by a pluggable ontology (RoleSchema). Two case studies -- sharing no domain-specific framework code -- provide concrete evidence for this framework hypothesis: (1) Neural Architecture Search across 70 cross-dataset transfer pairs on two tabular benchmarks, and (2) Combinatorial Optimization, where knowledge learned on the Traveling Salesman Problem transfers zero-shot to scheduling and assignment domains. Taken together, the results support that algorithmic expertise can be explicitly represented, learned, and transferred as executable knowledge graphs.
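The abstract describes traversal of a graph whose edges carry learned composition weights. As a minimal sketch of the idea -- pheromone-weighted successor selection over a toy role graph, with illustrative role names and weights that are not the paper's actual API -- the mechanism might look like this:

```python
import random

# Hypothetical toy GEAKG: nodes are roles holding executable operators;
# edge weights are pheromones learned offline. All names are illustrative.
GRAPH = {
    "construct": {"2opt": 2.0, "swap": 0.5},  # heavier edge = learned preference
    "2opt": {"perturb": 1.0},
    "swap": {"perturb": 1.0},
    "perturb": {},  # terminal role
}

def choose_next(node, rng):
    """Roulette-wheel selection of a successor, weighted by pheromone."""
    edges = GRAPH[node]
    if not edges:
        return None
    total = sum(edges.values())
    r = rng.uniform(0.0, total)
    acc = 0.0
    for succ, tau in edges.items():
        acc += tau
        if r <= acc:
            return succ
    return succ  # numerical edge case: return the last successor

def traverse(start, rng):
    """Walk the graph from `start` until a terminal role is reached."""
    path = [start]
    node = start
    while (node := choose_next(node, rng)) is not None:
        path.append(node)
    return path

rng = random.Random(0)
print(traverse("construct", rng))
```

In the full framework each visited node would execute its operator on the current solution; here the traversal only records the chosen role sequence.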

Paper Structure

This paper contains 96 sections, 9 equations, 14 figures, 23 tables, 2 algorithms.

Figures (14)

  • Figure 1: End-to-end GEAKG pipeline. The offline phase generates MetaGraph topology (L0) and executable operators (L1) via LLM, then learns pheromones and symbolic rules (L2) via ACO. The complete knowledge is serialized as a GEAKG snapshot (~1--3 KB JSON). The online phase deploys the snapshot through a Symbolic Executor requiring zero LLM calls. Transfer to new domains requires only changing the domain binding (ctx).
  • Figure 2: GEAKG MetaGraph structures for two case studies, demonstrating framework generality. (a) Neural Architecture Search: 18 roles in 5 categories following the NAS design pipeline: Topology defines structure, Activation selects functions, Training configures optimization, Regularization prevents overfitting, Evaluation measures quality. Dashed feedback arrows from Evaluation enable iterative redesign. (b) Combinatorial optimization: 11 roles in 3 categories: Construction builds initial solutions, Local Search improves them, Perturbation escapes local optima. Dashed arrows show re-optimization after perturbation. Both graphs are traversed by the identical ACO engine with pheromone-weighted path selection; no framework code differs between cases.
  • Figure 3: Toy GEAKG: 3 roles, 2 categories, learned pheromones. Thick edge = learned preference for 2opt. The snapshot transfers to a new domain by swapping only the evaluation function.
  • Figure 4: Transfer mechanism: The complete GEAKG snapshot (L0 topology + L1 operators + L2 symbolic rules) learned on the TSP transfers directly to QAP. Only the domain binding (how ctx.evaluate() computes fitness) changes between domains. No LLM calls during online execution.
  • Figure 5: Scalability on QAP (shaded region: n > 100). Transferred knowledge enables stable performance where generic search degrades.
  • ...and 9 more figures
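Figures 1, 3, and 4 all revolve around the GEAKG snapshot: a small JSON serialization of the three layers, redeployed on a new domain by swapping only the domain binding (ctx.evaluate()). A hedged sketch of what such a snapshot and rebinding could look like follows; the field names, operator sources, and Ctx class are illustrative assumptions, not the paper's actual schema:

```python
import json

# Assumed snapshot layout. The paper only states that a snapshot holds
# L0 topology, L1 operators, and L2 symbolic rules in ~1-3 KB of JSON;
# the keys below are invented for illustration.
snapshot = {
    "L0_topology": {
        "roles": ["construct", "2opt", "perturb"],
        "edges": [["construct", "2opt"], ["2opt", "perturb"]],
    },
    "L1_operators": {
        # each node carries runnable source; a real operator body would go here
        "2opt": "def op(sol, ctx): return sol",
    },
    "L2_rules": {"pheromones": {"construct->2opt": 2.0}},
}

blob = json.dumps(snapshot)
restored = json.loads(blob)  # online phase: load the snapshot, no LLM calls

class Ctx:
    """Domain binding: the only piece that changes between domains."""
    def __init__(self, evaluate):
        self.evaluate = evaluate

# Stand-in objectives (not real TSP/QAP evaluators) to show the swap.
tsp_ctx = Ctx(lambda sol: sum(sol))
qap_ctx = Ctx(lambda sol: max(sol))
```

The point of the sketch is the asymmetry: everything learned (topology, operator code, pheromones) round-trips through JSON unchanged, while the fitness function is supplied fresh per domain.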

Theorems & Definitions (1)

  • Definition 3.1: Generative Executable Algorithm Knowledge Graph