A Cross-graph Tuning-free GNN Prompting Framework

Yaqi Chen, Shixun Huang, Ryan Twemlow, Lei Wang, John Le, Sheng Wang, Willy Susilo, Jun Yan, Jun Shen

Abstract

GNN prompting aims to adapt models across tasks and graphs without requiring extensive retraining. However, most existing graph prompting methods still require task-specific parameter updates and struggle to generalize across graphs, which limits their performance and undermines the core promise of prompting. In this work, we introduce a Cross-graph Tuning-free Prompting Framework (CTP), which supports both homogeneous and heterogeneous graphs, can be deployed directly on unseen graphs without further parameter tuning, and thus serves as a plug-and-play GNN inference engine. Extensive experiments on few-shot prediction tasks show that, compared with state-of-the-art methods, CTP achieves an average accuracy gain of 30.8% and a maximum gain of 54%, confirming its effectiveness and offering a new perspective on graph prompt learning.

Paper Structure

This paper contains 12 sections, 15 equations, 7 figures, 4 tables.

Figures (7)

  • Figure 1: An analogy between GNN and NLP prompts
  • Figure 2: The Prompting Framework. The overall workflow includes three stages, illustrated with node classification. During example and query collection, example and query nodes are sampled for each class. Each sampled node's context is then constructed and augmented from its neighborhood. Finally, a prompt graph is built, with context nodes initialized using their contextual embeddings. A learning process refines all node embeddings, and the learned parameters are used directly on unseen test graphs without any further update.
  • Figure 3: Neighborhood Centroid Collection. This figure illustrates the preprocessing step for clustering node embeddings and selecting centroids. (A) Initially, node embeddings are generated by a self-supervised GNN, enabling them to capture neighborhood similarity in the graph. (B) The embeddings are then clustered, and the cluster-center nodes, together with some randomly sampled nodes, are collected to form the neighborhood centroid candidate set. (C) Each centroid represents a class, and example and query nodes are subsequently sampled from the local neighborhood of each centroid for context construction. Unless otherwise specified, all centroids in the following sections refer to neighborhood centroids. (A minimal code sketch of this step appears after this list.)
  • Figure 4: Augmentation Strategy Comparison. When sampling context near a shared centroid, subgraphs often overlap. The Overly Independent Approach augments all nodes, including overlapping ones, which disrupts structural consistency and integrity. The Overly Conservative Approach blocks augmentation in these regions, reducing structural diversity. Our Balanced Approach selectively limits augmentation in overlapping areas, preserving consistency while ensuring sufficient diversity and integrity.
  • Figure 5: Average Initialization and Orthogonal Loss. (A) Compared to random initialization, average-embedding initialization provides more context-related label representations. (B) Cross-entropy aligns labels with nodes but neglects inter-label relations, often leading to unclear boundaries. Orthogonal loss mitigates this by encouraging label separation. (A minimal loss sketch appears after this list.)
  • ...and 2 more figures
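
As referenced in the Figure 3 caption, the centroid collection step reduces to clustering plus a nearest-node lookup. The paper's own code is not shown on this page, so the sketch below is only an assumption: the use of scikit-learn's KMeans, the nearest-node snapping, the `num_random` parameter, and all function names are illustrative rather than CTP's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def collect_neighborhood_centroids(embeddings, num_classes, num_random=10, seed=0):
    """Cluster self-supervised node embeddings into a centroid candidate set.

    `embeddings` is an (N, d) array from a self-supervised GNN, so nearby
    embeddings reflect neighborhood similarity in the graph (step A).
    """
    rng = np.random.default_rng(seed)

    # (B) Cluster the embeddings, one cluster per class.
    kmeans = KMeans(n_clusters=num_classes, n_init=10, random_state=seed)
    kmeans.fit(embeddings)

    # Snap each continuous cluster center to the nearest actual node,
    # since a centroid must be a real node with a neighborhood to sample from.
    center_nodes = [
        int(np.linalg.norm(embeddings - c, axis=1).argmin())
        for c in kmeans.cluster_centers_
    ]

    # Mix in a few randomly sampled nodes to form the candidate set.
    random_nodes = rng.choice(len(embeddings), size=num_random, replace=False)
    return center_nodes, sorted(set(center_nodes) | set(random_nodes.tolist()))
```

Example and query nodes (step C) would then be sampled from each returned centroid's local neighborhood.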
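
Likewise, for Figure 5, here is a minimal PyTorch sketch of average initialization (panel A) and an orthogonality penalty on label embeddings (panel B). The exact form of CTP's orthogonal loss is not given on this page; the Gram-matrix penalty and the weighting factor `lam` below are assumptions chosen to match the caption's description.

```python
import torch
import torch.nn.functional as F

def init_label_embeddings(example_embeddings, labels, num_classes):
    """(A) Average initialization: each label starts at the mean contextual
    embedding of its example nodes, rather than at a random vector."""
    out = torch.zeros(num_classes, example_embeddings.size(1))
    for c in range(num_classes):
        out[c] = example_embeddings[labels == c].mean(dim=0)
    return out

def orthogonal_loss(label_embeddings):
    """(B) Push the Gram matrix of normalized label embeddings toward the
    identity, i.e. encourage labels to be mutually orthogonal."""
    z = F.normalize(label_embeddings, dim=1)   # (C, d) unit vectors
    gram = z @ z.t()                           # (C, C) cosine similarities
    eye = torch.eye(z.size(0), device=z.device)
    return ((gram - eye) ** 2).mean()          # diagonal contributes zero

def total_loss(logits, targets, label_embeddings, lam=0.1):
    # Cross-entropy aligns nodes with labels; the orthogonal term
    # separates labels from one another, sharpening class boundaries.
    return F.cross_entropy(logits, targets) + lam * orthogonal_loss(label_embeddings)
```

Driving the Gram matrix toward the identity keeps the label embeddings mutually orthogonal, which is one standard way to realize the label separation the caption describes.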