LiteGuard: Efficient Task-Agnostic Model Fingerprinting with Enhanced Generalization

Guang Yang, Ziye Geng, Yihang Chen, Changqing Luo

Abstract

Task-agnostic model fingerprinting has recently gained increasing attention due to its ability to provide a universal framework applicable across diverse model architectures and tasks. The current state-of-the-art method, MetaV, ensures generalization by jointly training a set of fingerprints and a neural-network-based global verifier using two large and diverse model sets: one composed of pirated models (i.e., the protected model and its variants) and the other comprising independently trained models. However, publicly available models are scarce in many real-world domains, and constructing such model sets requires intensive training and massive computational resources, posing a significant barrier to deployment. Reducing the number of models can alleviate the overhead, but increases the risk of overfitting, a problem further exacerbated by MetaV's entangled design, in which all fingerprints and the global verifier are jointly trained. This overfitting issue compromises the generalization capability for verifying unseen models. In this paper, we propose LiteGuard, an efficient task-agnostic fingerprinting framework that attains enhanced generalization while significantly lowering computational cost. Specifically, LiteGuard introduces two key innovations: (i) a checkpoint-based model set augmentation strategy that enriches model diversity by leveraging intermediate model snapshots captured during training of each pirated and independently trained model, thereby alleviating the need to train a large number of such models, and (ii) a local verifier architecture that pairs each fingerprint with a lightweight local verifier, thereby reducing parameter entanglement and mitigating overfitting. Extensive experiments across five representative tasks show that LiteGuard consistently outperforms MetaV in both generalization performance and computational efficiency.
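The checkpoint-based augmentation described above can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: the function names (`select_checkpoints`, `augment_model_set`) and the representation of snapshots as `(epoch, state)` pairs are assumptions, while the two hyperparameters mirror the checkpoint selection interval $l$ and start epoch $e_{s}$ varied in Figure 5. The idea is that one training run of each pirated or independently trained model yields many intermediate snapshots, which stand in for additionally trained models.

```python
def select_checkpoints(snapshots, start_epoch, interval):
    """Keep snapshots taken at epochs e_s, e_s + l, e_s + 2l, ...

    `snapshots` is a list of (epoch, state) pairs recorded during a
    single training run; `start_epoch` and `interval` correspond to
    e_s and l in the paper's notation.
    """
    return [
        state
        for epoch, state in snapshots
        if epoch >= start_epoch and (epoch - start_epoch) % interval == 0
    ]


def augment_model_set(per_model_snapshots, start_epoch, interval):
    """Build an enlarged model set from each model's snapshot history,
    instead of training a large number of separate models."""
    augmented = []
    for snapshots in per_model_snapshots:
        augmented.extend(select_checkpoints(snapshots, start_epoch, interval))
    return augmented
```

For example, with snapshots recorded every epoch over 10 epochs, `select_checkpoints(snaps, start_epoch=4, interval=2)` keeps the snapshots from epochs 4, 6, and 8, so each trained model contributes several members to the pirated or independent model set.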

Paper Structure

This paper contains 28 sections, 1 equation, 7 figures, 8 tables.

Figures (7)

  • Figure 1: A typical model fingerprinting scenario.
  • Figure 2: The overview of LiteGuard.
  • Figure 3: (a) The ROC curve and (b) the confidence score distribution.
  • Figure 4: The AUCs achieved by MetaV and LiteGuard on (a) the molecular property prediction task (GNN/QM9) and (b) the protein property regression task (MLP/CASP).
  • Figure 5: The AUCs under varying checkpoint selection intervals $l$ for (a) tabular data generation task (AE/CH) and (b) time-series sequence generation task (RNN/Weather). The AUCs under varying start epoch $e_{s}$ for (c) tabular data generation task (AE/CH) and (d) time-series sequence generation task (RNN/Weather).
  • ...and 2 more figures