PDFA Distillation via String Probability Queries
Robert Baumgartner, Sicco Verwer
TL;DR
PDFA Distillation via String Probability Queries addresses the interpretability of sequence models by distilling neural network behavior into probabilistic deterministic finite automata (PDFA). It introduces a string-probability query mechanism that extends the L#/MAT framework, using an observation tree and a red-blue minimization with an error bound to build a compact, deterministic PDFA surrogate. The approach includes convergence guarantees for the estimated probabilities and practical strategies for stopping, probability clipping, and counterexample processing. Empirical results on the TAYSIR dataset show that the distilled PDFA achieve low MSE with relatively few states and competitive runtimes, highlighting the method's potential for explainable ML and reverse-engineering applications.
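For intuition, here is a minimal sketch of what an error-bounded compatibility test in the red-blue minimization could look like. The `Node` type, its `dist` field, and the per-token bound of 0.05 are illustrative assumptions, not the paper's exact criterion.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Estimated next-token distribution at this observation-tree node,
    # e.g. {"a": 0.6, "b": 0.3, "<eos>": 0.1}. Illustrative only.
    dist: dict = field(default_factory=dict)

def compatible(red: Node, blue: Node, epsilon: float = 0.05) -> bool:
    """Sketch of a merge test: a red and a blue node are considered
    compatible if their estimated next-token probabilities differ by
    at most epsilon on every token."""
    tokens = set(red.dist) | set(blue.dist)
    return all(abs(red.dist.get(t, 0.0) - blue.dist.get(t, 0.0)) <= epsilon
               for t in tokens)
```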
Abstract
Probabilistic deterministic finite automata (PDFA) are discrete event systems modeling conditional probabilities over languages: given an already observed sequence of tokens, they return the probability that a token of interest appears next. These models have gained interest in the domain of explainable machine learning, where they serve as surrogate models for neural networks trained as language models. In this work we present an algorithm to distill PDFA from neural networks. Our algorithm is a derivative of the L# algorithm and is capable of learning PDFA from a new type of query, in which the algorithm infers conditional probabilities from the probability that the queried string occurs. We show its effectiveness on a recent public dataset by distilling PDFA from a set of trained neural networks.
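As a rough illustration of this query type, the sketch below derives conditional next-token probabilities from string-probability queries. The `query_probability` callable stands in for the teacher network and is an assumption here; we further assume it returns the total probability of sequences starting with the given prefix, so that P(a | w) = P(wa) / P(w).

```python
def conditional_next_token(query_probability, prefix, alphabet):
    """Estimate P(token | prefix) = P(prefix + [token]) / P(prefix)
    using two string-probability queries per token."""
    p_prefix = query_probability(prefix)
    if p_prefix == 0.0:
        # The prefix never occurs under the teacher; the conditional
        # distribution is undefined, so return zeros (cf. clipping).
        return {token: 0.0 for token in alphabet}
    return {token: query_probability(prefix + [token]) / p_prefix
            for token in alphabet}

# Usage sketch: tokens as integers, the teacher as any callable.
# probs = conditional_next_token(teacher_prefix_prob, [1, 4, 2], range(10))
```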
