Analyzing Multi-Head Attention on Trojan BERT Models
Jingwei Wang
TL;DR
The paper tackles the problem of NLP trojan attacks by examining how multi-head attention differs between trojan and benign BERT models in sentiment analysis. It introduces head-level categories—trigger heads, semantic heads, and specific heads—to explain trojan behavior, and uses population-level statistics to show these patterns hold broadly across models rather than in isolated cases. Three attention-based detectors are proposed (naive, enumerate-trigger, and reverse-engineering) that leverage a small amount of clean data to distinguish trojan from benign models, achieving near-perfect separation in at least one setting. The findings offer interpretability insights into trojan NLP models and lay the groundwork for practical detection and defense against backdoor attacks in language models.
Abstract
This project investigates the behavior of multi-head attention in Transformer models, focusing on the differences between benign and trojan models in the context of sentiment analysis. Trojan attacks cause models to perform normally on clean inputs but misclassify inputs containing predefined triggers. We characterize attention head functions in trojan and benign models, identifying specific 'trojan' heads and analyzing their behavior.
