Catching Chameleons: Detecting Evolving Disinformation Generated using Large Language Models

Bohan Jiang, Chengshuai Zhao, Zhen Tan, Huan Liu

TL;DR

This work proposes DELD (Detecting Evolving LLM-generated Disinformation), a parameter-efficient approach that jointly leverages the general fact-checking capabilities of pre-trained language models (PLM) and the independent disinformation generation characteristics of various LLMs to facilitate knowledge accumulation and transformation.

Abstract

Despite recent advancements in detecting disinformation generated by large language models (LLMs), current efforts overlook the ever-evolving nature of this disinformation. In this work, we investigate a challenging yet practical research problem of detecting evolving LLM-generated disinformation. Disinformation evolves constantly through the rapid development of LLMs and their variants. As a consequence, the detection model faces significant challenges. First, it is inefficient to train separate models for each disinformation generator. Second, detection performance degrades when evolving LLM-generated disinformation is encountered sequentially. To address this problem, we propose DELD (Detecting Evolving LLM-generated Disinformation), a parameter-efficient approach that jointly leverages the general fact-checking capabilities of pre-trained language models (PLMs) and the distinct disinformation generation characteristics of various LLMs. In particular, the learned characteristics are concatenated sequentially to facilitate knowledge accumulation and transformation. DELD addresses the issue of label scarcity by integrating the semantic embeddings of disinformation with trainable soft prompts to elicit model-specific knowledge. Our experiments show that DELD significantly outperforms state-of-the-art methods. Moreover, our method provides critical insights into the unique patterns of disinformation generation across different LLMs, offering valuable perspectives in this line of research.
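The abstract describes DELD's core mechanism: a separate soft prompt is learned for each disinformation generator while the PLM stays frozen, the learned prompts are concatenated sequentially to accumulate knowledge, and only a lightweight classifier is fine-tuned on top. The shape-level idea can be sketched as follows; the generator names, prompt length, and embedding dimension are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of DELD-style soft-prompt accumulation.
# EMB_DIM and PROMPT_LEN are hypothetical choices, not values from the paper.
EMB_DIM = 16     # PLM embedding dimension (assumed)
PROMPT_LEN = 4   # soft-prompt length per generator (assumed)

rng = np.random.default_rng(0)

def train_soft_prompt(generator_name: str) -> np.ndarray:
    """Stand-in for learning one generator-specific soft prompt.
    In DELD, each prompt is trained while the PLM remains frozen."""
    return rng.standard_normal((PROMPT_LEN, EMB_DIM))

# One soft prompt per disinformation generator, learned in sequence
# (generator names are placeholders).
generators = ["generator_a", "generator_b", "generator_c", "generator_d"]
prompts = [train_soft_prompt(g) for g in generators]

# Concatenate the learned prompts to accumulate knowledge across generators.
accumulated = np.concatenate(prompts, axis=0)  # shape: (4 * PROMPT_LEN, EMB_DIM)

def build_input(article_embeddings: np.ndarray) -> np.ndarray:
    """Prepend the accumulated prompts to the article's semantic embeddings;
    only a classifier on top of the frozen PLM would then be fine-tuned."""
    return np.concatenate([accumulated, article_embeddings], axis=0)

article = rng.standard_normal((10, EMB_DIM))  # 10 token embeddings (assumed)
inputs = build_input(article)
print(inputs.shape)  # 4 prompts x 4 tokens + 10 article tokens = 26 rows
```

The sketch only illustrates how prompt concatenation grows the input while the backbone parameters stay untouched; the actual training objective and classifier are described in the paper.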

Paper Structure

This paper contains 22 sections, 11 equations, 5 figures, 4 tables.

Figures (5)

  • Figure 1: An overview of the online disinformation generation and detection pathway over time. During the pre-LLM era, efforts mainly focused on detecting human-written disinformation. After the launch of ChatGPT, the landscape of disinformation generation and detection changed: machine-generated disinformation is evolving through the rapid development of advanced large language models. In this study, we focus on the novel research problem of detecting evolving LLM-generated disinformation (bottom-right square).
  • Figure 2: An overview of the proposed method DELD. The left panel shows the training pipeline for the trainable prompts: we train an individual soft prompt for each disinformation generator, then concatenate the prompts to facilitate knowledge accumulation and transformation. The right panel shows that, after concatenating the prompts, we fine-tune only the classifier while keeping the learned prompts and the PLM frozen.
  • Figure 3: The prompt template for LLaMA and ChatGPT.
  • Figure 4: Comparison of model forgetting across different datasets.
  • Figure 5: Illustration of characteristics of four disinformation generators using word cloud.