
USDC: A Dataset of $\underline{U}$ser $\underline{S}$tance and $\underline{D}$ogmatism in Long $\underline{C}$onversations

Mounika Marreddy, Subba Reddy Oota, Venkata Charan Chinni, Manish Gupta, Lucie Flek

TL;DR

USDC introduces a large-scale dataset of user stance and dogmatism in long, multi-user Reddit conversations, addressing the limitations of post-level annotations by focusing on user-level dynamics across entire threads. It leverages LLMs (GPT-4 and Mistral Large) in zero-/one-/few-shot settings to annotate stance and dogmatism, with majority voting to derive gold labels and human validation showing reasonable agreement (IAA ~0.49–0.57). The authors fine-tune and instruction-tune several small language models on USDC, achieving stronger stance performance with instruction-tuning (up to 56.2 weighted F1) and mixed results for dogmatism, and demonstrate transfer learning to SPINOS, MT-CDS, and Twitter stance datasets. The work provides a practical resource for moderation, dynamic user representation, and dialogue systems, while also analyzing annotation reliability, recency bias, and the importance of full-context long-form conversations for labeling opinions.

Abstract

Analyzing user opinion changes in long conversation threads is extremely critical for applications like enhanced personalization, market research, political campaigns, customer service, targeted advertising, and content moderation. Unfortunately, previous studies on stance and dogmatism in user conversations have focused on training models using datasets annotated at the post level, treating each post as independent and randomly sampling posts from conversation threads. Hence, first, we build a dataset for studying user opinion fluctuations in 764 long multi-user Reddit conversation threads, called USDC. USDC contains annotations for two tasks: i) User Stance classification, which involves labeling a user's stance in a post within a conversation on a five-point scale; ii) User Dogmatism classification, which involves labeling a user's overall opinion in the conversation on a four-point scale. Besides being time-consuming and costly, manual annotations for USDC are challenging because: 1) Conversation threads could be very long, increasing the chances of noisy annotations; and 2) Interpreting instances where a user changes their opinion within a conversation is difficult because often such transitions are subtle and not expressed explicitly. Hence, we leverage majority voting on zero-shot, one-shot, and few-shot annotations from Mistral Large and GPT-4 to automate the annotation process. Human annotations on 200 test conversations achieved inter-annotator agreement scores of 0.49 for stance and 0.50 for dogmatism with these LLM annotations, indicating a reasonable level of consistency between human and LLM annotations. USDC is then used to fine-tune and instruction-tune multiple deployable small language models like LLaMA, Falcon, and Vicuna for the stance and dogmatism classification tasks. We make the code and dataset publicly available [https://github.com/mounikamarreddy/USDC].
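The gold-label procedure above aggregates six LLM annotations per item ({GPT-4, Mistral Large} across zero-/one-/few-shot settings) by majority vote. A minimal sketch, assuming a simple plurality rule; the tie-breaking behavior (first label reaching the highest count) is our assumption, not necessarily the paper's exact rule:

```python
from collections import Counter

def majority_vote(labels):
    """Derive a gold label from the six LLM annotations.

    `labels` is a list of stance (or dogmatism) strings, one per
    annotation setting. Ties are broken by first occurrence among the
    most frequent labels (an assumption for illustration).
    """
    counts = Counter(labels)
    label, _ = counts.most_common(1)[0]
    return label

# Example: six hypothetical annotations for one comment.
votes = ["Somewhat In Favor", "Somewhat In Favor", "Neutral",
         "Somewhat In Favor", "Strongly Against", "Neutral"]
print(majority_vote(votes))  # -> Somewhat In Favor
```

The same function applies unchanged to the four-point dogmatism labels, since it only counts label strings.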

Paper Structure

This paper contains 45 sections, 22 figures, and 13 tables.

Figures (22)

  • Figure 1: Sample Reddit conversation on "Capitalism vs. Socialism" with Stance (for every comment $\{c_i\}_{i=1}^6$) and Dogmatism (for every author $\{a_j\}_{j=1}^3$) labels from Mistral Large and GPT-4. The submission content favors socialism and examines how the authors position their opinions regarding socialism vs. capitalism.
  • Figure 2: Generating annotations using LLMs: We pass the entire conversation for each Reddit thread as JSON. The JSON includes top two authors who posted most comments, alongside annotation guidelines for stance and dogmatism labels in system prompt.
  • Figure 3: Failure cases of LLMs: In Mistral Large's few-shot output (left), the generated ids ("f9mmzx1", "f9mna88") did not match the original comment ids ("f6mmzx1", "f6mna88"); in GPT-4's zero-shot output (right), the expected key "label" was replaced by the generated key "body".
  • Figure 4: Distribution of stance labels across LLM annotations in six settings ($\{$GPT-4, Mistral Large$\}$ $\times$ $\{$Zero-shot, One-shot, Few-shot$\}$). Somewhat In Favor is the most frequent class across all six settings, while Strongly In Favor is the least frequent.
  • Figure 5: Distribution of dogmatism labels across LLM annotations in six settings ($\{$GPT-4, Mistral Large$\}$ $\times$ $\{$Zero-shot, One-shot, Few-shot$\}$). Open to Dialogue is the most frequent class across all six settings, while Flexible is the least frequent.
  • ...and 17 more figures
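The failure cases in Figure 3 (mismatched comment ids, a "body" key where "label" was expected) suggest validating LLM output before accepting annotations. A hypothetical validation sketch; the record schema (an "id" and a "label" per annotated comment) is an illustrative assumption, not the paper's exact JSON format:

```python
import json

def validate_annotations(thread_comment_ids, llm_output_json):
    """Return (valid, errors) for a JSON list of annotation records.

    Checks that every annotated id exists in the source thread and that
    every record carries a "label" key -- the two failure modes shown
    in Figure 3 (id drift and key substitution).
    """
    errors = []
    try:
        records = json.loads(llm_output_json)
    except json.JSONDecodeError as e:
        return False, [f"malformed JSON: {e}"]
    for rec in records:
        if rec.get("id") not in thread_comment_ids:
            errors.append(f"unknown comment id: {rec.get('id')}")
        if "label" not in rec:
            errors.append(f"missing 'label' key in record {rec.get('id')}")
    return not errors, errors

# Example mirroring Figure 3: the model emitted "f9..." ids instead of
# the thread's "f6..." ids, and "body" instead of "label".
ok, errs = validate_annotations(
    {"f6mmzx1", "f6mna88"},
    json.dumps([{"id": "f9mmzx1", "body": "Somewhat In Favor"}]),
)
print(ok, errs)  # -> False, two validation errors
```

Rejected threads can then be re-queried rather than silently dropped, which keeps the six annotation settings aligned on the same set of comments.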