USDC: A Dataset of $\underline{U}$ser $\underline{S}$tance and $\underline{D}$ogmatism in Long $\underline{C}$onversations
Mounika Marreddy, Subba Reddy Oota, Venkata Charan Chinni, Manish Gupta, Lucie Flek
TL;DR
USDC introduces a large-scale dataset of user stance and dogmatism in long, multi-user Reddit conversations, addressing the limitations of post-level annotation by modeling user-level dynamics across entire threads. It leverages LLMs (GPT-4 and Mistral Large) in zero-, one-, and few-shot settings to annotate stance and dogmatism, with majority voting to derive gold labels; human validation shows reasonable agreement (IAA ~0.49–0.57). The authors fine-tune and instruction-tune several small language models on USDC, achieving stronger stance performance with instruction-tuning (up to 56.2 weighted F1) and mixed results for dogmatism, and demonstrate transfer learning to SPINOS, MT-CDS, and Twitter stance datasets. The work provides a practical resource for moderation, dynamic user representation, and dialogue systems, while also analyzing annotation reliability, recency bias, and the importance of full-context long-form conversations for labeling opinions.
Abstract
Analyzing user opinion changes in long conversation threads is critical for applications like enhanced personalization, market research, political campaigns, customer service, targeted advertising, and content moderation. Unfortunately, previous studies on stance and dogmatism in user conversations have trained models on datasets annotated at the post level, treating each post as independent and randomly sampling posts from conversation threads. Hence, we first build USDC, a dataset for studying user opinion fluctuations in 764 long multi-user Reddit conversation threads. USDC contains annotations for two tasks: i) User Stance classification, which involves labeling a user's stance in a post within a conversation on a five-point scale; ii) User Dogmatism classification, which involves labeling a user's overall opinion in the conversation on a four-point scale. Besides being time-consuming and costly, manual annotation for USDC is challenging because: 1) conversation threads can be very long, increasing the chances of noisy annotations; and 2) interpreting instances where a user changes their opinion within a conversation is difficult, because such transitions are often subtle and not expressed explicitly. Hence, we leverage majority voting on zero-shot, one-shot, and few-shot annotations from Mistral Large and GPT-4 to automate the annotation process. Human annotations on 200 test conversations achieved inter-annotator agreement scores of 0.49 for stance and 0.50 for dogmatism with these LLM annotations, indicating a reasonable level of consistency between human and LLM annotations. USDC is then used to fine-tune and instruction-tune multiple deployable small language models, such as LLaMA, Falcon, and Vicuna, for the stance and dogmatism classification tasks. We make the code and dataset publicly available at [https://github.com/mounikamarreddy/USDC].
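The gold-label derivation described in the abstract (each item annotated by two LLMs in zero-, one-, and few-shot settings, then combined by majority vote) can be sketched as follows. This is a minimal illustration: the exact label names and the tie-breaking rule are assumptions for this sketch, not details taken from the paper.

```python
from collections import Counter

# Illustrative five-point stance scale; the paper's exact label
# wording may differ.
STANCE_LABELS = [
    "strongly against", "somewhat against", "neutral",
    "somewhat in favor", "strongly in favor",
]

def majority_label(annotations):
    """Return the most frequent label among the LLM annotations.

    Ties are broken deterministically by preferring the label that
    appears first in STANCE_LABELS (an assumed rule for this sketch).
    """
    counts = Counter(annotations)
    top = max(counts.values())
    tied = [label for label, c in counts.items() if c == top]
    return min(tied, key=STANCE_LABELS.index)

# Six hypothetical annotations for one post:
# GPT-4 and Mistral Large, each in zero-/one-/few-shot settings.
votes = [
    "neutral", "somewhat in favor", "somewhat in favor",
    "neutral", "somewhat in favor", "strongly in favor",
]
print(majority_label(votes))  # "somewhat in favor"
```

A deterministic tie-break keeps the aggregation reproducible; in practice one could instead discard ties or route them to human review.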
