Leveraging LLMs for Dialogue Quality Measurement

Jinghan Jia, Abi Komma, Timothy Leffel, Xujun Peng, Ajay Nagesh, Tamer Soliman, Aram Galstyan, Anoop Kumar

TL;DR

This paper investigates using large language models (LLMs) to automatically evaluate dialogue quality by examining how model size, instruction-tuning, in-context exemplars, and chain-of-thought prompting affect alignment with human judgments. It proposes two evaluation pipelines—logits-based scoring and generation-based ratings—and validates them on public and internal datasets, employing algorithmic in-context sample selection and LoRA-based supervised fine-tuning. Key findings show that larger models improve zero-shot performance, instruction-tuning boosts zero-shot results, and both in-context-example selection and supervised fine-tuning substantially enhance correlation with human annotations; a CoT-based Analysis-first approach yields the best consistency between analysis, scores, and explanations. Overall, the work demonstrates that suitably fine-tuned, reasoning-capable LLMs can effectively support automated dialogue evaluation, with implications for scalable, data-efficient assessment in dialogue systems.
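
To make the first of the two pipelines concrete, below is a minimal sketch of a logits-based scorer, assuming a HuggingFace-style causal LM: the dialogue is packed into a rating prompt and the score is read from the model's next-token probabilities over the candidate rating tokens. The prompt wording, the 1–5 scale, and the placeholder model are illustrative assumptions, not the paper's exact setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative assumption: ratings are rendered as the single tokens "1".."5".
RATING_TOKENS = ["1", "2", "3", "4", "5"]

def logits_based_score(dialogue: str, model, tokenizer) -> float:
    """Score a dialogue from the next-token distribution over rating tokens
    (the logits-based pipeline)."""
    prompt = (
        "Rate the quality of the following dialogue on a scale of 1 to 5.\n\n"
        f"{dialogue}\n\nRating:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token
    ids = [tokenizer(t, add_special_tokens=False).input_ids[0] for t in RATING_TOKENS]
    probs = torch.softmax(logits[ids], dim=-1)  # renormalize over the rating tokens
    # Expected rating under the model's distribution: a soft score in [1, 5].
    return float(sum(p * (i + 1) for i, p in enumerate(probs.tolist())))

if __name__ == "__main__":
    name = "gpt2"  # placeholder; the paper studies much larger LLMs
    model = AutoModelForCausalLM.from_pretrained(name)
    tokenizer = AutoTokenizer.from_pretrained(name)
    print(logits_based_score("User: Play jazz.\nBot: Playing jazz on Spotify.", model, tokenizer))
```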

Abstract

In task-oriented conversational AI evaluation, unsupervised methods poorly correlate with human judgments, and supervised approaches lack generalization. Recent advances in large language models (LLMs) show robust zero-shot and few-shot capabilities across NLP tasks. This paper explores using LLMs for automated dialogue quality evaluation, experimenting with various configurations on public and proprietary datasets. Manipulating factors such as model size, in-context examples, and selection techniques, we examine "chain-of-thought" (CoT) reasoning and label extraction procedures. Our results show that (1) larger models yield more accurate dialogue labels; (2) algorithmic selection of in-context examples outperforms random selection; (3) CoT reasoning, in which an LLM is asked to provide justifications before outputting final labels, improves performance; and (4) fine-tuned LLMs outperform out-of-the-box ones. Our results indicate that LLMs that are suitably fine-tuned and have sufficient reasoning capabilities can be leveraged for automated dialogue evaluation.
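
The "algorithmic selection of in-context examples" can be approximated by a nearest-neighbour retriever: embed a labelled pool of dialogues and the query dialogue, then pick the most similar exemplars as few-shot demonstrations. The sketch below uses semantic-similarity retrieval with sentence-transformers as one plausible strategy; the paper's actual selection algorithm, embedding model, and data format may differ.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def select_in_context_examples(query: str, pool: list[dict], k: int = 4) -> list[dict]:
    """Pick the k labelled dialogues most similar to the query dialogue.

    `pool` items are assumed to look like {"dialogue": str, "score": int};
    nearest-neighbour retrieval here is an illustrative stand-in for the
    paper's in-context example selection.
    """
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    pool_emb = encoder.encode([ex["dialogue"] for ex in pool], normalize_embeddings=True)
    query_emb = encoder.encode([query], normalize_embeddings=True)[0]
    sims = pool_emb @ query_emb            # cosine similarity (embeddings are normalized)
    top = np.argsort(-sims)[:k]            # indices of the k most similar exemplars
    return [pool[i] for i in top]
```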

Paper Structure

This paper contains 16 sections, 3 equations, 2 figures, and 4 tables.

Figures (2)

  • Figure 1: Schematic overview of LLM dialogue evaluation methods. Left: Pipeline using logits method for generating scores from LLMs. Right: Pipeline employing generation method to produce ratings from LLMs.
  • Figure 2: Score distribution in train and test splits from the Amazon-internal dataset.
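
For the right-hand (generation-based) pipeline in Figure 1, the rating has to be extracted from free-form model output. Below is a minimal sketch of an analysis-first CoT prompt and a parser; the prompt template and the regex-based label extraction are assumptions for illustration, not the paper's exact procedure.

```python
import re
from typing import Callable, Optional

ANALYSIS_FIRST_PROMPT = """You are evaluating a task-oriented dialogue.
First write a short analysis of the dialogue, then give a rating.

Dialogue:
{dialogue}

Respond in exactly this format:
Analysis: <your reasoning>
Rating: <an integer from 1 to 5>"""

def parse_rating(generation: str) -> Optional[int]:
    """Extract the integer rating from an analysis-first generation,
    returning None when the model did not follow the format."""
    match = re.search(r"Rating:\s*([1-5])", generation)
    return int(match.group(1)) if match else None

def generation_based_score(dialogue: str, generate: Callable[[str], str]) -> Optional[int]:
    """`generate` is any callable mapping a prompt string to the model's
    text completion (e.g. an API call or a local generation pipeline)."""
    output = generate(ANALYSIS_FIRST_PROMPT.format(dialogue=dialogue))
    return parse_rating(output)
```

Keeping the analysis before the rating mirrors the CoT finding in the abstract: the model states its justification first, and the final label is then read off deterministically from the structured tail of the generation.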