
CausalScore: An Automatic Reference-Free Metric for Assessing Response Relevance in Open-Domain Dialogue Systems

Tao Feng, Lizhen Qu, Xiaoxi Kang, Gholamreza Haffari

TL;DR

CausalScore introduces a reference-free metric for open-domain dialogue evaluation by quantifying causal strength between dialogue history and responses through classifier-based unconditional and conditional dependence tests. Trained on an extended CGDIALOG+ dataset (with DREAM data), the method learns to detect historically grounded relations and aggregates them into a final score that correlates more strongly with human judgments than existing metrics. The approach is validated across multiple datasets and dialogue models, with extensive ablations showing the importance of conditional dependence and annotated causal relations, and the release of CGDIALOG+ supports future research. Overall, CausalScore offers a scalable, human-aligned alternative to reference-based metrics, with potential for broader domain applicability and improved evaluation efficiency in dialogue systems.

Abstract

Automatically evaluating the quality of responses in open-domain dialogue systems is a challenging but crucial task. Current evaluation metrics often fail to align with human judgments, especially when assessing responses that are grammatically correct. To address this issue, we propose a novel metric, called CausalScore, which assesses the relevance of responses by measuring the causal strength between dialogue histories and responses. The causal strength is estimated by utilizing both unconditional dependence and conditional dependencies from the dialogue history to responses. We compare our metric with the existing competitive metrics in terms of their alignment with human judgements. Our experimental results demonstrate that CausalScore significantly surpasses existing state-of-the-art metrics by aligning better with human judgements. Additionally, we collect a new dialogue dataset CGDIALOG+ with human-annotated causal relations and a set of pairwise human judgements to facilitate the development of future automatic metrics.
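To make the aggregation idea concrete, here is a minimal sketch of how classifier-based dependence probabilities could be combined into a single relevance score. The `causal_score` function and the simple averaging rule are illustrative assumptions, not the paper's exact formula; the actual method uses neural classifiers trained on CGDIALOG+ to produce the per-utterance probabilities.

```python
# Hedged sketch: aggregating classifier-based dependence probabilities
# into one relevance score. The averaging rule below is an assumption
# for illustration; the paper's classifiers and aggregation may differ.

def causal_score(uncond_probs, cond_probs):
    """Combine per-utterance dependence probabilities into one score.

    uncond_probs: P(response depends on history turn u_i), one per turn
    cond_probs:   P(response depends on u_i | remaining history), one per turn
    """
    probs = list(uncond_probs) + list(cond_probs)
    if not probs:
        return 0.0
    return sum(probs) / len(probs)

# Example: a response strongly tied to two of three history turns
score = causal_score([0.9, 0.8, 0.2], [0.85, 0.7, 0.1])
print(round(score, 3))
```

Under this sketch, a response whose dependence probabilities are uniformly low (e.g., a generic "I don't know") would receive a score near zero, matching the intuition that irrelevant responses show weak causal ties to the history.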

Paper Structure

This paper contains 39 sections, 2 equations, 4 figures, 12 tables.

Figures (4)

  • Figure 1: An illustrative example of dialogue evaluation, where the responses are generated by a human and by different dialogue systems. Relevance scores from different metrics are shown alongside the responses. Highlighted text indicates the causes of the human response.
  • Figure 2: Annotation instruction of CGDIALOG+.
  • Figure 3: CGDIALOG+ annotation interface.
  • Figure 4: Distribution of CausalScore on three datasets with a kernel density estimate to smooth the distribution.