CausalScore: An Automatic Reference-Free Metric for Assessing Response Relevance in Open-Domain Dialogue Systems
Tao Feng, Lizhen Qu, Xiaoxi Kang, Gholamreza Haffari
TL;DR
CausalScore is a reference-free metric for open-domain dialogue evaluation that quantifies the causal strength between a dialogue history and a response via classifier-based unconditional and conditional dependence tests. Trained on the extended CGDIALOG+ dataset (which incorporates DREAM data), the classifiers learn to detect causal relations between history utterances and responses, and the resulting dependence estimates are aggregated into a final score that correlates more strongly with human judgements than existing metrics. The approach is validated across multiple datasets and dialogue models, with extensive ablations showing the importance of conditional dependence and annotated causal relations, and the release of CGDIALOG+ supports future research. Overall, CausalScore offers a scalable, human-aligned alternative to reference-based metrics, with potential for broader domain applicability and more efficient evaluation of dialogue systems.
Abstract
Automatically evaluating the quality of responses in open-domain dialogue systems is a challenging yet crucial task. Current evaluation metrics often fail to align with human judgements, especially when assessing responses that are grammatically correct. To address this issue, we propose a novel metric, CausalScore, which assesses the relevance of a response by measuring the causal strength between the dialogue history and the response. The causal strength is estimated using both the unconditional dependence and the conditional dependencies of the response on the dialogue history. We compare our metric with existing competitive metrics in terms of their alignment with human judgements. Our experimental results demonstrate that CausalScore significantly surpasses existing state-of-the-art metrics by aligning better with human judgements. Additionally, we collect a new dialogue dataset, CGDIALOG+, with human-annotated causal relations, together with a set of pairwise human judgements, to facilitate the development of future automatic metrics.
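The aggregation idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the averaging scheme, and the toy lexical-overlap "classifiers" are all hypothetical stand-ins for the dependence classifiers that the paper trains on CGDIALOG+. The sketch only shows the general shape of scoring a response by pooling per-utterance unconditional and conditional dependence estimates.

```python
# Hypothetical sketch of a CausalScore-style aggregation (not the paper's code).
# Two classifier-like functions estimate, for each history utterance u_i and a
# response r, a dependence probability: one unconditional, one conditioned on
# the remaining history. Their outputs are averaged into a single score.

from typing import Callable, List

def causal_score(
    history: List[str],
    response: str,
    p_uncond: Callable[[str, str], float],          # estimate of P(dep | u_i, r)
    p_cond: Callable[[str, str, List[str]], float], # estimate of P(dep | u_i, r, rest)
) -> float:
    """Average dependence estimates over all utterances in the history."""
    scores = []
    for i, utterance in enumerate(history):
        rest = history[:i] + history[i + 1:]
        scores.append(p_uncond(utterance, response))
        scores.append(p_cond(utterance, response, rest))
    return sum(scores) / len(scores) if scores else 0.0

# Toy stand-ins: lexical overlap as a crude proxy for a trained classifier.
def toy_uncond(u: str, r: str) -> float:
    u_set, r_set = set(u.lower().split()), set(r.lower().split())
    return len(u_set & r_set) / max(len(r_set), 1)

def toy_cond(u: str, r: str, rest: List[str]) -> float:
    return toy_uncond(u, r)  # stub: ignores the conditioning set

history = ["where did you go last weekend ?", "i went hiking in the mountains"]
relevant = "hiking in the mountains sounds fun"
irrelevant = "i like pizza"
assert causal_score(history, relevant, toy_uncond, toy_cond) > \
       causal_score(history, irrelevant, toy_uncond, toy_cond)
```

In the paper the dependence estimates come from classifiers trained on human-annotated causal relations rather than word overlap; the sketch only conveys how per-utterance dependence signals could be pooled into one reference-free relevance score.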
