Towards a Formal Characterization of User Simulation Objectives in Conversational Information Access
Nolwenn Bernard, Krisztian Balog
TL;DR
The paper tackles the problem of defining objective criteria for user simulators in conversational information access, distinguishing between training and evaluation uses. It introduces a formal framework with definitions of training and evaluation objectives and proposes similarity-based metrics for assessing simulators. An empirical study across multiple conversational agents shows that optimizing for the training objective does not guarantee improvements on the evaluation objective, highlighting the need for use-specific simulators. The work provides a foundation for designing simulators tailored to their intended purpose and offers directions for more nuanced evaluation metrics.
Abstract
User simulation is a promising approach for automatically training and evaluating conversational information access agents, enabling the generation of synthetic dialogues and facilitating reproducible experiments at scale. However, the objectives of user simulation for the different uses remain loosely defined, hindering the development of effective simulators. In this work, we formally characterize the distinct objectives for user simulators: training aims to maximize behavioral similarity to real users, while evaluation focuses on the accurate prediction of real-world conversational agent performance. Through an empirical study, we demonstrate that optimizing for one objective does not necessarily lead to improved performance on the other. This finding underscores the need for tailored design considerations depending on the intended use of the simulator. By establishing clear objectives and proposing concrete measures to evaluate user simulators against those objectives, we pave the way for the development of simulators that are specifically tailored to their intended use, ultimately leading to more effective conversational agents.
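The two objectives described above can be made concrete with toy measures. The sketch below is illustrative only: the specific choices of Jensen-Shannon divergence (for behavioral similarity to real users) and Kendall's tau (for how faithfully a simulator preserves the real-world ranking of agents) are assumptions for the example, not necessarily the measures proposed in the paper, and all distributions and scores are made-up data.

```python
import math
from itertools import combinations

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions (dicts)."""
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in set(p) | set(q)}
    def kl(a, b):
        return sum(a[k] * math.log2(a[k] / b[k]) for k in a if a.get(k, 0.0) > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def kendall_tau(x, y):
    """Kendall rank correlation between two equal-length score lists."""
    pairs = list(combinations(range(len(x)), 2))
    concordant = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) > 0)
    discordant = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) < 0)
    return (concordant - discordant) / len(pairs)

# Training objective: behavioral similarity. Compare the action distribution
# of simulated users against real users (hypothetical action labels/values).
real_actions = {"query": 0.5, "clarify": 0.3, "reject": 0.2}
sim_actions = {"query": 0.6, "clarify": 0.25, "reject": 0.15}
behavioral_gap = js_divergence(real_actions, sim_actions)  # lower is better

# Evaluation objective: predicting real-world agent performance. Check whether
# the simulator ranks a set of agents the same way real users do.
real_scores = [0.72, 0.55, 0.63, 0.48]  # agents evaluated with real users
sim_scores = [0.70, 0.58, 0.50, 0.45]   # same agents, simulated users
ranking_agreement = kendall_tau(real_scores, sim_scores)  # higher is better
```

A simulator can score well on one measure and poorly on the other, which is precisely the dissociation the paper's empirical study demonstrates.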
