
Capturing Minds, Not Just Words: Enhancing Role-Playing Language Models with Personality-Indicative Data

Yiting Ran, Xintao Wang, Rui Xu, Xinfeng Yuan, Jiaqing Liang, Deqing Yang, Yanghua Xiao

TL;DR

This paper proposes to enhance role-playing language models (RPLMs) with personality-indicative data: it leverages questions from psychological scales and distills advanced RPAs to generate dialogues that capture the minds of characters.

Abstract

Role-playing agents (RPAs) have been a popular application area for large language models (LLMs), attracting significant interest from both industry and academia. While existing RPAs portray characters' knowledge and tones well, they face challenges in capturing their minds, especially for small role-playing language models (RPLMs). In this paper, we propose to enhance RPLMs via personality-indicative data. Specifically, we leverage questions from psychological scales and distill advanced RPAs to generate dialogues that grasp the minds of characters. Experimental results validate that RPLMs trained with our dataset exhibit advanced role-playing capabilities for both general and personality-related evaluations. Code and data are available at \href{https://github.com/alienet1109/RolePersonality}{this URL}.
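The data-construction step described in the abstract (posing psychological-scale questions to an advanced RPA and collecting in-character answers) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the `advanced_rpa` stub stands in for a strong role-playing agent (e.g. a large LLM behind an API), and the scale items and character name are hypothetical placeholders.

```python
# Hypothetical sketch: turn psychological-scale items into in-character
# interview questions, then collect a character's answers from a strong
# role-playing agent (stubbed here) as personality-indicative dialogue data.

def build_prompt(character: str, scale_item: str) -> str:
    """Wrap a raw scale item as an in-character interview question."""
    return (
        f"You are role-playing as {character}. "
        f"Stay in character and answer: {scale_item}"
    )

def advanced_rpa(prompt: str) -> str:
    # Stub standing in for an advanced RPA; a real pipeline would call
    # an LLM API here.
    return f"[in-character answer to: {prompt}]"

def collect_dialogues(character: str, scale_items: list[str]) -> list[dict]:
    """Build (question, answer) pairs usable as fine-tuning data for RPLMs."""
    dialogues = []
    for item in scale_items:
        prompt = build_prompt(character, item)
        dialogues.append({"question": item, "answer": advanced_rpa(prompt)})
    return dialogues

if __name__ == "__main__":
    # BFI-style example items (placeholders, not the paper's exact scales).
    items = [
        "Do you see yourself as someone who is talkative?",
        "Do you tend to find fault with others?",
    ]
    data = collect_dialogues("Hermione Granger", items)
    print(len(data))
```

Each resulting question-answer pair would then serve as a training dialogue so the fine-tuned RPLM learns the character's personality, not just their tone.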

Paper Structure

This paper contains 46 sections, 1 figure, 8 tables.

Figures (1)

  • Figure 1: The framework of building and utilizing RolePersonality. First, we obtain RolePersonality by distilling advanced RPAs using scale questions. Then, we train RPLMs on RolePersonality to enhance their ability to capture characters' minds.