On the Transformations across Reward Model, Parameter Update, and In-Context Prompt

Deng Cai, Huayang Li, Tingchen Fu, Siheng Li, Weiwen Xu, Shuaiyi Li, Bowen Cao, Zhisong Zhang, Xinting Huang, Leyang Cui, Yan Wang, Lemao Liu, Taro Watanabe, Shuming Shi

TL;DR

This paper demonstrates the interchangeability of three popular and distinct adaptation tools: parameter updating, reward modeling, and in-context prompting. This interchangeability establishes a triangular framework with six transformation directions, each of which facilitates a variety of applications.

Abstract

Despite the general capabilities of pre-trained large language models (LLMs), they still need further adaptation to better serve practical applications. In this paper, we demonstrate the interchangeability of three popular and distinct adaptation tools: parameter updating, reward modeling, and in-context prompting. This interchangeability establishes a triangular framework with six transformation directions, each of which facilitates a variety of applications. Our work offers a holistic view that unifies numerous existing studies and suggests potential research directions. We envision our work as a useful roadmap for future research on LLMs.

Paper Structure

This paper contains 61 sections, 16 equations, 1 figure, and 2 tables.

Figures (1)

  • Figure 1: The six transformations and their applications discussed in this paper.