Alignment For Performance Improvement in Conversation Bots

Raghav Garg, Kapil Sharma, Shrey Singla

TL;DR

It is shown that alignment methods achieve superior adherence to predefined guidelines, or 'guardrails', in conversational agents compared to instruction fine-tuning alone.

Abstract

This paper shows that alignment methods can achieve superior adherence to predefined guidelines, or 'guardrails', in conversational agents (also known as bots) compared to instruction fine-tuning alone. It examines traditional training approaches such as instruction fine-tuning and recent advances in direct alignment methods such as Identity Preference Optimization (IPO) and Kahneman-Tversky Optimization (KTO). The effectiveness of alignment techniques both pre- and post-instruction tuning is highlighted, illustrating their potential to optimize conversational bots in domains that require strict adherence to specified rules, such as customer care.
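
For concreteness: the IPO objective named above is published (Azar et al., 2023) and reduces to a squared regression that pushes the policy/reference log-ratio margin between the chosen and rejected response toward 1/(2*beta). The PyTorch sketch below is a minimal illustration of that formulation, not the authors' implementation; the function name, argument names, and default beta are assumptions made for the example. KTO, by contrast, learns from unpaired binary (desirable/undesirable) feedback rather than preference pairs.

    import torch

    def ipo_loss(policy_chosen_logps: torch.Tensor,
                 policy_rejected_logps: torch.Tensor,
                 ref_chosen_logps: torch.Tensor,
                 ref_rejected_logps: torch.Tensor,
                 beta: float = 0.1) -> torch.Tensor:
        """Minimal IPO loss sketch: regress the preference margin toward
        1/(2*beta); the squared error keeps the optimum bounded, unlike
        the sigmoid-based DPO loss."""
        # Policy-vs-reference log-ratios for chosen and rejected responses.
        chosen_ratio = policy_chosen_logps - ref_chosen_logps
        rejected_ratio = policy_rejected_logps - ref_rejected_logps
        margin = chosen_ratio - rejected_ratio
        # Squared regression toward the target margin 1 / (2 * beta).
        return ((margin - 1.0 / (2.0 * beta)) ** 2).mean()

    # Dummy per-sequence log-probabilities for a batch of two pairs:
    loss = ipo_loss(torch.tensor([-10.0, -8.0]), torch.tensor([-12.0, -9.5]),
                    torch.tensor([-11.0, -8.5]), torch.tensor([-11.5, -9.0]))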

Paper Structure

This paper contains 20 sections, 4 equations, 4 figures, 1 table, and 1 algorithm.

Figures (4)

  • Figure 1: (a) shows a sample of the guardrails obtained after Stage 1 of the data annotation process. After Stage 2 and sub-sampling at agent turns, we also obtain prompts as in (b), a chosen response as in (c), and a rejected response as in (d); an illustrative record with these fields is sketched after this list.
  • Figure 2: Different experiment flows: (a) and (b) refer to Flow 1 and (c) refers to Flow 2.
  • Figure 3: Win rates of different models in Experiment Flow 2. (a) shows the performance gain obtained from doing alignment after the SFT stage, whereas (b) shows that IPO performed better in our experiments.
  • Figure 4: Win rates of different models in Experiment Flow 1. (a) and (d) show trends similar to those observed in Flow 2; (b) and (c) additionally show that alignment outperforms SFT alone.
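
The panels of Figure 1 correspond to the fields of a standard preference-tuning record: the guardrails extracted in Stage 1, plus a prompt, a chosen response, and a rejected response. The record below is a hypothetical illustration of that structure; the field names and customer-care content are invented for the example, not drawn from the paper's dataset.

    # Hypothetical structure of one preference sample produced by the
    # two-stage annotation pipeline of Figure 1; all text is a placeholder.
    sample = {
        "guardrails": ["Do not promise refunds outside the stated policy."],
        "prompt": "Customer: My order arrived damaged. Can I get a refund?",
        "chosen": ("I'm sorry about the damage. Per our policy, I can offer a "
                   "replacement or a refund within 30 days of delivery."),
        "rejected": "Sure, I'll refund you double the amount right away!",
    }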