
A Revealed Preference Framework for AI Alignment

Elchin Suleymanov

Abstract

Human decision makers increasingly delegate choices to AI agents, raising a natural question: does the AI implement the human principal's preferences or pursue its own? To study this question using revealed preference techniques, I introduce the Luce Alignment Model, where the AI's choices are a mixture of two Luce rules, one reflecting the human's preferences and the other the AI's. I show that the AI's alignment (similarity of human and AI preferences) can be generically identified in two settings: the laboratory setting, where both human and AI choices are observed, and the field setting, where only AI choices are observed.
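The excerpt does not spell out the functional form of the Luce Alignment Model. Under the standard Luce specification, with strictly positive utility indices $u$ (human) and $v$ (AI) on a menu $A$ and a mixing weight $\alpha$ (the convention that $\alpha$ weights the human component is an assumption here, not stated in the excerpt), the model would read:

$$\rho^{AI}(a, A) \;=\; \alpha\,\frac{u(a)}{\sum_{b \in A} u(b)} \;+\; (1-\alpha)\,\frac{v(a)}{\sum_{b \in A} v(b)}, \qquad a \in A.$$

Each Luce rule on its own satisfies IIA (the odds ratio between two alternatives does not depend on the menu), so any IIA violation in $\rho^{AI}$ must come from mixing two non-proportional utilities; Proposition 1 below formalizes exactly this.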


Paper Structure

This paper contains 8 sections, 9 theorems, 60 equations, and 3 tables.

Key Result

Proposition 1

Let $\rho^{AI}$ be consistent with LAM. Then $\rho^{AI}$ satisfies IIA if and only if $\alpha \in \{0, 1\}$ or $v = \lambda u$ for some $\lambda > 0$.
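Proposition 1 can be checked numerically. The following is a minimal Python sketch, assuming the mixture form displayed above; the names (luce, lam) and all weight values are illustrative, not from the paper. Under a Luce rule, IIA means the odds ratio $\rho(a)/\rho(b)$ is menu-independent; the mixture breaks this exactly when $\alpha$ is interior and $v$ is not a positive multiple of $u$.

def luce(w, menu):
    # Luce rule: choose a from menu with probability proportional to its weight w[a].
    s = sum(w[a] for a in menu)
    return {a: w[a] / s for a in menu}

def lam(u, v, alpha, menu):
    # Luce Alignment Model sketch: alpha-mixture of the human Luce rule (u)
    # and the AI Luce rule (v); the weight convention is an assumption.
    ph, pa = luce(u, menu), luce(v, menu)
    return {a: alpha * ph[a] + (1 - alpha) * pa[a] for a in menu}

# Illustrative weights (not from the paper); v is not a positive multiple of u.
u = {"a": 1.0, "b": 2.0, "c": 4.0}
v = {"a": 4.0, "b": 1.0, "c": 1.0}

for alpha in (0.0, 0.5, 1.0):
    two = lam(u, v, alpha, ["a", "b"])
    three = lam(u, v, alpha, ["a", "b", "c"])
    print(alpha, round(two["a"] / two["b"], 3), round(three["a"] / three["b"], 3))

# Output:
#   0.0  4.0    4.0      (pure AI Luce rule: odds ratio is menu-independent, IIA holds)
#   0.5  1.308  1.789    (interior mixture with v != lambda*u: odds shift with the menu, IIA fails)
#   1.0  0.5    0.5      (pure human Luce rule: IIA holds)

Adding the third alternative $c$ changes the $a{:}b$ odds only in the mixed case, matching the proposition: IIA holds precisely when the choice rule collapses to a single Luce rule, either because $\alpha \in \{0, 1\}$ or because $v = \lambda u$.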

Theorems & Definitions (22)

  • Definition 1: Luce Alignment Model
  • Proposition 1: IIA Violation
  • Proof
  • Definition 2: Instability Measures
  • Remark 1
  • Proposition 2: Identification of $\alpha$
  • Proof
  • Corollary 1
  • Theorem 1: Laboratory Identification
  • Proof
  • ...and 12 more