
AI Empathy Erodes Cognitive Autonomy in Younger Users

Junfeng Jiao, Abhejay Murali, Saleh Afroogh

Abstract

Affective alignment in generative AI poses a systemic risk to the developmental autonomy of younger users. Although emotional mirroring is commonly treated as a hallmark of advanced human-machine interaction, it can also manifest as affective sycophancy: the reinforcement of a user's immediate emotional state. By lending an air of objectivity to transient anxieties, these systems diminish the cognitive friction necessary for independent emotion regulation and critical thought. Reward models trained via reinforcement learning from human feedback (RLHF) can compound this risk by encoding adult-centric definitions of helpfulness, unintentionally promoting emotional dependency in younger users rather than fostering cognitive reappraisal. This paper exposes the misalignment between adult-labeled reward signals and the developmental needs of younger users and proposes stoic architectures that emphasize functional neutrality to preserve user autonomy.

Paper Structure

This paper comprises 25 sections, 1 equation, and 2 figures.

Figures (2)

  • Figure 1: Data derived from gerlich2025ai ($N=666$). A strong negative correlation ($r=-0.68$) was observed between AI usage and critical thinking scores ($p < 0.001$).
  • Figure 2: Overview of the Stoic Architecture. The top panel illustrates the Inference Loop, where a Valence Classifier routes high-arousal inputs to a stoic system prompt. The bottom panel details the Training Objective, where the policy is optimized to maximize a developmentally grounded RLAIF score while simultaneously minimizing affective mirroring via a sycophancy penalty.
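The architecture described in the Figure 2 caption can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors' implementation: the cue-word valence classifier, the prompt strings, the threshold, and the penalty weight `lam` are all illustrative assumptions; the only elements taken from the caption are the overall shape (a valence classifier routing high-arousal inputs to a stoic system prompt, and a training objective of RLAIF score minus a sycophancy penalty).

```python
# Hypothetical sketch of the Stoic Architecture from Figure 2.
# All names, thresholds, and weights below are illustrative assumptions.

STOIC_PROMPT = "Respond with functional neutrality; do not mirror the user's affect."
DEFAULT_PROMPT = "Respond helpfully."

def valence_classifier(text: str) -> float:
    """Toy arousal score: fraction of high-arousal cue words (stand-in for a real classifier)."""
    cues = {"panic", "terrified", "furious", "desperate", "hate"}
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in cues for w in words) / max(len(words), 1)

def route(user_input: str, threshold: float = 0.05) -> str:
    """Inference loop: route high-arousal inputs to the stoic system prompt."""
    if valence_classifier(user_input) > threshold:
        return STOIC_PROMPT
    return DEFAULT_PROMPT

def training_reward(rlaif_score: float, affective_mirroring: float, lam: float = 0.5) -> float:
    """Training objective: maximize a developmentally grounded RLAIF score
    while penalizing affective mirroring (the sycophancy penalty)."""
    return rlaif_score - lam * affective_mirroring
```

For example, `route("I am terrified and in a panic about my exams")` selects the stoic prompt, while a neutral factual query falls through to the default; during training, a response that scores well on the RLAIF rubric but heavily mirrors the user's affect receives a correspondingly reduced reward.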