Beyond Benchmarks: How Users Evaluate AI Chat Assistants

Moiz Sadiq Awan, Muhammad Haris Noor, Muhammad Salman Munaf

Abstract

Automated benchmarks dominate the evaluation of large language models, yet no systematic study has compared user satisfaction, adoption motivations, and frustrations across competing platforms using a consistent instrument. We address this gap with a cross-platform survey of 388 active AI chat users, comparing satisfaction, adoption drivers, use-case performance, and qualitative frustrations across seven major platforms: ChatGPT, Claude, Gemini, DeepSeek, Grok, Mistral, and Llama. Three broad findings emerge. First, the top three platforms (Claude, ChatGPT, and DeepSeek) receive statistically indistinguishable satisfaction ratings despite vast differences in funding, team size, and benchmark performance. Second, users treat these tools as interchangeable utilities rather than sticky ecosystems: over 80% use two or more platforms, and switching costs are negligible. Third, each platform attracts users for different reasons: ChatGPT for its interface, Claude for answer quality, DeepSeek through word of mouth, and Grok for its content policy, suggesting that specialization, not generalist dominance, sustains competition. Hallucination and content filtering remain the most common frustrations across all platforms. These findings offer an early empirical baseline for a market that benchmarks alone cannot characterize, and point toward competitive plurality rather than winner-take-all consolidation among engaged users.
