Vision-Language Models vs Human: Perceptual Image Quality Assessment

Imran Mehmood, Imad Ali Shah, Ming Ronnier Luo, Brian Deegan

Abstract

Psychophysical experiments remain the most reliable approach to perceptual image quality assessment (IQA), yet their cost and limited scalability motivate automated alternatives. We investigate whether Vision-Language Models (VLMs) can approximate human perceptual judgments across three image quality scales: contrast, colorfulness, and overall preference. Six VLMs, four proprietary and two open-weight, are benchmarked against psychophysical data, providing a systematic benchmark of VLMs for perceptual IQA. The results reveal strong attribute-dependent variability: models with high human alignment for colorfulness (ρ up to 0.93) underperform on contrast, and vice versa. Attribute weighting analysis further shows that most VLMs assign higher weight to colorfulness than to contrast when evaluating overall preference, in line with the psychophysical data. Intra-model consistency analysis reveals a counterintuitive trade-off: the most self-consistent models are not necessarily the most human-aligned, suggesting that response variability reflects sensitivity to scene-dependent perceptual cues. Furthermore, human–VLM agreement increases with perceptual separability, indicating that VLMs are more reliable when stimulus differences are clearly expressed.
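
The alignment values quoted above (ρ up to 0.93 for colorfulness) are Spearman rank correlations between standardized human and VLM scores. Assuming the standard tie-free formulation, the correlation for a given attribute is

$$\rho = 1 - \frac{6 \sum_{i=1}^{n} d_i^{2}}{n\,(n^{2} - 1)},$$

where $d_i$ is the difference between the human and VLM ranks assigned to stimulus $i$, and $n$ is the number of stimuli compared.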

Paper Structure

This paper contains 21 sections, 6 equations, 3 figures, 3 tables.

Figures (3)

  • Figure 1: Workflow for comparing perceptual IQA between human observers and VLMs. (a) Evaluation acquisition: Human psychophysical data are obtained through pairwise comparisons, while VLM assessments are collected via prompt-based image comparisons using an identical query. (b) Data processing: Responses from both sources undergo repeatability filtering, verification, and score standardization to produce a unified evaluation dataset. (c) Evaluation results: Standardized human and VLM scores are compared to quantify model–human alignment, revealing attribute-dependent performance and varying agreement with the psychophysical baseline.
  • Figure 2: Attribute weighting for overall preference. The x-axis represents the contrast weight ($\alpha$) and the y-axis represents the colorfulness weight ($\beta$).
  • Figure 3: Mean scene agreement (Spearman) as a function of scene difficulty for (a) colorfulness, (b) contrast, and (c) overall preference.
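
As a reading aid for Figure 2, a plausible form of the weighting model behind the contrast weight $\alpha$ and colorfulness weight $\beta$ is a linear combination of the two attribute scores; the paper's exact formulation may differ, for example in normalization or in constraints such as $\alpha + \beta = 1$:

$$\hat{Q}_{\text{preference}} = \alpha\, Q_{\text{contrast}} + \beta\, Q_{\text{colorfulness}}$$

Under this reading, a point lying above the diagonal $\beta = \alpha$ in Figure 2 corresponds to an observer (model or human baseline) that weights colorfulness more heavily than contrast in its overall preference, matching the trend reported in the abstract.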