
Discovering Failure Modes in Vision-Language Models using RL

Kanishk Jain, Qian Yang, Shravan Nayak, Parisa Kordjamshidi, Nishanth Anand, Aishwarya Agrawal

Abstract

Vision-language Models (VLMs), despite achieving strong performance on multimodal benchmarks, often misinterpret straightforward visual concepts that humans identify effortlessly, such as counting, spatial reasoning, and viewpoint understanding. Previous studies identified these weaknesses manually and found that they often stem from deficits in specific skills. However, such manual efforts are costly, unscalable, and subject to human bias, which tends to overlook subtle details in favor of salient objects, resulting in an incomplete understanding of a model's vulnerabilities. To address these limitations, we propose a Reinforcement Learning (RL)-based framework to automatically discover the failure modes, or blind spots, of any candidate VLM on a given data distribution without human intervention. Our framework trains a questioner agent that adaptively generates queries based on the candidate VLM's responses to elicit incorrect answers. Our approach increases question complexity by focusing on fine-grained visual details and distinct skill compositions as training progresses, consequently identifying 36 novel failure modes of VLMs. We demonstrate the broad applicability of our framework by showcasing its generalizability across various model combinations.

Paper Structure

This paper contains 18 sections, 5 equations, 7 figures, and 5 tables.

Figures (7)

  • Figure 1: We highlight the superiority of our RL-based framework for question generation over baselines. Our method: (1) targets non-salient objects to challenge fine-grained visual understanding; (2) uncovers a unique distribution of skills unaddressed by static methods; and (3) generates compositionally complex queries that stress-test the fine-grained reasoning of state-of-the-art VLMs.
  • Figure 2: Our approach consists of three models: a Questioner which generates questions, an Answerer which generates answers, and a Verifier that provides the reward signal for training.
  • Figure 3: The failure-taxonomy pipeline has four stages: (1) identification of primitives, (2) topic modelling, (3) skill extraction, and (4) meta-skill identification.
  • Figure 4: Distribution of Shared and Exclusive skills per Method
  • Figure 5: Analysis of skills and question complexity across methods
  • ...and 2 more figures
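The three-model loop from Figure 2 (Questioner, Answerer, Verifier) can be sketched in miniature. This is a hypothetical illustration, not the authors' implementation: all three models are stubbed as plain functions, and the reward convention (1.0 when the candidate VLM answers incorrectly) is an assumption consistent with the stated goal of eliciting incorrect answers.

```python
# Hypothetical sketch of the Questioner/Answerer/Verifier loop (Figure 2).
# Real versions of these would be VLM/LLM calls; here they are stubs so the
# reward logic is runnable.

def questioner(image_id: str) -> str:
    # Stub: a trained RL policy would condition on the image and the
    # candidate VLM's past responses to generate harder questions.
    return f"How many chairs are partially occluded in image {image_id}?"

def answerer(image_id: str, question: str) -> str:
    # Stub for the candidate VLM whose failure modes are being probed.
    return "two"

def verifier(image_id: str, question: str, answer: str) -> bool:
    # Stub: judges whether the Answerer's response is correct.
    # (Here the ground-truth answer is assumed to be "three".)
    return answer == "three"

def questioner_reward(image_id: str) -> float:
    """Reward signal for the Questioner: 1.0 if it elicited a wrong answer."""
    question = questioner(image_id)
    answer = answerer(image_id, question)
    return 0.0 if verifier(image_id, question, answer) else 1.0
```

In this toy run the Answerer responds "two" while the Verifier expects "three", so the Questioner receives a reward of 1.0, mirroring how the framework rewards questions that expose candidate-VLM failures.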