Table of Contents
Fetching ...

MOSSBench: Is Your Multimodal Language Model Oversensitive to Safe Queries?

Xirui Li, Hengguang Zhou, Ruochen Wang, Tianyi Zhou, Minhao Cheng, Cho-Jui Hsieh

TL;DR

The paper examines oversensitivity in Multimodal Large Language Models, where benign queries are rejected due to certain visual cues. It introduces MOSSBench, a 300-sample benchmark generated via a hybrid LLM-human workflow to probe three stimulus types: Exaggerated Risk, Negated Harm, and Counterintuitive Interpretation. An empirical study across 20 MLLMs shows pervasive oversensitivity, with safer models and web interfaces often exhibiting stronger refusals, and system prompts can significantly modulate this behavior. The work highlights a need for nuanced safety alignment that preserves helpfulness while avoiding unnecessary refusals, guiding future refinements in MLLMs’ safety mechanisms and evaluation practices.

Abstract

Humans are prone to cognitive distortions -- biased thinking patterns that lead to exaggerated responses to specific stimuli, albeit in very different contexts. This paper demonstrates that advanced Multimodal Large Language Models (MLLMs) exhibit similar tendencies. While these models are designed to respond queries under safety mechanism, they sometimes reject harmless queries in the presence of certain visual stimuli, disregarding the benign nature of their contexts. As the initial step in investigating this behavior, we identify three types of stimuli that trigger the oversensitivity of existing MLLMs: Exaggerated Risk, Negated Harm, and Counterintuitive Interpretation. To systematically evaluate MLLMs' oversensitivity to these stimuli, we propose the Multimodal OverSenSitivity Benchmark (MOSSBench). This toolkit consists of 300 manually collected benign multimodal queries, cross-verified by third-party reviewers (AMT). Empirical studies using MOSSBench on 20 MLLMs reveal several insights: (1). Oversensitivity is prevalent among SOTA MLLMs, with refusal rates reaching up to 76% for harmless queries. (2). Safer models are more oversensitive: increasing safety may inadvertently raise caution and conservatism in the model's responses. (3). Different types of stimuli tend to cause errors at specific stages -- perception, intent reasoning, and safety judgement -- in the response process of MLLMs. These findings highlight the need for refined safety mechanisms that balance caution with contextually appropriate responses, improving the reliability of MLLMs in real-world applications. We make our project available at https://turningpoint-ai.github.io/MOSSBench/.

MOSSBench: Is Your Multimodal Language Model Oversensitive to Safe Queries?

TL;DR

The paper examines oversensitivity in Multimodal Large Language Models, where benign queries are rejected due to certain visual cues. It introduces MOSSBench, a 300-sample benchmark generated via a hybrid LLM-human workflow to probe three stimulus types: Exaggerated Risk, Negated Harm, and Counterintuitive Interpretation. An empirical study across 20 MLLMs shows pervasive oversensitivity, with safer models and web interfaces often exhibiting stronger refusals, and system prompts can significantly modulate this behavior. The work highlights a need for nuanced safety alignment that preserves helpfulness while avoiding unnecessary refusals, guiding future refinements in MLLMs’ safety mechanisms and evaluation practices.

Abstract

Humans are prone to cognitive distortions -- biased thinking patterns that lead to exaggerated responses to specific stimuli, albeit in very different contexts. This paper demonstrates that advanced Multimodal Large Language Models (MLLMs) exhibit similar tendencies. While these models are designed to respond queries under safety mechanism, they sometimes reject harmless queries in the presence of certain visual stimuli, disregarding the benign nature of their contexts. As the initial step in investigating this behavior, we identify three types of stimuli that trigger the oversensitivity of existing MLLMs: Exaggerated Risk, Negated Harm, and Counterintuitive Interpretation. To systematically evaluate MLLMs' oversensitivity to these stimuli, we propose the Multimodal OverSenSitivity Benchmark (MOSSBench). This toolkit consists of 300 manually collected benign multimodal queries, cross-verified by third-party reviewers (AMT). Empirical studies using MOSSBench on 20 MLLMs reveal several insights: (1). Oversensitivity is prevalent among SOTA MLLMs, with refusal rates reaching up to 76% for harmless queries. (2). Safer models are more oversensitive: increasing safety may inadvertently raise caution and conservatism in the model's responses. (3). Different types of stimuli tend to cause errors at specific stages -- perception, intent reasoning, and safety judgement -- in the response process of MLLMs. These findings highlight the need for refined safety mechanisms that balance caution with contextually appropriate responses, improving the reliability of MLLMs in real-world applications. We make our project available at https://turningpoint-ai.github.io/MOSSBench/.

Paper Structure

This paper contains 53 sections, 41 figures, 6 tables.

Figures (41)

  • Figure 1: Overview of MOSSBench. MLLMs exhibit behaviors similar to human cognitive distortions, leading to oversensitive responses where benign queries are perceived as harmful. We discover that oversensitivity prevails among existing MLLMs.
  • Figure 1: Key statistics of MOSSBench. Mossbench consists of 300 samples with diverse oversensitivity stimuli and relevance to daily applications.
  • Figure 2: Examples of three visual stimuli of oversensitivity. (Left) An example of Exaggerated Risk. The presence of a seemly harmful dinosaur sculpture is irrelevant to the request but could trigger refusal. (Middle) An example of Negated Harm. Explicit negation of harm could be overlooked or misinterpreted by models. (Right) An example of Counterintuitive Interpretation. The model has a propensity to assume unlikely harmful interpretation (put the girl in cage) without considering more reasonable interpretation (put the parrot in cage).
  • Figure 3: Distribution of MOSSBench based on the harm protocol from Harmbench mazeika2024harmbench, showing potential misinterpretation of MLLMs and associated harm types.
  • Figure 4: Oversensitivity level versus Safety level of MLLMs. The levels are decided by their refusal rate of samples. The higher models refuse harmful samples, the higher their safety levels are. The open-source models are marked in red, while the proprietary models are marked in blue.
  • ...and 36 more figures