SHOE: Semantic HOI Open-Vocabulary Evaluation Metric

Maja Noack, Qinqian Lei, Taipeng Tian, Bihan Dong, Robby T. Tan, Yixin Chen, John Young, Saijun Zhang, Bo Wang

Abstract

Open-vocabulary human-object interaction (HOI) detection is a step towards building scalable systems that generalize to unseen interactions in real-world scenarios and support grounded multimodal systems that reason about human-object relationships. However, standard evaluation metrics, such as mean Average Precision (mAP), treat HOI classes as discrete categorical labels and fail to credit semantically valid but lexically different predictions (e.g., "lean on couch" vs. "sit on couch"), limiting their applicability for evaluating open-vocabulary predictions that go beyond any predefined set of HOI labels. We introduce SHOE (Semantic HOI Open-Vocabulary Evaluation), a new evaluation framework that incorporates semantic similarity between predicted and ground-truth HOI labels. SHOE decomposes each HOI prediction into its verb and object components, estimates their semantic similarity using the average of multiple large language models (LLMs), and combines them into a similarity score to evaluate alignment beyond exact string match. This enables a flexible and scalable evaluation of both existing HOI detection methods and open-ended generative models using standard benchmarks such as HICO-DET. Experimental results show that SHOE scores align more closely with human judgments than existing metrics, including LLM-based and embedding-based baselines, achieving an agreement of 85.73% with the average human ratings. Our work underscores the need for semantically grounded HOI evaluation that better mirrors human understanding of interactions. We will release our evaluation metric to the public to facilitate future research.
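The scoring idea described above (split an HOI label into verb and object, average similarity ratings from several LLMs per component, then combine the components into one score) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the [0, 1] rating scale, and the product combination rule are assumptions for exposition.

```python
# Hypothetical sketch of SHOE-style scoring (illustrative, not the paper's code).
# Assumes per-component similarity ratings in [0, 1], one rating per LLM.

def average_llm_ratings(ratings):
    """Average similarity ratings from multiple LLMs for one component pair."""
    return sum(ratings) / len(ratings)

def shoe_similarity(verb_ratings, obj_ratings):
    """Combine verb and object similarities into a single HOI similarity score.

    The product rule used here is an assumption; it gives full credit only
    when both the verb and the object match semantically.
    """
    verb_sim = average_llm_ratings(verb_ratings)
    obj_sim = average_llm_ratings(obj_ratings)
    return verb_sim * obj_sim

# Example: "lean on couch" vs. "sit on couch" -- identical object, related verbs,
# so the pair receives partial rather than zero credit.
score = shoe_similarity(verb_ratings=[0.7, 0.8, 0.75], obj_ratings=[1.0, 1.0, 1.0])
```

Under an exact-match metric such as mAP this pair would score zero; a soft score like the one above is what lets SHOE count it as a partial true positive.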

Paper Structure

This paper contains 33 sections, 18 equations, 12 figures, 6 tables.

Figures (12)

  • Figure 1: (a) Standard mAP metric considers all mismatches as false positives, even when predictions are semantically similar to ground truths. (b) Our SHOE metric assigns soft credit to such cases based on similarity scores, resulting in partial true positives.
  • Figure 1: Example of the user study interface. Annotators are shown two HOI interactions along with their WordNet glosses and asked to rate their semantic similarity on a 5-point scale.
  • Figure 2: SHOE Framework Overview. SHOE evaluates closed- and open-vocabulary HOI predictions by splitting each prediction into its verb and object, mapping them to WordNet synsets, and computing LLM-agreement-based pairwise similarity with ground-truth interactions. Predictions are matched to ground truths by highest pair similarity, and SHOE mAP is calculated when confidence scores are available.
  • Figure 2: Per-annotator distribution of HOI similarity scores ranging from 0 (no similarity) to 4 (high similarity), showing individual rating tendencies across the annotation set.
  • Figure 3: Pearson correlation between LLMs for verb (upper triangle) and object (lower triangle) similarity ratings.
  • ...and 7 more figures