Composition Vision-Language Understanding via Segment and Depth Anything Model
Mingxiao Huo, Pengliang Ji, Haotian Lin, Junchen Liu, Yixiao Wang, Yijun Chen
TL;DR
The paper tackles the challenge of deepening vision-language understanding in real-world scenes by bridging segmentation, depth estimation, and language reasoning. It introduces a modular pipeline that fuses the Segment Anything Model (SAM) for instance-level segmentation, the Depth Anything Model (DAM) for per-instance depth estimation, and GPT-4V for language-based reasoning, forming a symbolic knowledge layer for zero-shot tasks. Key contributions include: (i) symbolic knowledge gathering that combines neural segmentation cues with depth to produce intrinsic and extrinsic instance information, (ii) a discriminative, pairwise composition reasoning module that explicitly models $8$ fundamental spatial relations and $6$ complex interactions to capture rich scene composition, and (iii) a Visual Language Answering boosting strategy that uses a two-branch architecture to improve zero-shot VQA. The approach demonstrates improved multimodal understanding on in-the-wild images, enables new dataset generation, and supports robust visual reasoning for downstream applications and future research in open-set, multimodal perception.
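As a rough illustration of the symbolic knowledge gathering and pairwise composition reasoning described above, the sketch below shows how per-instance masks (e.g., from SAM) and a dense depth map (e.g., from DAM) could be fused into instance records and coarse pairwise spatial relations. Model loading is omitted; the function names, relation vocabulary, thresholds, and depth convention are illustrative assumptions, not the released library's API.

```python
# Illustrative sketch (not the released API): fuse instance masks and a depth
# map into symbolic per-instance records, then derive pairwise spatial relations.
import numpy as np

def gather_symbolic_knowledge(masks, labels, depth_map):
    """masks: list of HxW boolean arrays (e.g., SAM output);
    labels: matching instance names; depth_map: HxW depths (e.g., DAM output)."""
    records = []
    for mask, label in zip(masks, labels):
        ys, xs = np.nonzero(mask)
        records.append({
            "label": label,
            "area": int(mask.sum()),                           # intrinsic: size
            "centroid": (float(xs.mean()), float(ys.mean())),  # intrinsic: image position
            "mean_depth": float(depth_map[mask].mean()),       # extrinsic: distance cue
        })
    return records

def pairwise_relations(a, b, depth_margin=0.05, pixel_margin=10):
    """Coarse spatial relations of instance `a` relative to `b`.
    Assumes smaller depth values mean closer to the camera; the relation
    set here is illustrative, not the paper's exact $8$-relation vocabulary."""
    relations = []
    (ax, ay), (bx, by) = a["centroid"], b["centroid"]
    if ax < bx - pixel_margin: relations.append("left of")
    if ax > bx + pixel_margin: relations.append("right of")
    if ay < by - pixel_margin: relations.append("above")
    if ay > by + pixel_margin: relations.append("below")
    if a["mean_depth"] < b["mean_depth"] - depth_margin: relations.append("in front of")
    if a["mean_depth"] > b["mean_depth"] + depth_margin: relations.append("behind")
    return relations or ["next to"]
```

A pairwise sweep over all instance records then yields relation triples (subject, relation, object) that can be serialized as symbolic context for the language model.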
Abstract
We introduce a pioneering unified library that leverages the Depth Anything and Segment Anything models to augment neural comprehension in zero-shot vision-language understanding. The library synergizes the capabilities of the Depth Anything Model (DAM), the Segment Anything Model (SAM), and GPT-4V, enhancing multimodal tasks such as visual question answering (VQA) and composition reasoning. By fusing segmentation and depth analysis at the symbolic instance level, our library provides nuanced inputs for language models, significantly advancing image interpretation. Validated across a spectrum of in-the-wild real-world images, our findings showcase progress in vision-language models through neural-symbolic integration, melding visual and language analysis in a novel way. Overall, our library opens new directions for future research aimed at decoding the complexities of the real world through advanced multimodal technologies. Our code is available at \url{https://github.com/AnthonyHuo/SAM-DAM-for-Compositional-Reasoning}.
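To make the "nuanced inputs for language models" concrete, the following sketch shows one plausible way to serialize the instance records and relation triples from the previous sketch into a textual context accompanying an image and question sent to GPT-4V. The formatting and helper names are hypothetical placeholders, not the repository's actual interface; the GPT-4V call itself is omitted.

```python
# Hypothetical serialization of symbolic knowledge into a VQA prompt.
def build_symbolic_context(records, relation_triples):
    """records: output of gather_symbolic_knowledge;
    relation_triples: list of (subject_label, relation, object_label)."""
    lines = ["Scene instances (label, approx. pixel area, mean depth):"]
    for r in records:
        lines.append(f"- {r['label']}: area={r['area']}, mean_depth={r['mean_depth']:.2f}")
    lines.append("Pairwise spatial relations:")
    lines.extend(f"- {s} is {rel} {o}" for s, rel, o in relation_triples)
    return "\n".join(lines)

def build_vqa_prompt(question, records, relation_triples):
    # The symbolic context is prepended to the user question; the actual
    # multimodal (image + text) request to GPT-4V is not shown here.
    return build_symbolic_context(records, relation_triples) + f"\n\nQuestion: {question}"
```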
