TALENT: Target-aware Efficient Tuning for Referring Image Segmentation

Shuo Jin, Siyue Yu, Bingfeng Zhang, Chao Yao, Meiqin Liu, Jimin Xiao

Abstract

Referring image segmentation (RIS) aims to segment a specific target based on a natural-language expression. Recently, parameter-efficient tuning (PET) has emerged as a promising paradigm. However, in existing PET-based methods the visual features often fail to emphasize the text-referred target instance and instead activate co-category yet unrelated objects. We analyze and quantify this problem, terming it the 'non-target activation' (NTA) issue. To address it, we propose a novel framework, TALENT, which applies target-aware efficient tuning to PET-based RIS. Specifically, we first propose a Rectified Cost Aggregator (RCA) to efficiently aggregate text-referred features. Then, to calibrate NTA into accurate target activation, we adopt a Target-aware Learning Mechanism (TLM) comprising contextual pairwise consistency learning and target-centric contrastive learning. The former uses the sentence-level text feature to achieve a holistic understanding of the referent and constructs a text-referred affinity map to optimize the semantic associations of visual features. The latter further enhances target localization, discovering the distinct instance while suppressing associations with other, unrelated objects. The two objectives work in concert to address NTA effectively. Extensive evaluations show that TALENT outperforms existing methods across various metrics (e.g., a 2.5% mIoU gain on the G-Ref val set). Our code will be released at: https://github.com/Kimsure/TALENT.

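The abstract names two TLM objectives: contextual pairwise consistency learning over a text-referred affinity map, and target-centric contrastive learning. The sketch below shows one way such objectives could be realized, under our own assumptions (a cosine-similarity affinity map, an MSE pairwise-consistency term, and an InfoNCE-style contrastive term with temperature tau); the function tlm_losses and all variable names are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of the two TLM objectives described in the abstract.
# All names and loss forms are illustrative assumptions.
import torch
import torch.nn.functional as F


def tlm_losses(vis_feats, sent_feat, gt_mask, tau=0.07):
    """vis_feats: (B, N, C) flattened visual features (N = H*W)
    sent_feat: (B, C) sentence-level text embedding
    gt_mask:   (B, N) binary ground-truth target mask (float)
    """
    v = F.normalize(vis_feats, dim=-1)               # (B, N, C)
    t = F.normalize(sent_feat, dim=-1)               # (B, C)

    # Text-referred affinity map: cosine similarity of each pixel to the text.
    affinity = torch.einsum("bnc,bc->bn", v, t)      # (B, N), in [-1, 1]

    # Contextual pairwise consistency: pixels the text-referred affinity deems
    # similar should also be similar to each other in visual feature space.
    vis_pair = torch.einsum("bnc,bmc->bnm", v, v)            # (B, N, N)
    aff_pair = affinity.unsqueeze(2) * affinity.unsqueeze(1)  # (B, N, N)
    loss_pc = F.mse_loss(vis_pair, aff_pair.detach())

    # Target-centric contrastive term (InfoNCE-style): pull target-region
    # pixels toward the sentence embedding, push non-target pixels away.
    logits = affinity / tau                                   # (B, N)
    pos = (logits * gt_mask).sum(1) / gt_mask.sum(1).clamp(min=1)
    neg = torch.logsumexp(logits, dim=1)
    loss_tc = (neg - pos).mean()

    return loss_pc, loss_tc
```

Detaching the affinity map in the consistency term treats the text-referred signal as a teacher for the visual pairwise structure; whether TALENT's actual losses take this exact form is a detail left to the released code.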


Figures (8)

  • Figure 1: Visual feature activation and segmentation maps. (a) Text descriptions. (b) Visual-text fusion in DETRIS [huang2025densely], which activates co-category foreground objects. (c) Our TALENT, which emphasizes the text-referred target instance. (d) Ground-truth (GT) segmentation. (e)-(f) Corresponding segmentation results of DETRIS [huang2025densely] and our TALENT.
  • Figure 2: Quantified comparison of the influence of the NTA issue across various methods on selected subsets of different benchmarks.
  • Figure 3: Framework pipeline of our TALENT. It contains four main modules: a frozen backbone built upon DINOv2-Reg and CLIP to encode the image and text, a Rectified Cost Aggregator for vision-language interaction, a Target-aware Learning Mechanism to strengthen feature representation, and a transformer decoder for final segmentation (a hypothetical skeleton of this pipeline is sketched after this list).
  • Figure 4: Qualitative comparison of our TALENT with ETRIS [xu2023bridging], ETOG [yu2024etog], and DETRIS [huang2025densely]. TALENT accurately localizes the target and generates more precise segmentation results.
  • Figure 5: Visualization of feature activation maps, comparing TALENT with the existing state-of-the-art method DETRIS [huang2025densely].
  • ...and 3 more figures
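For orientation, the skeleton below mirrors the four-module pipeline described in the Figure 3 caption (frozen backbone, RCA, TLM-supervised features, transformer decoder). Every class, attribute, and method name here is an illustrative placeholder, not the released API.

```python
# Hypothetical skeleton of the TALENT pipeline from Figure 3. Module names
# and interfaces are placeholders for illustration only.
import torch.nn as nn


class TALENT(nn.Module):
    def __init__(self, backbone, aggregator, decoder):
        super().__init__()
        self.backbone = backbone      # frozen DINOv2-Reg + CLIP encoders
        self.aggregator = aggregator  # Rectified Cost Aggregator (RCA)
        self.decoder = decoder        # transformer decoder for segmentation
        for p in self.backbone.parameters():
            p.requires_grad = False   # PET: only the added modules are tuned

    def forward(self, image, text):
        vis_feats, word_feats, sent_feat = self.backbone(image, text)
        fused = self.aggregator(vis_feats, word_feats)  # vision-language cost
        mask_logits = self.decoder(fused, sent_feat)
        # During training, the TLM losses sketched earlier would be applied
        # to `fused` and `sent_feat` alongside the segmentation loss.
        return mask_logits, fused, sent_feat
```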