PMT: Plain Mask Transformer for Image and Video Segmentation with Frozen Vision Encoders

Niccolò Cavagnero, Narges Norouzi, Gijs Dubbelman, Daan de Geus

Abstract

Vision Foundation Models (VFMs) pre-trained at scale enable a single frozen encoder to serve multiple downstream tasks simultaneously. Recent VFM-based encoder-only models for image and video segmentation, such as EoMT and VidEoMT, achieve competitive accuracy with remarkably low latency, yet they require finetuning the encoder, sacrificing the multi-task encoder sharing that makes VFMs practically attractive for large-scale deployment. To reconcile encoder-only simplicity and speed with frozen VFM features, we propose the Plain Mask Decoder (PMD), a fast Transformer-based segmentation decoder that operates on top of frozen VFM features. The resulting model, the Plain Mask Transformer (PMT), preserves the architectural simplicity and low latency of encoder-only designs while keeping the encoder representation unchanged and shareable. The design seamlessly applies to both image and video segmentation, inheriting the generality of the encoder-only framework. On standard image segmentation benchmarks, PMT matches the frozen-encoder state of the art while running up to ~3x faster. For video segmentation, it even performs on par with fully finetuned methods, while being up to 8x faster than state-of-the-art frozen-encoder models. Code: https://github.com/tue-mps/pmt.

Paper Structure

This paper contains 12 sections, 2 equations, 2 figures, 10 tables.

Figures (2)

  • Figure 1: ViT-Adapter + Mask2Former vs. PMT (Ours). PMT exhibits a better trade-off between Panoptic Quality and FPS across different sizes of frozen DINOv3 [simeoni2025dinov3] pre-trained ViTs [dosovitskiy2021vit]. Evaluated on COCO val2017 [lin2014coco]; see Table \ref{tab:model_size}.
  • Figure 2: Plain Mask Transformer (PMT) Architecture. Instead of injecting the query tokens within the ViT encoder as in the encoder-only framework of EoMT and VidEoMT, we extract features at multiple encoder levels and feed them into an efficient segmentation decoder that processes queries and patch tokens in parallel. $\oplus$ denotes element-wise addition. $\odot$ denotes the dot product.
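The Figure 2 caption describes the decoding idea compactly: multi-level frozen encoder features are fused by element-wise addition ($\oplus$), queries and patch tokens are processed jointly by the decoder, and masks come from a query-patch dot product ($\odot$). A minimal PyTorch sketch of that flow is below; it is an illustration assembled from the caption alone, not the authors' implementation, and every dimension, layer count, and class name (e.g. `PlainMaskDecoderSketch`) is an illustrative assumption.

```python
import torch
import torch.nn as nn


class PlainMaskDecoderSketch(nn.Module):
    """Hypothetical sketch of the PMD decoding idea from Figure 2:
    fuse multi-level frozen ViT features by element-wise addition,
    run queries and patch tokens jointly through Transformer blocks,
    and predict masks as query-patch dot products. All sizes and the
    number of blocks are assumptions, not the paper's configuration."""

    def __init__(self, dim=256, num_queries=100, num_blocks=2, num_classes=80):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_blocks)
        self.class_head = nn.Linear(dim, num_classes + 1)  # +1 for "no object"

    def forward(self, multi_level_feats):
        # Element-wise addition (the caption's ⊕) of features taken
        # at multiple levels of the frozen encoder: each (B, N, dim).
        patches = torch.stack(multi_level_feats).sum(dim=0)
        b = patches.shape[0]
        q = self.queries.unsqueeze(0).expand(b, -1, -1)  # (B, Q, dim)
        # Queries and patch tokens processed in parallel by the decoder,
        # as one joint token sequence.
        tokens = self.blocks(torch.cat([q, patches], dim=1))
        q_out = tokens[:, : q.shape[1]]
        p_out = tokens[:, q.shape[1] :]
        # Dot product (the caption's ⊙) between query and patch embeddings
        # yields per-query mask logits over patch locations.
        masks = torch.einsum("bqd,bnd->bqn", q_out, p_out)
        return self.class_head(q_out), masks
```

In this reading, the encoder stays untouched (its outputs are consumed as-is), which is what lets one frozen VFM serve several tasks while the lightweight decoder supplies segmentation.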