
The Progression of Transformers from Language to Vision to MOT: A Literature Review on Multi-Object Tracking with Transformers

Abhi Kamboj

TL;DR

This review tracks the trajectory of transformer architectures from language to vision and finally to multi-object tracking, highlighting pivotal models like ViT, DETR, and Deformable DETR. It emphasizes that, despite significant advances in vision and object recognition, state-of-the-art MOT is still largely dominated by non-transformer methods due to efficiency and data considerations. The survey catalogs transformer-based MOT efforts (e.g., TransTrack, Trackformer, MOTR) and contrasts them with strong non-transformer trackers (SORT, ByteTrack, StrongSORT), underscoring the ongoing research space for integrating tracking-specific temporal structure into transformer frameworks. Overall, the work maps the potential of track/query-focused transformer designs while acknowledging their current practical limitations and the field’s active development.

Abstract

The transformer neural network architecture allows for autoregressive sequence-to-sequence modeling through the use of attention layers. It was originally created with the application of machine translation but has revolutionized natural language processing. Recently, transformers have also been applied across a wide variety of pattern recognition tasks, particularly in computer vision. In this literature review, we describe major advances in computer vision utilizing transformers. We then focus specifically on Multi-Object Tracking (MOT) and discuss how transformers are increasingly becoming competitive in state-of-the-art MOT works, yet still lag behind traditional deep learning methods.

Paper Structure

This paper contains 14 sections and 9 figures.

Figures (9)

  • Figure 1: Transformer Architecture. Image taken from the original transformer paper; please refer to vaswani2017attention for more details.
  • Figure 2: ViT Overview. ViT splits an image into fixed-size patches, embeds each patch, adds positional encodings, and feeds the resulting sequence through a standard transformer. Figure from dosovitskiy2020image.
  • Figure 3: ViT Performance in CLIP. CLIP trained with a ViT performs best compared with other state-of-the-art models. Image taken from the original CLIP paper; please refer to radford2021learning for more details.
  • Figure 4: DETR Architecture. First, features are extracted using a CNN backbone, and positional encodings are added to the features. The features are then fed into the encoder and used in the decoder along with learned object queries. Finally, the outputs from the decoder are passed through a shared feedforward network that predicts either a detection (class and bounding box) or no detection (via the "no object" class). Image taken from the original DETR paper; please refer to carion2020end for more details.
  • Figure 5: The left shows visualizations of detections using attention, presented in DETR carion2020end. The right shows the speedup of Deformable DETR from zhu2020deformable.
  • ...and 4 more figures
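The ViT input pipeline summarized in Figure 2 can be sketched in a few lines. This is a minimal illustrative example, not the paper's implementation: the patch size, embedding dimension, and the random linear projection and positional encodings below are assumptions standing in for learned parameters.

```python
import numpy as np

def patchify(image, patch_size):
    """Split an (H, W, C) image into N flattened fixed-size patches."""
    h, w, c = image.shape
    return (
        image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
        .transpose(0, 2, 1, 3, 4)  # group the two spatial patch axes together
        .reshape(-1, patch_size * patch_size * c)
    )

rng = np.random.default_rng(0)
image = rng.standard_normal((224, 224, 3))

# 1. Split the image into fixed-size patches: (14*14, 16*16*3) = (196, 768).
patches = patchify(image, patch_size=16)

# 2. Embed each patch with a (here random, normally learned) linear projection.
embed_dim = 768
projection = rng.standard_normal((patches.shape[1], embed_dim)) * 0.02
tokens = patches @ projection

# 3. Add positional encodings (random stand-ins for learned embeddings).
tokens = tokens + rng.standard_normal((tokens.shape[0], embed_dim)) * 0.02

# 4. The (196, 768) token sequence is what a standard transformer consumes.
print(tokens.shape)
```

The key point the figure makes is that after patchifying and embedding, the image is just a token sequence, so no architectural change to the transformer itself is needed.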