Video models are zero-shot learners and reasoners

Thaddäus Wiedemer, Yuxuan Li, Paul Vicol, Shixiang Shane Gu, Nick Matarese, Kevin Swersky, Been Kim, Priyank Jaini, Robert Geirhos

TL;DR

This work investigates whether large-scale video models can function as general-purpose vision foundation models, akin to LLMs in NLP. By prompting Veo 3 without task-specific fine-tuning, the authors demonstrate zero-shot capabilities across perception, intuitive physics, object manipulation, and visual reasoning across time and space (chain-of-frames-based). They analyze tens of thousands of generated videos across dozens of tasks, showing consistent improvements from Veo 2 to Veo 3 and highlighting both the promise and the current limits of zero-shot visual reasoning. The findings suggest a path toward unified, generalist vision models, facilitated by prompting strategies and scaling, with important implications for future research and applications. The work also discusses costs, benchmarking caveats, and a potential trajectory toward a GPT-3-like paradigm for computer vision.

Abstract

The remarkable zero-shot capabilities of Large Language Models (LLMs) have propelled natural language processing from task-specific models to unified, generalist foundation models. This transformation emerged from simple primitives: large, generative models trained on web-scale data. Curiously, the same primitives apply to today's generative video models. Could video models be on a trajectory towards general-purpose vision understanding, much like LLMs developed general-purpose language understanding? We demonstrate that Veo 3 can solve a broad variety of tasks it wasn't explicitly trained for: segmenting objects, detecting edges, editing images, understanding physical properties, recognizing object affordances, simulating tool use, and more. These abilities to perceive, model, and manipulate the visual world enable early forms of visual reasoning like maze and symmetry solving. Veo's emergent zero-shot capabilities indicate that video models are on a path to becoming unified, generalist vision foundation models.


Paper Structure

This paper contains 70 sections, 77 figures, and 2 tables.

Figures (77)

  • Figure 1: A qualitative overview of Veo 3's zero-shot abilities. The plot shows Veo 3's success rate across 12 samples as a rough estimate of model performance on 62 tasks across the vision stack. Tasks are described in the qualitative-results section and shown in the qualitative appendix. Videos of all tasks are available on our project page: https://video-zero-shot.github.io/.
  • Figure 2: Veo 3 zero-shot learning and reasoning examples. From classic perceptual tasks (superresolution, visual search) to modeling (buoyancy, memory of world states after zooming in), manipulation (pose editing, simulating dexterous manipulation), and visual reasoning (navigation, rule extrapolation): Veo 3 can zero-shot solve a host of visual tasks that are specified as an input image and a text prompt. Examples are shown in the qualitative appendix; videos of all tasks are on our project page: https://video-zero-shot.github.io/.
  • Figure 3: Edge detection on all 50 test images from BIPEDv2 (Soria et al., 2020; 2023). We generate 10 videos per sample and report the best performance over $k$ attempts as a function of $k$. Prompt: "All edges in this image become more salient by transforming into black outlines. Then, all objects fade away [...]" Details and full prompt in the edge-detection appendix.
  • Figure 4: Class-agnostic instance segmentation on a subset of 50 easy images (1-3 large objects) from LVIS (Gupta et al., 2019). Prompt: "[...] each distinct entity is overlaid in a different flat color [...] the background fades to {white, green} [...]" Details and full prompt in the segmentation appendix.
  • Figure 5: Object extraction on an animal dataset. Prompt: "The background changes to white [...] all animals line up in a row [...]" Details and full prompt in the counting appendix.
  • ...and 72 more figures
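Figure 3's evaluation protocol (generate 10 videos per sample, report the best score over $k$ attempts as $k$ grows) can be sketched as follows. This is a minimal illustration, not the authors' evaluation code; the per-attempt scores below are hypothetical placeholders for whatever metric a task uses (e.g. an edge-detection F-score per generated video).

```python
import itertools
import statistics

def best_of_k(scores, k):
    """Unbiased estimate of the expected best score over k attempts:
    average the maximum over every size-k subset of the observed attempts."""
    subsets = itertools.combinations(scores, k)
    return statistics.mean(max(subset) for subset in subsets)

# Hypothetical per-attempt scores for one input image (10 generated videos):
attempts = [0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.44, 0.58, 0.40, 0.50]

# Best-of-k curve for k = 1..10, as plotted in Figure 3 (averaged over
# images in the real evaluation; a single image is shown here).
curve = [best_of_k(attempts, k) for k in range(1, len(attempts) + 1)]
```

By construction the curve is non-decreasing in $k$: at $k=1$ it equals the mean attempt score, and at $k$ equal to the number of attempts it reaches the single best attempt, which is why best-of-$k$ plots flatten out as sampling more videos yields diminishing returns.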