Towards Open-World Grasping with Large Vision-Language Models

Georgios Tziafas, Hamidreza Kasaei

TL;DR

Open-world grasping is formulated as a policy $\pi(a_t \mid I_t, D_t, T)$ that selects an action $a_t$ given the current image $I_t$, depth map $D_t$, and language instruction $T$, grounding language in perception, planning actions, and executing grasps. OWG integrates a segmentation model, a vision-language model, and a grasp-synthesis network in a three-stage pipeline (open-ended referring segmentation, grounded grasp planning, and grasp ranking via contact reasoning) that operates zero-shot in clutter. Experiments show superior grounding accuracy and higher grasp success rates in both simulation and on real hardware compared with prior zero-shot and end-to-end baselines, demonstrating robust language-conditioned manipulation in open environments. This work highlights the value of closed-loop planning with semantic-geometric integration for flexible, recoverable robotic grasping in unstructured settings.
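As a reading aid, here is a minimal, hypothetical sketch of that closed-loop policy in Python. The handles seg, vlm, grasp_net, and robot, along with the helpers overlay_numeric_ids (sketched after the abstract below) and overlay_grasp_ids, are illustrative stand-ins for the paper's modules (segmentation model, GPT-4V-style VLM, grasp synthesis network, robot controller), not its actual API.

```python
# Hypothetical closed-loop sketch of pi(a_t | I_t, D_t, T); all component
# interfaces and helpers here are illustrative stand-ins, not the paper's code.

def owg_episode(instruction, seg, vlm, grasp_net, robot, max_steps=10):
    """Grasp surrounding occluders (if needed) until the referred target is grasped."""
    for _ in range(max_steps):
        rgb, depth = robot.observe()                   # I_t, D_t
        masks = seg.segment(rgb)                       # pixel-level object masks
        marked = overlay_numeric_ids(rgb, masks)       # numeric-ID visual prompt
        target = vlm.ground(marked, instruction)       # (i) referring segmentation
        to_grasp = vlm.plan(marked, target)            # (ii) grasp target or clear occluder
        grasps = grasp_net.synthesize(rgb, depth, masks[to_grasp])
        marked_grasps = overlay_grasp_ids(rgb, grasps)  # assumed analogous overlay helper
        best = vlm.rank(marked_grasps, to_grasp)       # (iii) contact reasoning
        robot.execute(grasps[best])                    # a_t
        if to_grasp == target:                         # target itself grasped: done
            return True
    return False
```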

Abstract

The ability to grasp objects in-the-wild from open-ended language instructions constitutes a fundamental challenge in robotics. An open-world grasping system should be able to combine high-level contextual reasoning with low-level physical-geometric reasoning in order to be applicable in arbitrary scenarios. Recent works exploit the web-scale knowledge inherent in large language models (LLMs) to plan and reason in a robotic context, but rely on external vision and action models to ground such knowledge into the environment and parameterize actuation. This setup suffers from two major bottlenecks: a) the LLM's reasoning capacity is constrained by the quality of visual grounding, and b) LLMs do not contain low-level spatial understanding of the world, which is essential for grasping in contact-rich scenarios. In this work we demonstrate that modern vision-language models (VLMs) are capable of tackling such limitations, as they are implicitly grounded and can jointly reason about semantics and geometry. We propose OWG, an open-world grasping pipeline that combines VLMs with segmentation and grasp synthesis models to unlock grounded world understanding in three stages: open-ended referring segmentation, grounded grasp planning, and grasp ranking via contact reasoning, all of which can be applied zero-shot via suitable visual prompting mechanisms. We conduct extensive evaluation on cluttered indoor scene datasets to showcase OWG's robustness in grounding from open-ended language, as well as open-world robotic grasping experiments in both simulation and hardware that demonstrate superior performance compared to previous supervised and zero-shot LLM-based methods. Project material is available at https://gtziafas.github.io/OWG_project/.
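The "suitable visual prompting mechanisms" above amount to marking the segmented scene so the VLM can refer to objects by numeric ID. Below is a minimal sketch of such an overlay, in the spirit of Set-of-Mark prompting, assuming class-agnostic masks from an off-the-shelf segmenter such as SAM; the drawing choices (contour color, font, label placement) are illustrative, not the paper's exact recipe.

```python
import cv2
import numpy as np

def overlay_numeric_ids(rgb: np.ndarray, masks: list[np.ndarray]) -> np.ndarray:
    """Draw each mask's contour and a numeric ID near its centroid."""
    marked = rgb.copy()
    for i, mask in enumerate(masks):
        m = mask.astype(np.uint8)
        contours, _ = cv2.findContours(m, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        cv2.drawContours(marked, contours, -1, color=(0, 255, 0), thickness=2)
        ys, xs = np.nonzero(m)                   # pixels belonging to this mask
        cx, cy = int(xs.mean()), int(ys.mean())  # centroid as the label anchor
        cv2.putText(marked, str(i), (cx, cy), cv2.FONT_HERSHEY_SIMPLEX,
                    0.9, (255, 255, 255), 2, cv2.LINE_AA)
    return marked
```

The marked image, together with the instruction, is what the VLM sees in the grounding and planning stages.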

Paper Structure

This paper contains 10 sections, 6 figures, 4 tables, and 1 algorithm.

Figures (6)

  • Figure 1: Challenges of open-world grasping tackled with VLMs. The overall pipeline combines VLMs with segmentation and grasp synthesis models to ground open-ended language instructions, plan, and reason about how to grasp the desired object.
  • Figure 2: Overview of OWG: Given a user instruction and an observation, OWG first invokes a segmentation model to recover pixel-level masks and overlays them with numeric IDs as visual markers in a new image. The VLM then activates three stages: (i) grounding the target object from the language expression in the marked image, (ii) planning whether to grasp the target or first remove a surrounding object, and (iii) invoking a grasp synthesis model to generate grasps and ranking them according to the object's shape and its neighbouring objects (a sketch of this ranking query appears after the figure list). The best grasp pose (highlighted here in pink; not part of the prompt) is executed and the observation is updated for a new run, until the target object is grasped. Best viewed in color and zoom.
  • Figure 3: Example GPT-4V responses (from left to right): a) open-ended referring segmentation, i.e., grounding, b) grounded grasp planning, and c) grasp ranking via contact reasoning. We omit parts of the prompt and response for brevity. Full prompts are in Appendix A and more example responses in Appendix E.
  • Figure 4: Open-ended language-guided grasping trials in Gazebo (top) and on a real robot (bottom), in isolated (left column) and cluttered (right column) scenes.
  • Figure 5: Distribution of failures across grounding and grasping in Gazebo grasping trials for isolated (left) and cluttered (right) scenes. OWG improves performance across both modes in both setups and test splits.
  • ...and 1 more figure
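As noted in the Figure 2 caption, stage (iii) reduces to a single multimodal query over an image of ID-marked grasp candidates. A hypothetical sketch follows, where query_vlm stands in for any multimodal chat API (e.g., GPT-4V) and the prompt wording is illustrative rather than the paper's (its actual prompts are given in its Appendix A):

```python
import json

# Illustrative ranking prompt; not the paper's exact wording.
RANK_PROMPT = (
    "The image shows object {obj_id} with candidate grasps marked by numeric IDs. "
    "Considering the object's shape and its neighbouring objects, rank the grasp "
    "IDs from best to worst for a stable, collision-free grasp. "
    "Answer with a JSON list of IDs only."
)

def rank_grasps(marked_grasp_image, obj_id: int, query_vlm) -> list[int]:
    """Return grasp IDs ordered best-first, parsed from the VLM's answer."""
    answer = query_vlm(image=marked_grasp_image,
                       text=RANK_PROMPT.format(obj_id=obj_id))
    return json.loads(answer)  # e.g. '[2, 0, 1]' -> [2, 0, 1]
```

The top-ranked grasp is executed, the observation is refreshed, and the loop repeats until the target itself is grasped.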