DKPROMPT: Domain Knowledge Prompting Vision-Language Models for Open-World Planning

Xiaohan Zhang, Zainab Altaweel, Yohei Hayamizu, Yan Ding, Saeid Amiri, Hao Yang, Andy Kaminski, Chad Esselink, Shiqi Zhang

TL;DR

DKPrompt tackles open-world robot planning by grounding symbolic plans in perception through domain-knowledge prompts derived from PDDL. It prompts vision-language models to verify action preconditions and postconditions, triggering replanning when discrepancies are detected, thereby robustly handling unforeseen situations. Across OmniGibson simulations and real-robot trials, DKPrompt outperforms pure VLM planning and classical planning baselines, with ablation results showing the combined use of preconditions and effects is essential. The approach provides a practical bridge between symbolic planning and perception, enabling more reliable long-horizon robotic tasks in open environments and offering an evaluation platform for future work.

Abstract

Vision-language models (VLMs) have been applied to robot task planning problems, where the robot receives a task in natural language and generates plans based on visual inputs. While current VLMs have demonstrated strong vision-language understanding capabilities, their performance is still far from satisfactory in planning tasks. At the same time, although classical task planners, such as PDDL-based ones, are strong at planning for long-horizon tasks, they do not work well in open worlds where unforeseen situations are common. In this paper, we propose a novel task planning and execution framework, called DKPROMPT, which automates VLM prompting using domain knowledge in PDDL for classical planning in open worlds. Results from quantitative experiments show that DKPROMPT outperforms classical planning, pure VLM-based planning, and a few other competitive baselines in task completion rate.
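DKPROMPT's plan-monitor-replan cycle, as described above, can be summarized in a short sketch. The following Python is illustrative only: the `planner` and `robot` interfaces and the `query_vlm` helper are hypothetical placeholders standing in for a PDDL planner, a robot API, and a VLM-backed VQA call; they are not the paper's released code.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    preconditions: list[str]  # grounded PDDL predicates, e.g. "inside(cup, cabinet)"
    effects: list[str]        # expected postconditions, e.g. "open(cabinet)"

def query_vlm(image, predicate: str) -> bool:
    """Hypothetical VQA call: ask a vision-language model whether a
    grounded PDDL predicate holds in the current camera image."""
    raise NotImplementedError  # wrap a VLM API of choice here

def dkprompt_loop(planner, robot, goal):
    """Plan-monitor-replan loop sketched from the paper's description:
    check each action's preconditions before executing it and its
    effects afterwards, replanning from the corrected world state on
    any mismatch."""
    state = planner.initial_state()
    plan = planner.plan(state, goal)
    while plan:
        action = plan.pop(0)
        # 1) Verify preconditions against the current observation (VQA).
        obs = robot.observe()
        failed = [p for p in action.preconditions if not query_vlm(obs, p)]
        if failed:
            state = planner.update_state(state, removed=failed)
            plan = planner.plan(state, goal)   # replan from corrected state
            continue
        # 2) Execute, then verify the expected effects actually hold.
        robot.execute(action)
        obs = robot.observe()
        missing = [e for e in action.effects if not query_vlm(obs, e)]
        if missing:
            state = planner.update_state(state, removed=missing)
            plan = planner.plan(state, goal)   # replan after a failed effect
        else:
            state = planner.apply(state, action)
    return state
```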

Paper Structure

This paper contains 17 sections, 9 figures, and 7 tables.

Figures (9)

  • Figure 1: A few unforeseen situations during action execution. In the top-left example, the robot "opened" the cabinet door to prepare for grasping the cup. The white cup was expected to be in the robot's view after the "opening" action, but an unforeseen situation occurred: the cabinet was only half-open. DKPrompt prompts vision-language models (VLMs) using domain knowledge to detect and address such situations. While one could hand-engineer a safeguard to check that the cabinet opened successfully, our goal is to automate this process, avoiding such manual effort and handling unforeseen situations.
  • Figure 2: An overview of DKPrompt. By querying the robot's current observation against the domain knowledge (i.e., action preconditions and effects) as VQA tasks, DKPrompt can call the classical planner to generate a new valid plan from updated world states. Note that DKPrompt only queries about predicates; a sketch of this predicate-to-question conversion follows this list. The left shows how DKPrompt checks every precondition of the action to be executed next, and the right shows how it verifies that the expected action effects are all in place after execution. Replanning is triggered when any precondition or effect is unsatisfied, once the planner's knowledge is updated accordingly.
  • Figure 3: DKPrompt vs. baselines in success rate over five everyday tasks.
  • Figure 4: Performance of off-the-shelf VLMs.
  • Figure 5: Screenshots showing the full demonstration trial of DKPrompt as applied to a real robot.
  • ...and 4 more figures
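As noted in the Figure 2 caption, DKPrompt's checks reduce to turning grounded PDDL predicates into yes/no visual questions. Below is a minimal sketch of that conversion, assuming a simple `predicate(arg, ...)` string form; the question templates are illustrative and not the paper's exact prompt wording.

```python
import re

def predicate_to_question(predicate: str) -> str:
    """Turn a grounded PDDL predicate such as "open(cabinet)" or
    "inside(cup, cabinet)" into a yes/no VQA question. The templates
    below are illustrative stand-ins for prompts derived from the
    PDDL domain."""
    match = re.fullmatch(r"(\w+)\((.*)\)", predicate.strip())
    if not match:
        raise ValueError(f"unparseable predicate: {predicate}")
    name = match.group(1)
    args = [a.strip() for a in match.group(2).split(",")]
    templates = {
        "open":   "Is the {0} open?",
        "inside": "Is the {0} inside the {1}?",
        "ontop":  "Is the {0} on top of the {1}?",
    }
    template = templates.get(name)
    if template is None:
        # Generic fallback naming the relation and its arguments.
        return (f"In the image, does the relation '{name}' hold for "
                f"{', '.join(args)}? Answer yes or no.")
    return template.format(*args) + " Answer yes or no."

# Example: checking the "opening" action's effect from Figure 1.
print(predicate_to_question("open(cabinet)"))
print(predicate_to_question("inside(cup, cabinet)"))
```

Each generated question would be paired with the robot's current camera image and sent to the VLM; a "no" answer marks the predicate as unsatisfied and triggers replanning.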