DKPROMPT: Domain Knowledge Prompting Vision-Language Models for Open-World Planning
Xiaohan Zhang, Zainab Altaweel, Yohei Hayamizu, Yan Ding, Saeid Amiri, Hao Yang, Andy Kaminski, Chad Esselink, Shiqi Zhang
TL;DR
DKPROMPT tackles open-world robot planning by grounding symbolic plans in perception through domain-knowledge prompts derived from PDDL. It prompts vision-language models to verify action preconditions and postconditions, triggering replanning when discrepancies are detected, so the robot can recover from unforeseen situations. Across OmniGibson simulations and real-robot trials, DKPROMPT outperforms pure VLM planning and classical planning baselines, with ablation results showing that checking both preconditions and effects is essential. The approach provides a practical bridge between symbolic planning and perception, enabling more reliable long-horizon robotic tasks in open environments and offering an evaluation platform for future work.
Abstract
Vision-language models (VLMs) have been applied to robot task planning problems, where the robot receives a task in natural language and generates plans based on visual inputs. While current VLMs have demonstrated strong vision-language understanding capabilities, their performance in planning tasks is still far from satisfactory. At the same time, although classical task planners, such as PDDL-based planners, are strong at planning for long-horizon tasks, they do not work well in open worlds where unforeseen situations are common. In this paper, we propose a novel task planning and execution framework, called DKPROMPT, which automates VLM prompting using domain knowledge in PDDL for classical planning in open worlds. Results from quantitative experiments show that DKPROMPT outperforms classical planning, pure VLM-based planning, and a few other competitive baselines in task completion rate.
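
To make the check-and-replan loop described above concrete, the Python sketch below shows one plausible way it could be structured: before executing each action the VLM is asked yes/no questions about the action's PDDL preconditions, and after execution it is asked about the expected effects; any failed check triggers replanning. This is a minimal illustration under stated assumptions, not the authors' implementation, and the names `Action`, `vlm_verify`, `classical_plan`, and `update_problem_from_observation` are hypothetical placeholders.

```python
from dataclasses import dataclass, field


@dataclass
class Action:
    """A grounded PDDL action with its preconditions and effects as predicate strings."""
    name: str
    preconditions: list = field(default_factory=list)  # e.g. ["handempty robot", "on-table mug"]
    effects: list = field(default_factory=list)        # e.g. ["holding robot mug"]


def vlm_verify(image, predicate: str) -> bool:
    """Placeholder: ask a VLM a yes/no question such as
    'In the image, is it true that <predicate>?' (hypothetical interface)."""
    raise NotImplementedError("Plug in a real vision-language model here.")


def classical_plan(domain_pddl: str, problem_pddl: str) -> list:
    """Placeholder: call a classical PDDL planner and return a list of Actions."""
    raise NotImplementedError("Plug in a real classical planner here.")


def update_problem_from_observation(problem_pddl: str, image) -> str:
    """Placeholder: rewrite the PDDL problem's init facts from the current observation."""
    raise NotImplementedError("Plug in a state-estimation step here.")


def execute_with_dk_prompts(domain_pddl: str, problem_pddl: str, robot, camera):
    """Plan, then interleave execution with VLM checks of preconditions and effects."""
    plan = classical_plan(domain_pddl, problem_pddl)
    while plan:
        action = plan[0]

        # 1) Verify preconditions in the current observation before acting.
        obs = camera.capture()
        if not all(vlm_verify(obs, p) for p in action.preconditions):
            # Unforeseen situation: refresh the symbolic state and replan.
            problem_pddl = update_problem_from_observation(problem_pddl, obs)
            plan = classical_plan(domain_pddl, problem_pddl)
            continue

        robot.execute(action)

        # 2) Verify expected effects after acting; replan if the action did not succeed.
        obs = camera.capture()
        if not all(vlm_verify(obs, e) for e in action.effects):
            problem_pddl = update_problem_from_observation(problem_pddl, obs)
            plan = classical_plan(domain_pddl, problem_pddl)
        else:
            plan.pop(0)  # action succeeded; move on to the next step
```

In this reading, the PDDL domain supplies exactly the predicates worth asking the VLM about, which is the sense in which domain knowledge "automates" the prompting.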
