OpenGo: An OpenClaw-Based Robotic Dog with Real-Time Skill Switching

Hanbing Li, Xuewei Cao, Zhiwen Zeng, Yuhan Wu, Yanyong Zhang, Yan Xia

Abstract

Adaptation to complex tasks and multiple scenarios remains a significant challenge for a single robot agent. The ability to acquire, organize, and switch between a wide range of skills in real time, particularly in dynamic environments, has become a fundamental requirement for embodied intelligence. We introduce OpenGo, an OpenClaw-powered embodied robotic dog capable of switching skills in real time according to the scene and task instructions. Specifically, the agent is equipped with (1) a customizable skill library with easy skill import and autonomous skill validation, (2) a dispatcher that selects and invokes different skills according to task prompts or language instructions, and (3) a self-learning framework that fine-tunes skills based on task completion and human feedback. We deploy the agent on Unitree's Go2 robotic dog and validate its ability to autonomously self-check and switch skills. In addition, by integrating Feishu-platform communication, we enable natural-language guidance and human feedback, allowing inexperienced users to control the robotic dog through simple instructions.

Figures (6)

  • Figure 1: Demonstration of LLM-driven skill execution and composition on the quadruped robot. Given natural-language instructions (left), the system interprets the user's command and transmits it to the Unitree Go2. From top to bottom, the commands are: Move Forward, Turn Around, Backflip, and Dance.
  • Figure 2: Overview of the OpenGo framework. OpenGo is built upon OpenClaw and organized around two core modules, namely the Dispatcher and Memory/State Check. Through a communication platform (e.g., Feishu), human users provide task descriptions, instructions, and execution orders to the system. The Dispatcher selects appropriate skills from the robot-side skill library, while the Memory/State Check module monitors execution status and feeds state information back to support closed-loop decision making. On the robot side, the framework interfaces with perception, controller, state estimation, skill library, and an internal Safety Tool, which serves as an emergency-stop trigger when the robot enters a dangerous state, thereby enabling controllable and robust skill execution on the quadruped platform.
  • Figure 3: Skill library design in OpenGo. New skills are incorporated through code review and simulation-based validation before entering the skill library. Each skill is organized with structured fields, including skill heads, parameters, constraints, function, and prompts. The parameters are adjustable for task adaptation, whereas the function remains fixed for stable and controllable execution. (See the schema sketch after this list.)
  • Figure 4: Dispatch mechanism of OpenGo. Task descriptions, human instructions, and scene information inferred from perception are jointly provided to the LLM-based dispatcher. The LLM selects appropriate skills from the skill library and organizes them into a step-by-step execution sequence. Execution feedback, including error logs and finished signals, is then returned to the dispatcher, enabling dynamic replanning and closed-loop skill scheduling. (See the dispatch-loop sketch after this list.)
  • Figure 5: System latency analysis of single-skill execution. Experiments are conducted on a real-world Unitree Go2 platform. Each action is executed 10 times, and the response time is measured from the moment the user issues the instruction to the onset of the robot's execution. (See the timing-harness sketch after this list.)
  • ...and 1 more figure
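
Skill library sketch. To make the schema described in Figure 3 concrete, the following Python sketch shows one plausible encoding of a skill entry and its validation gate. It is a minimal illustration under assumed names (Skill, SkillLibrary, validated_in_sim); the paper does not publish this interface.

    from dataclasses import dataclass
    from typing import Any, Callable, Dict, List

    @dataclass
    class Skill:
        head: str                      # skill identifier, e.g. "backflip"
        parameters: Dict[str, Any]     # adjustable for task adaptation
        constraints: Dict[str, Any]    # e.g. speed or workspace limits
        function: Callable[..., None]  # fixed controller entry point
        prompts: List[str]             # language cues the dispatcher matches

    class SkillLibrary:
        def __init__(self) -> None:
            self._skills: Dict[str, Skill] = {}

        def register(self, skill: Skill, validated_in_sim: bool) -> None:
            # Figure 3's gate: code review plus simulation-based validation
            # must pass before a skill enters the library.
            if not validated_in_sim:
                raise ValueError(f"skill '{skill.head}' failed validation")
            self._skills[skill.head] = skill

        def heads(self) -> List[str]:
            return list(self._skills)

        def get(self, head: str) -> Skill:
            return self._skills[head]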
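
Dispatch loop sketch. The closed-loop scheduling of Figure 4 can likewise be illustrated by a minimal dispatch cycle, continuing the skill-library sketch above: an LLM turns the task description, human instruction, and perceived scene into an ordered skill sequence, and execution feedback (error logs, finished signals) triggers replanning. The llm_select_skills helper is a hypothetical stand-in for the LLM call, not an OpenGo API.

    from typing import List

    def llm_select_skills(task: str, instruction: str, scene: str,
                          heads: List[str]) -> List[str]:
        # Hypothetical stand-in: prompt the LLM with the task, instruction,
        # scene summary, and available skill heads; parse an ordered plan.
        raise NotImplementedError

    def dispatch(task: str, instruction: str, scene: str,
                 library: SkillLibrary) -> None:
        heads = library.heads()
        queue = llm_select_skills(task, instruction, scene, heads)
        while queue:
            skill = library.get(queue.pop(0))
            try:
                skill.function(**skill.parameters)   # run on the robot
            except RuntimeError as err:
                # Error logs feed back to the dispatcher, which replans,
                # mirroring the feedback arrow in Figure 4.
                queue = llm_select_skills(task, f"error: {err}", scene, heads)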
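
Latency harness sketch. The measurement protocol of Figure 5 (response time from instruction issue to execution onset, averaged over 10 runs per action) could be reproduced with a harness like the one below. The send_instruction and wait_for_execution_start callables are assumed placeholders for the platform's messaging and state-feedback hooks, not part of the OpenGo codebase.

    import statistics
    import time
    from typing import Callable

    def measure_latency(send_instruction: Callable[[], None],
                        wait_for_execution_start: Callable[[], None],
                        trials: int = 10) -> float:
        # Mean instruction-to-execution latency over `trials` runs,
        # matching the 10-repetition protocol in Figure 5.
        samples = []
        for _ in range(trials):
            t0 = time.perf_counter()
            send_instruction()            # user issues the instruction
            wait_for_execution_start()    # block until the robot starts moving
            samples.append(time.perf_counter() - t0)
        return statistics.mean(samples)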