
ThermoAct: Thermal-Aware Vision-Language-Action Models for Robotic Perception and Decision-Making

Young-Chae Son, Dae-Kwan Ko, Yoon-Ji Choi, Soo-Chul Lim

Abstract

In recent human-robot collaboration environments, there is a growing focus on integrating diverse sensor data beyond visual information to enable safer and more intelligent task execution. Although thermal data can be crucial for enhancing robot safety and operational efficiency, its integration has been relatively overlooked in prior research. This paper proposes a novel Vision-Language-Action (VLA) framework that incorporates thermal information for robot task execution. The proposed system leverages a Vision-Language Model (VLM) as a high-level planner to interpret complex natural language commands and decompose them into simpler sub-tasks. This approach facilitates efficient data collection and robust reasoning for complex operations. Unlike conventional methods that rely solely on visual data, our approach integrates thermal information, enabling the robot to perceive physical properties and proactively ensure environmental safety. Experimental results from real-world task scenarios validate the feasibility of our proposed framework, suggesting its potential to enhance task success rates and safety compared to existing vision-based systems.
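The hierarchical pipeline described above — a VLM planner that decomposes a natural-language instruction into sub-tasks, and a VLA executor that consumes each sub-task description as a prompt — can be sketched as follows. This is a minimal illustrative stub, not the authors' implementation: the `vlm_plan` and `vla_execute` functions, the `max_temp_c` observation field, and the fixed decomposition rule are all hypothetical stand-ins for the actual VLM/VLA model calls.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SubTask:
    """One executable sub-task description produced by the planner."""
    description: str


def vlm_plan(instruction: str, obs: Dict[str, float]) -> List[SubTask]:
    """Hypothetical high-level planner. In ThermoAct this role is played by
    a VLM queried with RGB-thermal images and a structured guideline prompt;
    here the decomposition is stubbed with a fixed rule for illustration."""
    if "warm" in instruction and obs.get("max_temp_c", 0.0) > 40.0:
        return [
            SubTask("locate the warm cup using the thermal image"),
            SubTask("pick up the warm cup"),
            SubTask("hand the cup to the user"),
        ]
    # Fall back to treating the instruction itself as a single sub-task.
    return [SubTask(instruction)]


def vla_execute(sub_task: SubTask) -> str:
    """Hypothetical low-level executor. In the paper, a VLA model maps the
    sub-task prompt plus observations to robot actions; stubbed here."""
    return f"executed: {sub_task.description}"


# Example: a thermal observation triggers a three-step decomposition.
observation = {"max_temp_c": 62.0}
plan = vlm_plan("bring me the warm water", observation)
log = [vla_execute(t) for t in plan]
```

The key design point this mirrors is the separation of concerns: the planner reasons over thermal context to produce language-level sub-tasks, while the executor only ever sees one simple sub-task at a time, which is what makes data collection and training of the low-level policy tractable.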

Paper Structure

This paper contains 18 sections, 2 equations, 5 figures, and 2 tables.

Figures (5)

  • Figure 1: We propose ThermoAct. (a) illustrates a VLM Planner that decomposes a high-level user instruction into specific sub-task descriptions. (b) depicts a VLA Executor that receives these descriptions as input prompts to predict low-level actions. By leveraging temperature cues from thermal imaging, ThermoAct is able to perform temperature-aware tasks beyond existing approaches.
  • Figure 2: Hierarchical Collaboration between VLM Planner and VLA Executor. (a) The VLM Planner receives RGB-Thermal images and a structured guideline prompt containing role definitions and output examples. (b) Based on the thermal information, the VLM analyzes the environment context and decomposes the instruction into executable sub-tasks. (c) Sub-task Decomposition with VLM and Action Execution with VLA.
  • Figure 3: The figure shows five main task environments (Tasks 1--5), with the actual thermal input images displayed above each task. Tasks 1--3 correspond to daily-life manipulation tasks, while Tasks 4--5 focus on safety-related scenarios.
  • Figure 4: Process of Task 5, in which the system turns off a heated hair straightener and successfully generalizes to unseen straighteners using the learned data.
  • Figure 5: Performance on subtasks requiring thermal perception, including picking up warm water (Task 1), picking up a cold Coke (Task 2), placing an object into the appropriate cup (Task 3), picking up a heated battery (Task 4), and turning off a hair straightener (Task 5).