
Saranga: MilliWatt Ultrasound for Navigation in Visually Degraded Environments on Palm-Sized Aerial Robots

Manoj Velmurugan, Phillip Brush, Colin Balfour, Richard J. Przybyla, Nitin J. Sanket

Abstract

Tiny palm-sized aerial robots possess exceptional agility and cost-effectiveness in navigating confined and cluttered environments. However, their limited payload capacity directly constrains the on-board sensing suite, thereby limiting critical navigational tasks in Global Positioning System (GPS)-denied wild scenes. Common methods for obstacle avoidance use cameras and LIght Detection And Ranging (LIDAR), which become ineffective in visually degraded conditions such as low visibility, dust, fog, or darkness. Other sensors, such as RAdio Detection And Ranging (RADAR), have high power consumption, making them unsuitable for tiny aerial robots. Inspired by bats, we propose Saranga, a low-power ultrasound-based perception stack that localizes obstacles using a dual sonar array. We present two key solutions to combat the low Peak Signal-to-Noise Ratio of -4.9 decibels: physical noise reduction and a deep-learning-based denoising method. First, we present a practical way to block propeller-induced ultrasound noise from masking the weak echoes. Second, we train a neural network that exploits the long horizon of ultrasound echoes to find signal patterns under heavy uncorrelated noise, where classical methods were insufficient. We generalize to the real world by training on a synthetic data generation pipeline combined with limited real noise data. We enable a palm-sized aerial robot to navigate in visually degraded conditions of dense fog, darkness, and snow in a cluttered environment with thin and transparent obstacles using only on-board sensing and computation. We provide extensive real-world results to demonstrate the efficacy of our approach.
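
To make the denoising step concrete, below is a minimal sketch of a 1-D convolutional denoiser operating on a long echo waveform, with training noise mixed in at the reported -4.9 dB peak signal-to-noise ratio. The architecture, layer sizes, horizon length, and the `mix_at_psnr` helper are illustrative assumptions, not the paper's actual model or data pipeline.

```python
# Minimal sketch of a 1-D convolutional echo denoiser. Illustrative only:
# the paper's actual network, loss, and data pipeline are not reproduced here.
import torch
import torch.nn as nn

class EchoDenoiser(nn.Module):
    """Maps a noisy echo waveform of shape (B, 1, T) to a denoised estimate."""
    def __init__(self, channels: int = 32, kernel: int = 15):
        super().__init__()
        pad = kernel // 2  # 'same' padding keeps the long echo horizon intact
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, 1, kernel, padding=pad),
        )

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        return self.net(noisy)

def mix_at_psnr(echo: torch.Tensor, noise: torch.Tensor, psnr_db: float) -> torch.Tensor:
    """Scale recorded noise so that peak(echo) / rms(noise) matches psnr_db
    (e.g., the -4.9 dB reported in the abstract), then add it to the clean
    synthetic echo. A hypothetical stand-in for the data generation pipeline."""
    peak = echo.abs().max()
    noise_rms = noise.pow(2).mean().sqrt()
    target_rms = peak / (10.0 ** (psnr_db / 20.0))
    return echo + noise * (target_rms / noise_rms)

# One illustrative training step; batch size and horizon length are assumptions.
model = EchoDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.randn(8, 1, 4096)   # placeholder for synthetic echoes
noise = torch.randn(8, 1, 4096)   # placeholder for recorded propeller noise
noisy = mix_at_psnr(clean, noise, psnr_db=-4.9)
loss = nn.functional.mse_loss(model(noisy), clean)
opt.zero_grad()
loss.backward()
opt.step()
```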



Figures (11)

  • Figure 1: System overview and sensor comparison. (A) Navigation in darkness, clutter, snow, fog, and with transparent/thin obstacles. (B) Sensor comparison, where lower weight, size, cost, and power consumption and a higher range are desirable qualities. (C) Sensor performance across environmental conditions (snow, fog, darkness, glass, plastic, and featureless). (A larger area occupied by a sensor means better performance.) (D) Ultrasonic processing pipeline from raw echo signals to velocity commands.
  • Figure 2: Tabulation of navigation performance. Variation of performance across diverse challenging scenarios with varying obstacles and environmental conditions.
  • Figure 3: Aerial robot navigation through various indoor scenes: (A) Transparent obstacles (blue highlights show the boundary of the transparent film), (B) Thin objects, (C) Snow, (D) Fog, (E) Low-light (dark) conditions. For each environment, (i) shows the perspective view and (ii) shows the front view. For both, opacity shows time progression, where lower opacity is closer to the beginning of the trajectory. The scale bar for the trajectories is shown in E(i).
  • Figure 4: Aerial robot navigation through a complex 3D indoor scene with smoke, snow, and textureless and thin obstacles. (A) Top view (robot motion is right to left), (B) Back view (robot motion is towards the camera), (C) Front view (robot motion is away from the camera), (D) 3D digital twin created using VizFlyt [vizflyt2025] during well-lit conditions; the camera frustums show the respective camera locations (top, back, and front). The fog and snow icons in (D) show the locations of the fog and snow machines, respectively. (E) Tri-ultrasound setup used in the 3D experiment to enable trilateration for 3D obstacle avoidance (see the sketch after this figure list). Red ellipses show the ultrasound sensors. Opacity shows time progression.
  • Figure 5: (A–C) Aerial robot navigation in outdoor forest environments. Insets show 3D reconstructions from onboard camera footage using Structure-from-Motion (SfM) to serve as a reference. The red-to-yellow color shift and increasing opacity show time progression. The robot is highlighted in red for clarity.
  • ...and 6 more figures
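
The tri-ultrasound setup in Figure 4(E) localizes obstacles by trilateration from per-sensor echo ranges. The following is a minimal linearized least-squares sketch of that geometry under assumed sensor positions; the paper's actual estimator is not specified here. With exactly three sensors the range intersection leaves a mirror ambiguity about the sensor plane, so the example adds a fourth range to stay well-posed.

```python
# Minimal trilateration sketch (illustrative; not necessarily the estimator
# used in the paper). Each sensor i at position p_i with measured range r_i
# defines a sphere |x - p_i|^2 = r_i^2; subtracting the first sphere from the
# rest gives a linear system A x = b in the obstacle position x.
import numpy as np

def trilaterate(sensors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """sensors: (N, 3) positions in meters; ranges: (N,) echo distances."""
    p0, r0 = sensors[0], ranges[0]
    A = 2.0 * (sensors[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(sensors[1:]**2, axis=1) - np.sum(p0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Example with four made-up sensor positions on a palm-sized frame. With only
# three sensors the linearized system is rank-deficient (two mirror solutions
# about the sensor plane), so a prior or a fourth range disambiguates.
sensors = np.array([[ 0.00,  0.05, 0.00],
                    [ 0.05, -0.05, 0.00],
                    [-0.05, -0.05, 0.02],
                    [ 0.00,  0.00, 0.05]])
target = np.array([1.2, 0.3, -0.1])            # ground-truth obstacle position
ranges = np.linalg.norm(sensors - target, axis=1)
print(trilaterate(sensors, ranges))            # ~ [1.2, 0.3, -0.1]
```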