Efficient Onboard Spacecraft Pose Estimation with Event Cameras and Neuromorphic Hardware

Arunkumar Rathinam, Jules Lecomte, Jost Reelsen, Gregor Lenz, Axel von Arnim, Djamila Aouada

Abstract

Reliable relative pose estimation is a key enabler for autonomous rendezvous and proximity operations, yet space imagery is notoriously challenging due to extreme illumination, high contrast, and fast target motion. Event cameras provide asynchronous, change-driven measurements that can remain informative when frame-based imagery saturates or blurs, while neuromorphic processors can exploit sparse activations for low-latency, energy-efficient inference. This paper presents a spacecraft 6-DoF pose-estimation pipeline that couples event-based vision with the BrainChip Akida neuromorphic processor. Using the SPADES dataset, we train compact MobileNet-style keypoint regression networks on lightweight event-frame representations, apply quantization-aware training (8- and 4-bit), and convert the models to Akida-compatible spiking neural networks. We benchmark three event representations and demonstrate real-time, low-power inference on Akida V1 hardware. We additionally design a heatmap-based model targeting Akida V2 and evaluate it on Akida Cloud, yielding improved pose accuracy. To our knowledge, this is the first end-to-end demonstration of spacecraft pose estimation running on Akida hardware, highlighting a practical route to low-latency, low-power perception for future autonomous space missions.
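The pipeline's input is a dense event-frame tensor rather than a raw event stream. As a minimal sketch of that preprocessing step, the snippet below accumulates one temporal window of events into a two-channel frame (one channel per polarity) with linearly decaying timestamp weights, loosely in the spirit of the LNES representation evaluated in the figures; the function name `events_to_frame` and the exact weighting scheme are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def events_to_frame(xs, ys, ts, ps, height, width, window):
    """Accumulate a temporal window of events into a 2-channel frame.

    xs, ys : int arrays of pixel coordinates
    ts     : float array of timestamps (seconds), sorted ascending
    ps     : int array of polarities in {0, 1}
    window : temporal length of the slice in seconds
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    t_end = ts[-1]
    # Weight each event by how recent it is within the window,
    # so newer events dominate the representation.
    weights = np.clip(1.0 - (t_end - ts) / window, 0.0, 1.0)
    # Keep the largest (most recent) weight per pixel and polarity;
    # np.maximum.at performs an unbuffered in-place scatter-max.
    np.maximum.at(frame, (ps, ys, xs), weights)
    return frame
```

A stack of such frames, sliced at the desired rate, would then feed the compact keypoint-regression network; the per-frame construction cost is a single pass over the events in the window.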


Figures (4)

  • Figure 1: Cumulative distribution of SPEED scores on the SPADES dataset. (a) shows the impact of quantization on various representations for Akida V1, while (b) compares the performance of quantized Akida V1 and V2 architectures.
  • Figure 2: Qualitative results on LNES. Rows show samples with high (a-b), median (c-d), and low (e-f) SPEED scores.
  • Figure 3: Comprehensive quantization and error analysis. Top two rows: performance impact of quantization across different representations. Bottom two rows: mean localization and orientation error binned by ground-truth object distance for full-precision (FP) and quantization-aware trained (QAT) models on the V1 and V2 architectures. Shaded regions denote $\pm$1 standard deviation.
  • Figure 4: Scatter distribution of per-sample localization error versus orientation error for full-precision (FP) and quantization-aware trained (QAT) models across Akida-V1 and Akida-V2 architectures. Each point represents one test sample, coloured by input representation.