
Learning Compact Terrain-Context Representations for Feasibility-Aware Offline Reinforcement Learning in UAV Relaying Networks

Joseanne Viana, Viswak R Balaji, Boris Galkin, Lester Ho, Holger Claussen

Abstract

Offline reinforcement learning (RL) is an attractive tool for unmanned aerial vehicle (UAV) systems, where online exploration is costly and raises safety concerns. In terrain-aware UAV relaying, agents may observe high-dimensional inputs such as terrain and land-cover maps that describe the propagation environment but complicate offline learning from fixed datasets. This paper investigates the impact of compact state representations on offline RL for UAV relaying. End-to-end service is jointly constrained by UAV--user access links and a base-station--to--UAV backhaul link, yielding feasibility limits that are driven by user mobility and independent of UAV control. To distinguish these feasibility limits from control-induced sub-optimality, a candidate-set feasibility upper bound (CS-FUB) is introduced, which estimates the maximum achievable user coverage over a restricted set of UAV placements. To address high-dimensional terrain context, map-like observations are compressed into low-dimensional latent representations using a variational autoencoder (VAE), and policies are trained via Conservative Q-Learning (CQL). Simulation results show that training CQL directly on raw high-dimensional terrain-context states leads to slow convergence and large feasibility gaps. In contrast, VAE-encoded representations improve learning stability, enable earlier convergence to feasible relay configurations, and reduce sub-optimality relative to physical limits. Comparisons with autoencoder and linear compression baselines further demonstrate the benefit of structured representation learning for effective offline RL in terrain-aware UAV systems.
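To make the CS-FUB concrete, the following is a minimal illustrative sketch of how such a bound could be computed at one time step: evaluate the number of servable users from each candidate UAV placement, discard placements whose backhaul link does not close, and take the maximum. The function names (`cs_fub`, `serve_fn`, `backhaul_fn`) and the distance-threshold coverage model are hypothetical simplifications for illustration, not the paper's actual link model.

```python
import numpy as np

def cs_fub(candidates, users, bs, serve_fn, backhaul_fn):
    """Candidate-set feasibility upper bound (illustrative sketch).

    candidates  : (P, 3) array of candidate UAV placements
    users       : (U, 3) array of user positions at one time step
    bs          : (3,) base-station position
    serve_fn(p, users)   -> boolean mask of users served from placement p
    backhaul_fn(bs, p)   -> True if the BS-to-UAV backhaul link closes
    Returns the maximum number of simultaneously servable users
    over the candidate set, i.e., the feasibility upper bound.
    """
    best = 0
    for p in candidates:
        if not backhaul_fn(bs, p):
            continue  # placement infeasible: no backhaul connectivity
        best = max(best, int(serve_fn(p, users).sum()))
    return best

# Toy example: coverage modeled as a simple 3-D distance threshold.
candidates = np.array([[0.0, 0.0, 50.0], [100.0, 0.0, 50.0]])
users = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [100.0, 0.0, 0.0]])
serve = lambda p, us: np.linalg.norm(us - p, axis=1) < 60.0
backhaul = lambda bs, p: True  # assume backhaul always feasible here
bound = cs_fub(candidates, users, np.zeros(3), serve, backhaul)
```

Because the bound is taken over a restricted candidate set and ignores control dynamics, it isolates geometry- and propagation-induced service limits, against which a trained policy's achieved coverage can then be normalized.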


Paper Structure

This paper contains 23 sections, 7 equations, 4 figures, 1 table.

Figures (4)

  • Figure 1: CS-FUB illustration. Candidate UAV placements $\mathcal{P}_t$ and optimal location $p^\star$ are shown. Served and unserved users are indicated in green and red, respectively. The dashed blue contour denotes the coverage boundary (e.g., $-90$ dBm), highlighting geometry- and propagation-induced limits.
  • Figure 2: Comparison of average served users, peak served users, and discounted (CS-FUB-normalized) performance for latent and raw-state policies.
  • Figure 3: CS-FUB feasibility breakdown showing the fraction of time steps where full, partial, or no user service is achievable under the candidate-set feasibility upper bound.
  • Figure 4: CDF of time steps required to reach a feasible target service level (when CS-FUB indicates feasibility). Latent policies reach feasible service significantly faster.