
Adversarial Robustness Analysis of Cloud-Assisted Autonomous Driving Systems

Maher Al Islam, Amr S. El-Wakeel

Abstract

Autonomous vehicles increasingly rely on deep learning-based perception and control, which impose substantial computational demands. Cloud-assisted architectures offload these functions to remote servers, enabling enhanced perception and coordinated decision-making through the Internet of Vehicles (IoV). However, this paradigm introduces cross-layer vulnerabilities, where adversarial manipulation of perception models and network impairments in the vehicle-cloud link can jointly undermine safety-critical autonomy. This paper presents a hardware-in-the-loop IoV testbed that integrates real-time perception, control, and communication to evaluate such vulnerabilities in cloud-assisted autonomous driving. A YOLOv8-based object detector deployed on the cloud is subjected to white-box adversarial attacks using the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), while network adversaries induce delay and packet loss in the vehicle-cloud loop. Results show that adversarial perturbations significantly degrade perception performance, with PGD reducing detection precision and recall from 0.73 and 0.68 in the clean baseline to 0.22 and 0.15 at $\epsilon = 0.04$. Network delays of 150-250 ms, corresponding to transient losses of approximately 3-4 frames, and packet loss rates of 0.5-5% further destabilize closed-loop control, leading to delayed actuation and rule violations. These findings highlight the need for cross-layer resilience in cloud-assisted autonomous driving systems.
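The two white-box attacks named above follow the standard formulations: FGSM takes a single signed-gradient step of size $\epsilon$, and PGD iterates smaller steps with projection back into the $\epsilon$-ball. A minimal NumPy sketch, using a hypothetical logistic scorer in place of the YOLOv8 detection loss (the toy model, function names, and step sizes are illustrative, not from the paper):

```python
import numpy as np

def fgsm(x, grad, eps):
    """One-step FGSM: x_adv = clip(x + eps * sign(grad_x L))."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def pgd(x0, grad_fn, eps, alpha=0.01, steps=10):
    """Iterated FGSM with projection onto the L-inf ball of radius eps around x0."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))
        x = np.clip(x, x0 - eps, x0 + eps)  # project back into the eps-ball
        x = np.clip(x, 0.0, 1.0)            # keep pixels in a valid range
    return x

# Toy stand-in for the detector's loss gradient: a logistic scorer w.x with
# label y (in the paper the gradient would come from the YOLOv8 loss via backprop).
rng = np.random.default_rng(0)
x, w, y = rng.random(16), rng.standard_normal(16), 1.0

def grad_fn(x):
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return (p - y) * w  # d(binary cross-entropy)/dx for the toy scorer

for eps in (0.01, 0.02, 0.04):  # the epsilon values evaluated in the paper
    x_f = fgsm(x, grad_fn(x), eps)
    x_p = pgd(x, grad_fn, eps, alpha=eps / 4, steps=10)
    assert np.max(np.abs(x_f - x)) <= eps + 1e-12
    assert np.max(np.abs(x_p - x)) <= eps + 1e-12
```

Both attacks bound the perturbation in the $L_\infty$ norm, which is why detection quality in the paper is reported as a function of $\epsilon$: larger budgets permit stronger, but still visually subtle, perturbations.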

Paper Structure

This paper contains 16 sections, 3 equations, 6 figures, and 1 table.

Figures (6)

  • Figure 1: Architecture of the cloud-assisted autonomous driving testbed, illustrating the interaction between the vehicle, cloud vision–control loop, and adversarial agents.
  • Figure 2: Wireshark capture showing TCP communication between the vehicle (10.70.111.11) and the cloud server (10.70.111.40) on port 5001. The highlighted frame reveals JPEG file transmission with identifiable magic bytes, confirming image payload exchange in the vehicle–cloud loop.
  • Figure 3: The clean image serves as the baseline, followed by FGSM and PGD attacks with increasing perturbation magnitudes ($\epsilon$ = 0.01, 0.02, 0.04). Adversarial perturbations lead to missed vehicle and traffic light detections as well as false positives, demonstrating progressive degradation in detection reliability.
  • Figure 4: Precision–Recall comparison under clean and adversarial conditions (FGSM & PGD) across $\epsilon = \{0.01, 0.02, 0.04\}$.
  • Figure 5: Confusion matrices for Clean, FGSM and PGD scenarios under $\epsilon = 0.02$.
  • ...and 1 more figure