Detection of Adversarial Attacks in Robotic Perception

Ziad Sharawy, Mohammad Nakshbandi, Sorin Mihai Grigorescu

Abstract

Deep Neural Networks (DNNs) achieve strong performance in semantic segmentation for robotic perception but remain vulnerable to adversarial attacks, threatening safety-critical applications. While robustness has been studied for image classification, semantic segmentation in robotic contexts requires specialized architectures and detection strategies.
Paper Structure

This paper contains 8 sections, 1 equation, 5 figures, 3 tables.

Figures (5)

  • Figure 1: Classifier predictions on clean and adversarial images. (a) ResNet‑18 shows unstable predictions with frequent misclassifications. (b) ResNet‑50 demonstrates robust, consistent performance against adversarial perturbations.
  • Figure 2: Effect of increasing FGSM attack strength ($\epsilon$) on segmentation performance. As $\epsilon$ rises, accuracy and mIoU drop, and several classes are completely lost.
  • Figure 3: ResNet-18 validation performance across epochs 1--10, including accuracy, precision, recall, F1-score, and loss.
  • Figure 4: ResNet-50 validation performance across epochs 1--10, including accuracy, precision, recall, F1-score, and loss.
  • Figure 5: Visualization of FGSM adversarial examples [goodfellow2015explaining] on a DeepLabV3+ [chen2018encoder] model with a ResNet-18 backbone [he2016deep] pretrained on ImageNet [deng2009imagenet] and evaluated on Cityscapes [cordts2016cityscapes], showing clean and adversarial inputs with increasing $\epsilon$; implemented in PyTorch [paszke2019pytorch].
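Figures 2 and 5 describe the single-step FGSM attack, which perturbs an input in the direction of the sign of the loss gradient, scaled by the attack strength $\epsilon$. The sketch below illustrates the update rule $x_{adv} = x + \epsilon \cdot \mathrm{sign}(\nabla_x \mathcal{L})$ on a hypothetical logistic-regression model (all names and values are illustrative, not from the paper, which applies FGSM to a DeepLabV3+ segmentation network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """One FGSM step: x_adv = x + eps * sign(grad_x loss).

    For binary cross-entropy on a logistic model p = sigmoid(w . x),
    the gradient of the loss w.r.t. the input is (p - y) * w.
    """
    p = sigmoid(np.dot(w, x))        # model confidence for class 1
    grad_x = (p - y) * w             # analytic input gradient
    return x + eps * np.sign(grad_x) # move each pixel/feature by +/- eps

# Hypothetical toy weights and input (for illustration only).
w = np.array([2.0, -1.0, 0.5])
x = np.array([0.3, 0.2, 0.1])
y = 1.0
x_adv = fgsm_perturb(x, y, w, eps=0.1)
# Confidence in the true class drops after the perturbation:
# sigmoid(w @ x_adv) < sigmoid(w @ x)
```

Because every component moves by exactly $\pm\epsilon$, the perturbation is bounded in the $\ell_\infty$ norm, which is why increasing $\epsilon$ in Figure 2 directly controls attack strength and degrades accuracy and mIoU.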