ROAST: Risk-aware Outlier-exposure for Adversarial Selective Training of Anomaly Detectors Against Evasion Attacks

Mohammed Elnawawy, Gargi Mitra, Shahrear Iqbal, Karthik Pattabiraman

Abstract

Safety-critical domains like healthcare rely on deep neural networks (DNNs) for prediction, yet DNNs remain vulnerable to evasion attacks. Anomaly detectors (ADs) are widely used to protect DNNs, but conventional ADs are trained indiscriminately on benign data from all patients, overlooking physiological differences that introduce noise, degrade robustness, and reduce recall. In this paper, we propose ROAST, a novel risk-aware outlier exposure selective training framework that improves AD recall without sacrificing precision. ROAST identifies patients who are less vulnerable to attack and focuses training on their cleaner, more reliable data, thereby reducing false negatives and improving recall. To preserve precision, the framework applies outlier exposure by injecting adversarial samples into the training set of the less vulnerable patients, avoiding noisy data from other patients. Experiments show that ROAST increases recall by 16.2% while reducing training time by 88.3% on average compared to indiscriminate training, with minimal impact on precision.
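The abstract's pipeline (select less vulnerable patients, train the AD on their benign data, and calibrate via outlier exposure with injected adversarial samples) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-patient vulnerability scores, the `vuln_threshold` cutoff, and the 1-D kNN distance score are all hypothetical stand-ins chosen for brevity.

```python
def knn_score(x, train, k=3):
    """Mean distance to the k nearest training points (1-D readings here)."""
    d = sorted(abs(x - t) for t in train)
    return sum(d[:k]) / k

def roast_train(patients, vulnerability, adversarial, vuln_threshold=0.5, k=3):
    """Hypothetical ROAST-style selective training sketch:
    1) keep only patients deemed less vulnerable to attack,
    2) pool their benign samples as the kNN detector's training set,
    3) apply outlier exposure: inject adversarial samples and place the
       decision threshold between benign and adversarial score ranges."""
    train = [x for pid, xs in patients.items()
             if vulnerability[pid] < vuln_threshold for x in xs]
    benign_scores = [knn_score(x, train, k) for x in train]
    adv_scores = [knn_score(x, train, k) for x in adversarial]
    # Threshold midway between the worst benign score and the best
    # (lowest) adversarial score; assumes the two ranges are separable.
    thresh = (max(benign_scores) + min(adv_scores)) / 2
    return train, thresh

def detect(x, train, thresh, k=3):
    """Flag a sample as anomalous when its kNN score exceeds the threshold."""
    return knn_score(x, train, k) > thresh
```

In this sketch, restricting the training pool to low-vulnerability patients is what targets recall (less noisy benign data means fewer false negatives), while the adversarially calibrated threshold is what protects precision.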

Paper Structure

This paper contains 25 sections, 7 equations, 10 figures, 5 tables.

Figures (10)

  • Figure 1: System architecture of the patient monitoring system, comprising a data acquisition device, smartphone, actuator, predictive DNN, and anomaly detector.
  • Figure 2: kNN anomaly detection on glucose traces from A_5 and A_2 (OhioT1DM), showing a higher FN rate for A_2 under indiscriminate training.
  • Figure 3: The five steps of our proposed ROAST technique.
  • Figure 4: Pedagogical Example of Anomaly Detector Training Strategies. We propose strategy (c) for ROAST.
  • Figure 5: Percentage of normal glucose misclassified as hyperglycemic in OhioT1DM under fasting and postprandial attacks.
  • ...and 5 more figures