MoireMix: A Formula-Based Data Augmentation for Improving Image Classification Robustness

Yuto Matsuo, Yoshihiro Fukuhara, Yuki M. Asano, Rintaro Yanagi, Hirokatsu Kataoka, Akio Nakamura

Abstract

Data augmentation is a key technique for improving the robustness of image classification models. However, many recent approaches rely on diffusion-based synthesis or complex feature mixing strategies, which introduce substantial computational overhead or require external datasets. In this work, we explore a different direction: procedural augmentation based on analytic interference patterns. Unlike conventional augmentation methods that rely on stochastic noise, feature mixing, or generative models, our approach exploits Moiré interference to generate structured perturbations spanning a wide range of spatial frequencies. We propose a lightweight augmentation method that procedurally generates Moiré textures on-the-fly using a closed-form mathematical formulation. The patterns are synthesized directly in memory with negligible computational cost (0.0026 seconds per image), mixed with training images during training, and immediately discarded, enabling a storage-free augmentation pipeline without external data. Extensive experiments with Vision Transformers demonstrate that the proposed method consistently improves robustness across multiple benchmarks, including ImageNet-C, ImageNet-R, and adversarial benchmarks, outperforming standard augmentation baselines and existing external-data-free augmentation approaches. These results suggest that analytic interference patterns provide a practical and efficient alternative to data-driven generative augmentation methods.
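The closed-form, on-the-fly generation described above can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the grating superposition, the parameter ranges, and the blending weight `alpha` are all assumptions chosen to show how an in-memory Moiré pattern can be produced and mixed into a training image with no stored assets.

```python
import numpy as np

def moire_pattern(h, w, f1=0.12, f2=0.13, theta1=0.0, theta2=0.2):
    """Closed-form Moiré interference: superpose two sinusoidal
    gratings with slightly different frequencies and orientations.
    Returns an (h, w) array rescaled to [0, 1]."""
    ys, xs = np.mgrid[0:h, 0:w]
    g1 = np.sin(2 * np.pi * f1 * (xs * np.cos(theta1) + ys * np.sin(theta1)))
    g2 = np.sin(2 * np.pi * f2 * (xs * np.cos(theta2) + ys * np.sin(theta2)))
    p = (g1 + g2) / 2.0           # interference of the two gratings
    return (p + 1.0) / 2.0        # map from [-1, 1] to [0, 1]

def moire_mix(image, alpha=0.3, rng=None):
    """Blend a freshly generated pattern into a float image in [0, 1].
    Parameter ranges below are illustrative, not the paper's."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    f1 = rng.uniform(0.05, 0.25)
    f2 = f1 * rng.uniform(1.01, 1.10)   # near-identical frequency -> Moiré
    t1, t2 = rng.uniform(0.0, np.pi, size=2)
    pat = moire_pattern(h, w, f1, f2, t1, t2)[..., None]
    return np.clip((1.0 - alpha) * image + alpha * pat, 0.0, 1.0)
```

Because the pattern is a pure function of a few sampled scalars, it can be regenerated per batch and discarded immediately, which is what makes the pipeline storage-free.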

Paper Structure

This paper contains 19 sections, 2 equations, 4 figures, 4 tables, and 1 algorithm.

Figures (4)

  • Figure 1: Overview of our proposed MoireMix data augmentation framework. The method utilizes mathematical formulas to procedurally generate complex interference patterns on-the-fly during training.
  • Figure 2: Learning dynamics of the ViT-Base model on ImageNet-1k across 100 training epochs. We compare Train/Validation Loss and Top-1 Accuracy between traditional spatial augmentations (dashed lines) and texture-blending approaches (solid lines). MoireMix (thick blue line) provides a strong regularization effect, characterized by slower initial convergence but leading to a stable and robust final model state.
  • Figure 3: Visual examples of procedurally generated texture images used as mixing sets in our on-the-fly augmentation pipeline. The examples shown are augmentation results obtained with a single addition operation: applying the blending step once introduces diverse structural perturbations without external dependencies, storage overhead, or computational bottlenecks during training.
  • Figure 4: Fourier sensitivity heatmaps visualizing robustness across spatial frequencies and orientations. Each pixel represents the classification error under specific Fourier basis noise. The center indicates low frequencies, while the periphery represents high frequencies. MoireMix exhibits a more generalized reduction in error across the spectrum compared to AFA-Mix and StripeMix, which show localized robustness likely due to evaluation bias.
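The Fourier sensitivity heat maps in Figure 4 follow a standard evaluation protocol in which each pixel of the map corresponds to one Fourier basis perturbation. A minimal sketch of generating one such perturbation is given below, assuming the common setup (a symmetric frequency pair with unit L2 norm, scaled by a perturbation budget `eps`); the exact norm and budget used in the paper are not stated here.

```python
import numpy as np

def fourier_basis_noise(h, w, u, v, eps=4.0):
    """Single Fourier-basis perturbation for sensitivity heat maps:
    a real-valued image whose spectrum has energy only at the
    symmetric frequency pair (u, v) and (-u, -v), scaled to L2
    norm eps. Adding it to an image and measuring the error rate
    yields one pixel of the heat map."""
    spec = np.zeros((h, w), dtype=complex)
    spec[u % h, v % w] = 1.0
    spec[-u % h, -v % w] = 1.0          # conjugate-symmetric pair -> real image
    basis = np.fft.ifft2(spec).real
    basis /= np.linalg.norm(basis) + 1e-12   # normalize to unit L2 norm
    return eps * basis
```

Sweeping `(u, v)` over the half-plane of frequencies and recording the classification error for each perturbed evaluation set produces the heat map, with low frequencies at the center and high frequencies at the periphery.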