Mitigating Noisy Supervision Using Synthetic Samples with Soft Labels

Yangdi Lu, Wenbo He

TL;DR

This paper proposes a framework that trains the model on new synthetic samples to mitigate the impact of noisy labels. The synthetic samples are created by a mixing strategy that aggregates each original sample with its top-K nearest neighbours, where the weights are calculated using a mixture model learned from the per-sample loss distribution.

Abstract

Noisy labels are ubiquitous in real-world datasets, especially in large-scale ones derived from crowdsourcing and web search. Training deep neural networks on noisy datasets is challenging because the networks are prone to overfitting the noisy labels, resulting in poor generalization performance. During an early learning phase, deep neural networks have been observed to fit the clean samples before memorizing the mislabeled ones. In this paper, we dig deeper into the representation distributions in the early learning phase and find that, regardless of their noisy labels, learned representations of images from the same category still congregate together. Inspired by this observation, we propose a framework that trains the model with new synthetic samples to mitigate the impact of noisy labels. Specifically, we propose a mixing strategy that creates the synthetic samples by aggregating original samples with their top-K nearest neighbours, wherein the weights are calculated using a mixture model learned from the per-sample loss distribution. To enhance performance in the presence of extreme label noise, we estimate soft targets by gradually correcting the noisy labels. Furthermore, we demonstrate that the estimated soft targets yield a more accurate approximation to the ground-truth labels and that the proposed method produces learned representations of superior quality, with more separated and clearly bounded clusters. Extensive experiments on two benchmarks (CIFAR-10 and CIFAR-100) and two large-scale real-world datasets (Clothing1M and WebVision) demonstrate that our approach outperforms state-of-the-art methods and confirm the robustness of the learned representations.
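
As a concrete illustration of the mixing strategy described above, here is a minimal NumPy sketch, not the authors' implementation: each sample is averaged with its top-K nearest neighbours in representation space, and the weight given to the original sample is its estimated probability of being clean (how that probability is obtained is sketched after the figure list). The function name, the exact weighting scheme, and the brute-force neighbour search are illustrative assumptions; per the paper, neighbours are actually found with an approximate HNSW index.

```python
# Illustrative sketch of neighbour-based sample mixing (not the paper's code).
import numpy as np

def mix_with_neighbours(x, reps, clean_prob, k=5):
    """Aggregate each sample with its top-K nearest neighbours.

    x          : (N, D) raw inputs (e.g. flattened images)
    reps       : (N, R) penultimate-layer representations
    clean_prob : (N,) estimated probability that each label is clean
    """
    # Pairwise distances in representation space (brute force for clarity;
    # the paper uses an approximate HNSW index instead).
    d = np.linalg.norm(reps[:, None, :] - reps[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude each sample itself
    nn_idx = np.argsort(d, axis=1)[:, :k]       # (N, k) neighbour indices

    # Weight for the original sample; its K neighbours share the remainder
    # equally (an assumed scheme, chosen only to make the idea concrete).
    w_self = clean_prob[:, None]                # (N, 1)
    w_nbr = (1.0 - w_self) / k
    return w_self * x + w_nbr * x[nn_idx].sum(axis=1)
```

Intuitively, a likely-clean sample stays close to itself, while a likely-mislabeled sample is pulled toward its neighbours, which (per the paper's observation) tend to share its true category.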

Paper Structure

This paper contains 22 sections, 12 equations, 10 figures, and 4 tables.

Figures (10)

  • Figure 1: Synthetic samples are generated by mixing each sample with its nearest neighbours, based on the learned representations.
  • Figure 2: We train ResNet-34 (He et al., 2016) on the CIFAR-10 dataset with 60% symmetric label noise using CE loss. (a) Train and test accuracy vs. the number of training epochs. (b) The gradient coefficient $\bm{p}_{i}-\bm{\hat{y}}_{i}$ for clean and mislabeled samples vs. the number of training epochs.
  • Figure 3: The proposed method MixNN consists of three parts. Part 1: based on the learned representations from the penultimate layer, we compute each training sample's approximate $K$-nearest neighbours using a Hierarchical Navigable Small World (HNSW) graph. Part 2: we aggregate the original sample with its $K$-nearest neighbours using dynamic weights estimated from a Gaussian Mixture Model fitted to the per-sample loss distribution. Part 3: we gradually correct the noisy labels through an exponential moving average strategy. (A minimal sketch of parts 2 and 3 follows this list.)
  • Figure 4: Trained on CIFAR-10 with 40% and 80% label noise for 10 epochs with cross-entropy loss. Plots (a) and (c): the ground-truth normalized loss distribution. Plots (b) and (d): the pdf of the mixture model and its two components after fitting a two-component GMM to the per-sample loss distribution.
  • Figure 5: MixNN pseudocode.
  • ...and 5 more figures
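
Figures 3 and 4 describe two supporting steps: fitting a two-component Gaussian Mixture Model to the per-sample loss distribution (the low-mean component models clean samples, exploiting the early-learning phenomenon) and gradually correcting noisy labels with an exponential moving average. Below is a hedged sketch of both steps; the function names and the momentum value are assumptions for illustration, not values taken from the paper.

```python
# Illustrative sketch of GMM-based clean-probability estimation and EMA label
# correction (assumed names/values; not the authors' implementation).
import numpy as np
from sklearn.mixture import GaussianMixture

def clean_probability(losses):
    """Posterior of the low-loss GMM component for each sample."""
    losses = losses.reshape(-1, 1)
    # Normalize losses to [0, 1], as in the Fig. 4 plots, before fitting.
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2).fit(losses)
    post = gmm.predict_proba(losses)             # (N, 2) component posteriors
    clean_comp = np.argmin(gmm.means_.ravel())   # low-mean = clean component
    return post[:, clean_comp]

def update_soft_targets(soft_targets, preds, momentum=0.9):
    """Gradually correct labels via an exponential moving average.

    soft_targets : (N, C) current soft labels (initialized from noisy one-hots)
    preds        : (N, C) current model predictions (softmax outputs)
    momentum     : assumed EMA coefficient, for illustration only
    """
    return momentum * soft_targets + (1.0 - momentum) * preds
```

The posteriors from `clean_probability` can serve as the mixing weights in the neighbour-aggregation sketch above, and repeated calls to `update_soft_targets` drift the targets away from the noisy one-hot labels toward the model's (increasingly reliable) predictions.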