Fair Differentiable Neural Network Architecture Search for Long-Tailed Data with Self-Supervised Learning

Jiaming Yan

TL;DR

This paper explores SSF-NAS, which integrates self-supervised learning with fair differentiable NAS so that NAS achieves better performance on long-tailed datasets, and conducts a series of experiments on the CIFAR10-LT dataset for performance evaluation.

Abstract

Recent advancements in artificial intelligence (AI) have positioned deep learning (DL) as a pivotal technology in fields like computer vision, data mining, and natural language processing. A critical factor in DL performance is the choice of neural network architecture. Traditional predefined architectures often fail to adapt to different data distributions, making it challenging to achieve optimal performance. Neural architecture search (NAS) offers a solution by automatically designing architectures tailored to specific datasets. However, the effectiveness of NAS diminishes on long-tailed datasets, where a few classes have abundant samples and many have few, leading to biased models. In this paper, we explore how to improve the search and training performance of NAS on long-tailed datasets. Specifically, we first discuss related work on NAS and deep learning methods for long-tailed data. We then focus on an existing work, SSF-NAS, which integrates self-supervised learning and fair differentiable NAS to help NAS achieve better performance on long-tailed datasets. A detailed description of the fundamental techniques underlying SSF-NAS is provided, including DARTS, FairDARTS, and Barlow Twins. Finally, we conduct a series of experiments on the CIFAR10-LT dataset for performance evaluation, where the results align with our expectations.
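
As background for the experimental setup, the sketch below illustrates how a long-tailed variant of CIFAR-10 such as CIFAR10-LT is commonly constructed: each class is subsampled so that class sizes decay exponentially from head to tail. This is a minimal sketch under assumed conventions (the standard exponential-imbalance recipe with an imbalance factor of 100); the paper's exact protocol is not specified in this abstract, and `long_tailed_indices` is a hypothetical helper.

```python
import numpy as np
from torchvision.datasets import CIFAR10

def long_tailed_indices(targets, num_classes=10, imbalance_factor=100, seed=0):
    """Subsample per-class indices so class sizes decay exponentially.

    Class 0 keeps all of its samples; class (num_classes - 1) keeps only
    1 / imbalance_factor of them -- the usual CIFAR10-LT recipe (assumed
    here; the paper's exact protocol may differ).
    """
    rng = np.random.default_rng(seed)
    targets = np.asarray(targets)
    n_max = np.bincount(targets).max()
    keep = []
    for c in range(num_classes):
        # Exponential decay of the per-class sample count.
        n_c = int(n_max * imbalance_factor ** (-c / (num_classes - 1)))
        cls_idx = np.where(targets == c)[0]
        keep.extend(rng.choice(cls_idx, size=n_c, replace=False))
    return np.array(keep)

train = CIFAR10(root="./data", train=True, download=True)
idx = long_tailed_indices(train.targets)
train.data = train.data[idx]
train.targets = [train.targets[i] for i in idx]
```

With an imbalance factor of 100, the head class keeps all 5,000 training images while the tail class keeps only 50; this is the skew that biases conventionally trained models toward head classes and motivates SSF-NAS.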

Paper Structure

This paper contains 17 sections, 9 equations, 9 figures, and 1 table.

Figures (9)

  • Figure 1: The Label Distribution of a Long-Tailed Dataset [zhang2023deep].
  • Figure 2: The Process of NAS.
  • Figure 3: The Illustration of Supernet in DARTS [he2021fednas].
  • Figure 4: The Process of Searching a Cell's Architecture in DARTS [liu2018darts].
  • Figure 5: The Depiction of Barlow Twins.
  • ...and 4 more figures