Table of Contents

Predefined Prototypes for Intra-Class Separation and Disentanglement

Antonio Almudévar, Théo Mariotte, Alfonso Ortega, Marie Tahon, Luis Vicente, Antonio Miguel, Eduardo Lleida

TL;DR

This work proposes to predefine prototypes following human-specified criteria, which simplifies the training pipeline and brings several advantages. It explores two of these advantages: increasing the inter-class separability of embeddings and disentangling embeddings with respect to different factors of variation.

Abstract

Prototypical Learning is based on the idea that there is a point (which we call a prototype) around which the embeddings of a class are clustered. It has shown promising results in scenarios with little labeled data and in the design of explainable models. Typically, prototypes are either defined as the average of the embeddings of a class or are designed to be trainable. In this work, we propose to predefine prototypes following human-specified criteria, which simplifies the training pipeline and brings several advantages. Specifically, we explore two of these advantages: increasing the inter-class separability of embeddings and disentangling embeddings with respect to different factors of variation, which can translate into the possibility of having explainable predictions. Finally, we propose different experiments that help to understand our proposal and empirically demonstrate the mentioned advantages.
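The core idea can be illustrated with a minimal sketch. Note that this is not the authors' exact method: the choice of one-hot (mutually orthogonal) prototypes and the nearest-prototype classification rule below are illustrative assumptions, standing in for whatever human-specified criteria one might use to predefine the prototypes.

```python
import numpy as np

num_classes, dim = 4, 8
rng = np.random.default_rng(0)

# Illustrative predefined prototypes: the first `num_classes` rows of the
# identity matrix, i.e. one orthogonal prototype per class. Because they
# are fixed in advance, no prototype averaging or training is needed.
prototypes = np.eye(dim)[:num_classes]  # shape (num_classes, dim)

def classify(z):
    """Assign an embedding to the class of its nearest prototype."""
    dists = np.linalg.norm(prototypes - z, axis=1)
    return int(np.argmin(dists))

# An encoder trained to pull embeddings toward their class prototype
# would produce points like this one, which lands near prototype 2.
z = prototypes[2] + 0.1 * rng.standard_normal(dim)
print(classify(z))  # → 2
```

Orthogonal prototypes make the inter-class separation explicit by construction; the training objective then only has to move embeddings toward their assigned prototype.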

Paper Structure

This paper contains 15 sections, 3 equations, 2 figures, 1 table, 1 algorithm.

Figures (2)

  • Figure 1: Joint probabilities of each emotion and acoustic parameter (by levels) in the training dataset.
  • Figure 2: Relevance matrices $\Gamma^{(i)}$ for different embeddings $z^{(i)}$ and their corresponding predictions $\tilde{y}^{(i)}$.