CAT: Interpretable Concept-based Taylor Additive Models

Viet Duong, Qiong Wu, Zhengyi Zhou, Hongjue Zhao, Chenxiang Luo, Eric Zavesky, Huaxiu Yao, Huajie Shao

TL;DR

CAT tackles deep-model interpretability by reframing explanations around high-level concepts learned from groups of features. It combines concept encoders with a white-box TaylorNet predictor and uses Tucker decomposition to keep the model compact, enabling non-linear predictions to be explained via polynomial interactions among concepts: $h = f\circ g$, with $\boldsymbol{z}=\boldsymbol{g}(\boldsymbol{X})$. Across six benchmarks, CAT achieves competitive or superior accuracy with fewer parameters and provides interpretable explanations through concept contributions and their higher-order interactions, demonstrating practical scalability without heavy domain labeling. The work advances interpretable ML by delivering concept-based explanations that align with human reasoning while maintaining strong predictive performance.
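The composition $h = f\circ g$ can be made concrete with a short sketch. Below is a minimal PyTorch illustration, assuming a second-order Taylor head; all names (`ConceptEncoder`, `SecondOrderTaylorHead`, `CATSketch`) are hypothetical and not taken from the authors' implementation.

```python
# Hypothetical sketch of h = f ∘ g: concept encoders g map feature groups to
# scalar concepts z, and a white-box polynomial head f makes the prediction.
import torch
import torch.nn as nn

class ConceptEncoder(nn.Module):
    """Embeds one group of low-level features into a single scalar concept z_j."""
    def __init__(self, in_dim: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x_group: torch.Tensor) -> torch.Tensor:
        return self.net(x_group)  # (batch, 1)

class SecondOrderTaylorHead(nn.Module):
    """White-box predictor y = b + w^T z + z^T W z, a 2nd-order Taylor polynomial."""
    def __init__(self, n_concepts: int):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(1))
        self.linear = nn.Parameter(torch.zeros(n_concepts))
        self.quad = nn.Parameter(torch.zeros(n_concepts, n_concepts))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        first = z @ self.linear                                  # w^T z
        second = torch.einsum("bi,ij,bj->b", z, self.quad, z)    # z^T W z
        return self.bias + first + second

class CATSketch(nn.Module):
    """h = f ∘ g: concept encoders g followed by the TaylorNet-style head f."""
    def __init__(self, group_dims):
        super().__init__()
        self.encoders = nn.ModuleList([ConceptEncoder(d) for d in group_dims])
        self.head = SecondOrderTaylorHead(len(group_dims))

    def forward(self, x_groups):
        # z = g(X): one scalar concept per feature group
        z = torch.cat([enc(x) for enc, x in zip(self.encoders, x_groups)], dim=1)
        return self.head(z)  # y = f(z)
```

Because the head is an explicit polynomial in the concepts, every prediction decomposes into named first-order and interaction terms, which is what makes the model white-box.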

Abstract

As an emerging interpretable technique, Generalized Additive Models (GAMs) adopt neural networks to individually learn non-linear functions for each feature, which are then combined through a linear model for final predictions. Although GAMs can explain deep neural networks (DNNs) at the feature level, they require large numbers of model parameters and are prone to overfitting, making them hard to train and scale. Additionally, in real-world datasets with many features, the interpretability of feature-based explanations diminishes for humans. To tackle these issues, recent research has shifted towards concept-based interpretable methods. These approaches integrate concept learning as an intermediate step before making predictions, explaining the predictions in terms of human-understandable concepts. However, they require domain experts to extensively label concepts with relevant names and their ground-truth values. In response, we propose CAT, a novel interpretable Concept-bAsed Taylor additive model to simplify this process. CAT does not require domain experts to annotate concepts and their ground-truth values. Instead, it only requires users to categorize input features into broad groups, which can be easily accomplished through a quick metadata review. Specifically, CAT first embeds each group of input features into a one-dimensional high-level concept representation, and then feeds the concept representations into a new white-box Taylor Neural Network (TaylorNet). The TaylorNet aims to learn the non-linear relationship between the inputs and outputs using polynomials. Evaluation results across multiple benchmarks demonstrate that CAT can outperform or compete with the baselines while reducing the need for extensive model parameters. Importantly, it can explain model predictions through high-level concepts that humans can understand.
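The TL;DR names Tucker decomposition as the device that keeps TaylorNet compact. As a hedged illustration (my own sketch under that assumption, not the authors' code), here is how a third-order coefficient tensor can be factorized into a small core plus three factor matrices, shrinking the parameter count from $k^3$ to $r^3 + 3kr$:

```python
# Illustrative Tucker-factorized third-order Taylor term. Instead of a dense
# k x k x k coefficient tensor W, learn a small r x r x r core and three k x r
# factor matrices: W = core ×1 U1 ×2 U2 ×3 U3. All names are illustrative.
import torch
import torch.nn as nn

class TuckerThirdOrderTerm(nn.Module):
    """Computes sum_{ijl} W[i,j,l] z_i z_j z_l with W in Tucker form, i.e.
    sum_{abc} core[a,b,c] (z U1)_a (z U2)_b (z U3)_c."""
    def __init__(self, n_concepts: int, rank: int):
        super().__init__()
        self.core = nn.Parameter(torch.randn(rank, rank, rank) * 0.01)
        self.factors = nn.ParameterList(
            [nn.Parameter(torch.randn(n_concepts, rank) * 0.01) for _ in range(3)]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:  # z: (batch, k)
        p1, p2, p3 = [z @ U for U in self.factors]       # each: (batch, r)
        return torch.einsum("abc,na,nb,nc->n", self.core, p1, p2, p3)

term = TuckerThirdOrderTerm(n_concepts=10, rank=3)
print(term(torch.randn(4, 10)).shape)  # torch.Size([4])
# Dense third-order tensor: 10**3 = 1000 params; Tucker: 3**3 + 3*10*3 = 117.
```

The same factorization applies at each polynomial order, which is why the expansion stays tractable as the order grows.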

Paper Structure

This paper contains 22 sections, 13 equations, 7 figures, 7 tables.

Figures (7)

  • Figure 1: The overall framework of CAT. It consists of two main components: concept encoders and Taylor Neural Networks (TaylorNet). Each concept encoder embeds a group of low-level features into a one-dimensional high-level concept representation. The TaylorNet is a white-box model that uses the high-level concept representations to make predictions.
  • Figure 2: Concept contributions using the second-order CAT model for predicting listing price in the Airbnb dataset. Contributions are given by the standardized regression coefficients of the Taylor polynomial. We observe that the $Location$ and $Property$ descriptions influence the listing price the most. (A sketch of how such contributions can be read off a trained model follows this figure list.)
  • Figure 3: Shape functions for the first-order concepts learned by the second-order CAT model on the Airbnb dataset. The x-axis represents the values of the concepts, while the y-axis indicates the contributions of each value to the listing price. The blue line represents the shape function for a concept. Pink bars represent the normalized data density for 25 bins of concept values.
  • Figure 4: Concept contributions using the second-order CAT model for predicting gender in the CelebA dataset. Contributions are given by the standardized regression coefficients of the Taylor polynomial. We observe that the $Skin Tone$, $Hair Azimuth$, and $Hair Length$ concepts influence the gender prediction the most.
  • Figure 5: Shape functions for the first-order concepts learned by the second-order CAT model on the CelebA dataset. The x-axis represents the values of the concepts, while the y-axis indicates the contributions of each value to the prediction of a female person. The blue line represents the shape function for a concept. Pink bars represent the normalized data density for 25 bins of concept values.
  • ...and 2 more figures
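Figures 2 through 5 report concept contributions as standardized regression coefficients and first-order shape functions. As a hedged sketch building on the hypothetical `SecondOrderTaylorHead` above (again, not the authors' code), both quantities can be read directly off the polynomial's coefficients:

```python
# Hedged sketch: per-concept contributions, interaction terms, and first-order
# shape functions for a trained second-order head with parameters
# head.linear (w, shape (k,)) and head.quad (W, shape (k, k)).
import torch

@torch.no_grad()
def decompose(head, z):
    """Split predictions into additive per-term contributions."""
    w, W = head.linear, head.quad
    per_concept = w * z + torch.diagonal(W) * z**2   # w_j z_j + W_jj z_j^2
    inter = {}                                        # (W_jl + W_lj) z_j z_l, j < l
    k = z.shape[1]
    for j in range(k):
        for l in range(j + 1, k):
            inter[(j, l)] = (W[j, l] + W[l, j]) * z[:, j] * z[:, l]
    return per_concept, inter

@torch.no_grad()
def standardized_importance(per_concept, y):
    """Scale each term's spread by the spread of the predictions, analogous
    in magnitude to standardized regression coefficients."""
    return per_concept.std(dim=0) / y.std()

@torch.no_grad()
def shape_function(head, j, grid):
    """First-order shape for concept j over a grid of concept values,
    as plotted in Figures 3 and 5: w_j z + W_jj z^2."""
    return head.linear[j] * grid + head.quad[j, j] * grid**2
```

Plotting `shape_function` over a grid of concept values, with a histogram of the observed concept values underneath, reproduces the layout the shape-function figures describe.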