Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales

Bo Pang, Lillian Lee

TL;DR

The paper tackles rating inference on multi-point sentiment scales by comparing three algorithmic families: standard one-vs-all SVMs, regression-based approaches, and a novel metric-labeling framework that explicitly leverages label and item similarities. It introduces a PSP-based item similarity to better capture alignment between text and rating, and demonstrates that metric labeling, especially with PSP, yields significant accuracy gains over traditional multi-class and regression methods on movie-review data. The work highlights the importance of matching similarity metrics to label structure and points to transduction and ordinal extensions as promising future directions. Overall, it provides a practical approach to extracting fine-grained sentiment scores from text, with potential applicability to other scale-based text classification tasks.

Abstract

We address the rating-inference problem, wherein rather than simply decide whether a review is "thumbs up" or "thumbs down", as in previous sentiment analysis work, one must determine an author's evaluation with respect to a multi-point scale (e.g., one to five "stars"). This task represents an interesting twist on standard multi-class text categorization because there are several different degrees of similarity between class labels; for example, "three stars" is intuitively closer to "four stars" than to "one star". We first evaluate human performance at the task. Then, we apply a meta-algorithm, based on a metric labeling formulation of the problem, that alters a given n-ary classifier's output in an explicit attempt to ensure that similar items receive similar labels. We show that the meta-algorithm can provide significant improvements over both multi-class and regression versions of SVMs when we employ a novel similarity measure appropriate to the problem.
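The metric-labeling idea in the abstract can be illustrated with a minimal sketch: each test item starts from the labels preferred by an initial classifier, then is reassigned the label that trades off its own preference cost against disagreement with the labels of similar items. This is an illustrative simplification, not the paper's implementation — the function and variable names (`pi`, `sim`, `neighbors`, `alpha`) and the toy numbers are assumptions, and the paper optimizes a global metric-labeling objective rather than making a single greedy pass.

```python
# Sketch of a metric-labeling-style relabeling step.
# pi[x][l]  : initial cost of assigning label l to item x (e.g. from an SVM).
# sim[x][y] : similarity between items x and y (the paper proposes a
#             PSP-based similarity; any nonnegative similarity works here).
# alpha     : trade-off between preference cost and label smoothness.

LABELS = [1, 2, 3, 4, 5]  # e.g. one to five "stars"

def relabel(items, pi, sim, neighbors, alpha=1.0):
    """One greedy pass: each item takes the label minimizing its own
    preference cost plus a similarity-weighted penalty for differing
    from its neighbors' labels (label distance = |l - l'|)."""
    # Start from the preference-minimizing labels.
    labels = {x: min(LABELS, key=lambda l: pi[x][l]) for x in items}
    for x in items:
        def cost(l):
            smooth = sum(abs(l - labels[y]) * sim[x][y] for y in neighbors[x])
            return pi[x][l] + alpha * smooth
        labels[x] = min(LABELS, key=cost)
    return labels

# Toy example: items a and b strongly prefer 5 stars; item c weakly
# prefers 1 star but resembles a and b, so smoothing nudges it upward.
items = ["a", "b", "c"]
pi = {
    "a": {1: 4, 2: 3, 3: 2, 4: 1, 5: 0},
    "b": {1: 4, 2: 3, 3: 2, 4: 1, 5: 0},
    "c": {1: 0, 2: 0.4, 3: 1.0, 4: 2.0, 5: 3.5},
}
sim = {"c": {"a": 0.3, "b": 0.3}}
neighbors = {"a": [], "b": [], "c": ["a", "b"]}

result = relabel(items, pi, sim, neighbors, alpha=1.0)
print(result)  # c moves from 1 star toward its neighbors' rating
```

Without the smoothing term, `c` would keep its classifier-preferred label of 1; with it, the similarity to highly rated neighbors pulls `c` to 2 — the "similar items receive similar labels" effect the abstract describes.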


Paper Structure

This paper contains 15 sections, 1 equation, 2 figures, 2 tables.

Figures (2)

  • Figure 1: Average and standard deviation of PSP for reviews expressing different ratings.
  • Figure 2: Results for main experimental comparisons.