
Beyond Cosine Similarity: Zero-Initialized Residual Complex Projection for Aspect-Based Sentiment Analysis

Yijin Wang, Fandi Sun

Abstract

Aspect-Based Sentiment Analysis (ABSA) is fundamentally challenged by representation entanglement, where aspect semantics and sentiment polarities are often conflated in real-valued embedding spaces. Furthermore, standard contrastive learning suffers from false-negative collisions, severely degrading performance on high-frequency aspects. In this paper, we propose a novel framework featuring a Zero-Initialized Residual Complex Projection (ZRCP) and an Anti-collision Masked Angle Loss, inspired by quantum projection and entanglement ideas. Our approach projects textual features into a complex semantic space, using the phase to disentangle sentiment polarities while allowing the amplitude to encode the semantic intensity and lexical richness of subjective descriptions. To tackle the collision bottleneck, we introduce an anti-collision mask that preserves intra-polarity aspect cohesion while expanding the inter-polarity discriminative margin by over 50%. Experimental results demonstrate that our framework achieves a state-of-the-art Macro-F1 score of 0.8851. Deep geometric analyses further reveal that explicitly penalizing the complex amplitude catastrophically over-regularizes subjective representations, showing that our unconstrained-amplitude, phase-driven objective is crucial for robust, fine-grained sentiment disentanglement.
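
The zero-initialized residual design described above can be illustrated with a minimal sketch. This is a hypothetical numpy implementation (the function name `zrcp` and the weight matrices `W_re`, `W_im` are illustrative, not the paper's code): because the residual weights start at zero, the complex embedding initially reduces exactly to the backbone feature (real part = input, imaginary part = 0), and training gradually moves polarity information into the phase and intensity into the amplitude.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)

# Zero-initialized residual weights: at initialization the projection
# is the identity on the real axis, so no pre-trained semantics are lost.
W_re = np.zeros((d, d))  # residual weights for the real part
W_im = np.zeros((d, d))  # weights producing the imaginary part

def zrcp(x, W_re, W_im):
    """Map a real feature x to a complex embedding z = (x + x @ W_re) + i (x @ W_im)."""
    real = x + x @ W_re   # residual connection on the real part
    imag = x @ W_im       # imaginary part is exactly zero at init
    return real + 1j * imag

x = rng.normal(size=d)
z0 = zrcp(x, W_re, W_im)
# At initialization, the complex embedding equals the backbone feature.
assert np.allclose(z0.real, x) and np.allclose(z0.imag, 0.0)

# After training perturbs the weights, phase and amplitude carry signal:
W_im = W_im + 0.1 * rng.normal(size=(d, d))
z = zrcp(x, W_re, W_im)
phase = np.angle(z)      # per-dimension phase (polarity signal)
amplitude = np.abs(z)    # per-dimension amplitude (intensity signal)
```

The zero initialization guarantees a safe starting point: the contrastive objective only has to learn the *residual* rotation into the imaginary subspace rather than re-learning the backbone representation from scratch.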

Paper Structure

This paper contains 25 sections, 6 equations, 5 figures, 3 tables.

Figures (5)

  • Figure 1: The overall architecture of our Phase-Driven Disentanglement framework. Textual inputs are encoded by a pre-trained backbone and projected into a complex space via the ZRCP module. The framework is optimized using a joint objective featuring an Anti-collision Masked Angle Loss to elegantly decouple objective aspects and subjective polarities.
  • Figure 2: Gradient Analysis of Fine-grained Contrastive Learning. (a) Modern pre-trained language models inherently initialize aspect-matched sentences into highly dense clusters ($\theta \to 0$). In this high-density zone, the cosine gradient ($|\sin\theta|$) catastrophically vanishes. (b) Our phase-driven angle objective maintains a constant gradient, providing the necessary driving force to disentangle hard negatives regardless of their initial proximity.
  • Figure 3: Model performance relative to RoBERTa across 18 fine-grained aspects. Bars represent the Macro-F1 score difference of each model (CoSENT, SimCSE, AnglE, Ours) compared to the RoBERTa baseline. Symbols above bars indicate the best-performing model for each aspect: $\bigstar$ for Ours, $\blacktriangle$ for CoSENT, $\blacksquare$ for SimCSE, and $\blacklozenge$ for AnglE. Our model achieves the highest F1 score on most aspects, demonstrating its superior disentanglement capability.
  • Figure 4: Similarity Matrix Analysis. (a) The baseline model without masks suffers from false negative collisions, resulting in a chaotic semantic space. (b) Our ZRCP+Mask framework delineates clear block-diagonal structures. (c) Model comparison on aggregated similarities shows that our model safely expands the inter-polarity discriminative margin ($-7.3\%$) with only a marginal trade-off in intra-polarity cohesion.
  • Figure 5: Deep geometric analysis of complex amplitudes demonstrating the decoupling of subjective intensity from physical text length. (a) The amplitude distribution (Kernel Density Estimation) of the Decoration aspect, contrasting our model's natural variance against the collapsed space caused by an explicit Amplitude Penalty (AmpLoss). (b) Min-Max scaled geometric mapping of the extreme amplitude vectors. It visually illustrates how intense subjective evaluation strongly activates the imaginary subspace (polarity), thereby stretching the overall complex magnitude. (c) Scatter plot verifying the geometric decoupling between the $L_2$ norm and physical text length. The near-zero Pearson correlation ($r=0.130$) statistically proves that the amplitude captures semantic subjectivity rather than trivial physical length. Highlighted star markers correspond to the specific case studies discussed in Section 5.6.
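
The vanishing-gradient argument behind Figure 2 can be checked numerically. For a cosine objective, the loss varies as $-\cos\theta$, so the gradient magnitude with respect to the pairwise angle is $|\sin\theta|$, which collapses as $\theta \to 0$; a phase-driven angle objective whose loss varies as $\theta$ keeps a constant unit gradient. The sketch below is illustrative, not the paper's training code:

```python
import numpy as np

# Pairwise angles from nearly-collapsed (1e-3 rad) to well-separated (1 rad).
thetas = np.array([1e-3, 1e-2, 0.1, 1.0])

# Cosine objective: loss ∝ -cos(θ), so |dL/dθ| = |sin θ|.
# For θ → 0 (dense aspect-matched clusters), this gradient vanishes.
cos_grad = np.abs(np.sin(thetas))

# Angle objective: loss ∝ θ, so |dL/dθ| = 1 regardless of proximity.
angle_grad = np.ones_like(thetas)

# At θ = 1e-3 the cosine gradient is roughly a thousand times weaker
# than the angle gradient, so hard negatives in the high-density zone
# receive almost no repulsive force under the cosine objective.
ratio = angle_grad / cos_grad
```

This is the driving-force argument in miniature: the angle objective supplies a full-strength push exactly where the cosine objective has effectively stalled.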