UniPSDA: Unsupervised Pseudo Semantic Data Augmentation for Zero-Shot Cross-Lingual Natural Language Understanding
Dongyang Li, Taolin Zhang, Jiali Deng, Longtao Huang, Chengyu Wang, Xiaofeng He, Hui Xue
TL;DR
Cross-lingual natural language understanding remains hampered by data augmentation that operates on surface forms rather than deep semantics. UniPSDA introduces a three-stage domino-style unsupervised clustering procedure that learns cross-lingual semantic relations, together with a pseudo semantic data augmentation mechanism that replaces key sentence constituents with their cross-lingual equivalents, guided by an optimal transport affinity regularization that minimizes misalignment. The approach achieves consistent improvements on zero-shot sequence classification, information extraction, and question answering, including notable gains on French in MLDoc and strong sentiment and extraction results on OpeNER and ACE2005. By avoiding reliance on parallel data for augmentation and leveraging context-aware multilingual semantics, UniPSDA offers a scalable, unsupervised path to stronger cross-lingual NLU.
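As a rough illustration of the optimal-transport affinity idea, the sketch below computes an entropic (Sinkhorn) transport plan between the embeddings of original tokens and cross-lingual replacement candidates; high plan mass suggests a well-aligned replacement and the transported cost can act as a misalignment penalty. The cosine cost, the marginals, and all names (`sinkhorn_affinity`, `epsilon`, `n_iters`) are assumptions for this sketch, not the paper's exact formulation.

```python
# Minimal sketch (assumed, not the paper's exact regularizer): Sinkhorn
# iterations producing a soft transport plan between source-token embeddings
# and cross-lingual replacement-candidate embeddings.
import numpy as np

def sinkhorn_affinity(src, tgt, epsilon=0.05, n_iters=50):
    """Soft transport plan aligning src (n x d) to tgt (m x d) embeddings."""
    src_n = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt_n = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    cost = 1.0 - src_n @ tgt_n.T                   # cosine-distance cost (n x m)
    K = np.exp(-cost / epsilon)                    # Gibbs kernel
    a = np.full(src.shape[0], 1.0 / src.shape[0])  # uniform source marginal
    b = np.full(tgt.shape[0], 1.0 / tgt.shape[0])  # uniform target marginal
    u = np.ones_like(a)
    for _ in range(n_iters):                       # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]             # entropic-OT transport plan
    # High plan mass marks a good token/candidate pairing; the total
    # transported cost can serve as a misalignment penalty during training.
    return plan, float((plan * cost).sum())
```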
Abstract
Cross-lingual representation learning transfers knowledge from resource-rich languages to resource-scarce ones to improve the semantic understanding abilities of different languages. However, previous works rely on shallow unsupervised data generated by token surface matching, ignoring the global context-aware semantics of the surrounding text tokens. In this paper, we propose an Unsupervised Pseudo Semantic Data Augmentation (UniPSDA) mechanism for cross-lingual natural language understanding that enriches the training data without human intervention. Specifically, to retrieve tokens with similar meanings for semantic data augmentation across different languages, we propose a sequential clustering process in three stages: within a single language, across multiple languages of a language family, and across languages from multiple language families. Meanwhile, to infuse multi-lingual knowledge with context-aware semantics while alleviating the computation burden, we directly replace the key constituents of the sentences with the above-learned multi-lingual family knowledge, viewed as pseudo-semantic data. The infusion process is further optimized via three de-biasing techniques without introducing any neural parameters. Extensive experiments demonstrate that our model consistently improves performance on general zero-shot cross-lingual natural language understanding tasks, including sequence classification, information extraction, and question answering.
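To make the three-stage sequential clustering and the constituent replacement step concrete, here is a minimal sketch that clusters pre-computed multilingual token embeddings first within each language, then within each language family, then across families, and finally swaps a key token's embedding for its nearest cross-lingual centroid. The use of k-means, the function names, and the language-family grouping are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch (assumed, not the paper's exact method) of three-stage
# sequential clustering over multilingual token embeddings, followed by
# pseudo-semantic replacement with the nearest cross-lingual centroid.
import numpy as np
from sklearn.cluster import KMeans

def cluster_centroids(vectors, k):
    """Cluster an (n x d) embedding matrix and return the k centroids."""
    return KMeans(n_clusters=k, n_init=10).fit(vectors).cluster_centers_

def three_stage_clustering(lang_embeddings, families, k=32):
    """lang_embeddings: {lang_code: (n x d) array}; families: {family: [lang_code, ...]}."""
    # Stage 1: cluster tokens within each single language.
    per_lang = {l: cluster_centroids(v, k) for l, v in lang_embeddings.items()}
    # Stage 2: cluster the per-language centroids within each language family.
    per_family = {
        f: cluster_centroids(np.vstack([per_lang[l] for l in langs]), k)
        for f, langs in families.items()
    }
    # Stage 3: cluster the family-level centroids across all families.
    return cluster_centroids(np.vstack(list(per_family.values())), k)

def replace_with_centroid(token_vec, centroids):
    """Swap a key token's embedding for its nearest cross-lingual centroid."""
    dists = np.linalg.norm(centroids - token_vec, axis=1)
    return centroids[np.argmin(dists)]
```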
