
SC-MoE: Switch Conformer Mixture of Experts for Unified Streaming and Non-streaming Code-Switching ASR

Shuaishuai Ye, Shunfei Chen, Xinhui Hu, Xinkang Xu

TL;DR

SC-MoE places Mixture-of-Experts layers, each with its own router, in both the encoder and the decoder of a Switch Conformer for unified streaming and non-streaming code-switching (CS) ASR. Experimental results show that SC-MoE significantly improves CS ASR performance over the baseline with comparable computational efficiency.

Abstract

In this work, we propose SC-MoE, a Switch-Conformer-based Mixture-of-Experts (MoE) system for unified streaming and non-streaming code-switching (CS) automatic speech recognition (ASR). We design a streaming MoE layer consisting of three language experts, corresponding to Mandarin, English, and blank, respectively, equipped with a language identification (LID) network trained with a Connectionist Temporal Classification (CTC) loss as the router in the encoder of SC-MoE, yielding a real-time streaming CS ASR system. To further exploit the language information embedded in text, we also incorporate MoE layers into the decoder of SC-MoE. In addition, we introduce routers into every MoE layer of the encoder and the decoder, which achieves better recognition performance. Experimental results show that SC-MoE significantly improves CS ASR performance over the baseline with comparable computational efficiency.
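The routing idea in the abstract can be sketched in a few lines: each acoustic frame is sent to exactly one of the three language experts (Mandarin, English, blank) by a top-1 "switch" router. The toy weights and dimensions below are assumptions for illustration; in SC-MoE the experts are Conformer feed-forward blocks and the encoder router is the CTC-trained LID network, not a single linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts = 8, 3  # experts 0/1/2 stand in for Mandarin / English / blank

# Hypothetical toy parameters (stand-ins for real expert FFNs and the LID router).
W_router = rng.standard_normal((d_model, n_experts))
W_experts = rng.standard_normal((n_experts, d_model, d_model))

def switch_moe_layer(x):
    """Top-1 switch routing: each frame goes to exactly one expert.

    x: (T, d_model) frame representations. Returns the routed outputs
    and the per-frame expert indices chosen by the router.
    """
    logits = x @ W_router                              # (T, n_experts)
    choice = logits.argmax(axis=-1)                    # per-frame expert index
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)              # softmax over experts
    gate = probs[np.arange(len(x)), choice]            # router prob of the winner
    out = np.empty_like(x)
    for e in range(n_experts):
        mask = choice == e
        out[mask] = (x[mask] @ W_experts[e]) * gate[mask, None]
    return out, choice

frames = rng.standard_normal((5, d_model))             # 5 acoustic frames
y, chosen = switch_moe_layer(frames)
```

Because only one expert runs per frame, the compute cost stays close to a single dense layer even as experts are added, which is consistent with the paper's claim of comparable computational efficiency.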


Paper Structure

This paper contains 16 sections, 9 equations, 2 figures, 1 table.

Figures (2)

  • Figure 1: The architecture of the proposed SC-MoE. m, h, k, g represent the number of different network layers.
  • Figure 2: (a) A Switch Conformer encoder layer of the proposed SC-MoE. (b) A Switch Transformer decoder layer of the SC-MoE. MA, EN, and BK represent Mandarin, English, and blank, respectively. For simplicity, all skip connections have been omitted.