FASA: a Flexible and Automatic Speech Aligner for Extracting High-quality Aligned Children Speech Data

Dancheng Liu, Jinjun Xiong

TL;DR

The paper tackles the challenge of limited high-quality aligned data for children's speech, which hampers pediatric ASR research. It introduces FASA, a flexible, automatic forced-alignment toolkit that operates under minimal transcription assumptions, leverages DL backbones such as WhisperX, and includes optional human-in-the-loop corrections and post-generation checks. Through application to CHILDES-derived datasets, FASA demonstrates substantially improved data quality, achieving up to 13.6× lower alignment errors compared with human annotations and outperforming traditional tools on noisy transcriptions. The work provides a practical pipeline and a new large-scale, aligned young-child speech resource, with code and data intended to accelerate progress in child-focused ASR and clinical applications.
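
The TL;DR mentions WhisperX as a DL backbone for alignment. As a rough, non-authoritative sketch of what such a backbone provides, the snippet below follows WhisperX's publicly documented usage to obtain word-level timestamps; the model size, device, and audio path are placeholders, and this is not FASA's own code.

```python
# Minimal sketch of getting word-level timestamps from a WhisperX backbone.
# This follows WhisperX's documented usage, not FASA's internals; the model
# size, device, and audio path below are illustrative placeholders.
import whisperx

device = "cuda"                      # or "cpu"
audio_path = "child_session.wav"     # placeholder recording

# 1) Transcribe with a (batched) Whisper model.
model = whisperx.load_model("large-v2", device, compute_type="float16")
audio = whisperx.load_audio(audio_path)
result = model.transcribe(audio, batch_size=16)

# 2) Refine the transcript to word-level timestamps with an alignment model.
align_model, metadata = whisperx.load_align_model(
    language_code=result["language"], device=device
)
aligned = whisperx.align(result["segments"], align_model, metadata, audio, device)

# Each aligned segment now carries per-word start/end times.
for segment in aligned["segments"]:
    for word in segment.get("words", []):
        print(word.get("word"), word.get("start"), word.get("end"))
```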

Abstract

Automatic Speech Recognition (ASR) for adult speech has made significant progress recently by employing deep neural network (DNN) models, but improvement for children's speech remains unsatisfactory due to its distinct characteristics. DNN models pre-trained on adult data often struggle to generalize to children's speech with fine-tuning because of the lack of high-quality aligned children's speech data. When generating such datasets, human annotation is not scalable, and existing forced-alignment tools are not usable because they make impractical assumptions about the quality of the input transcriptions. To address these challenges, we propose a new forced-alignment tool, FASA, a flexible and automatic speech aligner that extracts high-quality aligned children's speech data from the many existing noisy children's speech corpora. We demonstrate its usage on the CHILDES dataset and show that FASA can improve data quality by 13.6× over human annotations.

Paper Structure

This paper contains 12 sections, 1 equation, 1 figure, and 2 tables.

Figures (1)

  • Figure 1: This figure illustrates the FASA pipeline. The input is an audio file and a transcription. Module ① optionally cleans the input transcription; module ② segments the audio and makes predictions on it; module ③ force-aligns the audio segments with the provided transcription using Algorithm 1; module ④ performs post-generation checking (PGC); and module ⑤ allows the user to augment the dataset via manual selection. The entire system, apart from module ⑤, is automatic.
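
Module ③ in Figure 1 transfers the DL backbone's word-level timestamps onto the provided transcription. As a simplified, runnable stand-in (not the paper's Algorithm 1), the sketch below matches predicted words against the reference words with difflib and copies timestamps onto the matched reference words; all names and the toy data are hypothetical.

```python
# Simplified stand-in for module 3 (forced alignment against a provided
# transcription). This is NOT the paper's Algorithm 1; it only illustrates
# transferring predicted timestamps onto reference words via sequence matching.
from difflib import SequenceMatcher

def align_to_transcription(predicted, reference_text):
    """predicted: list of (word, start_sec, end_sec) from the DL backbone (module 2).
    reference_text: cleaned transcription (module 1).
    Returns (ref_word, start_sec, end_sec) for reference words matched to predictions."""
    ref_words = reference_text.lower().split()
    pred_words = [w.lower() for w, _, _ in predicted]
    matcher = SequenceMatcher(a=ref_words, b=pred_words, autojunk=False)

    aligned = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            for k in range(i2 - i1):
                _, start, end = predicted[j1 + k]
                aligned.append((ref_words[i1 + k], start, end))
        # Mismatched spans are simply dropped here; a post-generation check
        # (module 4) could further reject segments with too little coverage.
    return aligned

# Toy usage with made-up timestamps:
predicted = [("the", 0.10, 0.25), ("cat", 0.30, 0.55), ("sat", 0.60, 0.90)]
print(align_to_transcription(predicted, "the cat sat"))
```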