Multilingual End-to-End Automatic Speech Recognition for Kazakh, Russian, and English

This dataset accompanies the research paper "A Study of Multilingual End-to-End Speech Recognition for Kazakh, Russian, and English" (https://arxiv.org/abs/2108.01280), which trains a single end-to-end (E2E) automatic speech recognition (ASR) model for Kazakh, Russian, and English. The work develops a multilingual E2E ASR system based on Transformer networks and compares two methods of constructing the output grapheme set: combined and independent. It also evaluates the impact of language models (LMs) and data augmentation techniques on recognition performance. The repository includes the training recipes, datasets, and pre-trained models. The multilingual models achieve performance comparable to monolingual baselines: on the combined test set, the best monolingual and multilingual models achieve average word error rates of 20.9% and 20.5%, respectively.
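The two grapheme-set strategies mentioned above can be illustrated with a small sketch. In the "independent" setting each language keeps its own character inventory, while the "combined" setting merges all three into one shared set (so Cyrillic letters common to Kazakh and Russian are counted once). The sample transcripts below are illustrative, not drawn from the actual corpora.

```python
# Sketch of the two output-unit construction strategies: "independent"
# builds one grapheme inventory per language; "combined" merges them.
# The example transcripts are made up for illustration.

def grapheme_set(transcripts):
    """Collect the unique characters (graphemes) across transcripts."""
    chars = set()
    for text in transcripts:
        chars.update(text.replace(" ", ""))
    return chars

kazakh = ["сәлем әлем"]
russian = ["привет мир"]
english = ["hello world"]

# Independent: a separate grapheme inventory for each language.
independent = {
    "kk": grapheme_set(kazakh),
    "ru": grapheme_set(russian),
    "en": grapheme_set(english),
}

# Combined: a single shared inventory covering all three languages.
combined = grapheme_set(kazakh + russian + english)

# Shared Cyrillic graphemes appear once in the combined set, so its
# size is smaller than the sum of the independent inventories.
print(len(combined))
```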

Dataset Information

This dataset contains audio recordings and corresponding transcriptions in Kazakh, Russian, and English, used for training and evaluating the multilingual E2E ASR model. The data sources include the KSC corpus (https://issai.nu.edu.kz/kz-speech-corpus/), OpenSTT, and Common Voice (CV) (https://issai.nu.edu.kz/multilingual-asr/). The exact composition and splits of the datasets are detailed in the associated research paper.
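The card does not specify the on-disk layout. As a minimal sketch, assuming a Kaldi-style `text` file (the format used in ESPnet-style recipes) in which each line holds an utterance ID followed by its transcription, the transcriptions could be loaded like this; check the repository's recipes for the actual format:

```python
# Hypothetical loader for a Kaldi-style "text" file: each line is
# "<utterance-id> <transcription>". The file layout is an assumption,
# not documented by this card.

def load_transcriptions(path):
    utts = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            # Split off the utterance ID; the rest is the transcription.
            utt_id, _, text = line.strip().partition(" ")
            if utt_id:
                utts[utt_id] = text
    return utts
```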

Model Information

Pre-trained models are available for monolingual (Kazakh, Russian, English) and multilingual (combined and independent grapheme sets) ASR tasks. These models are based on Transformer networks and trained with varying data augmentation techniques (Speed Perturbation and SpecAugment). The pre-trained models are provided with different configurations allowing for a range of performance-efficiency trade-offs.
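Model quality above is reported as word error rate (WER): the word-level Levenshtein distance between reference and hypothesis, divided by the number of reference words. This is the conventional metric, not code from the paper's recipes; a minimal implementation:

```python
# Standard word error rate (WER) via word-level edit distance.
# Not taken from the paper's recipes; this is the textbook definition.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# One insertion against a three-word reference -> WER of 1/3.
print(wer("the cat sat", "the cat sat on"))
```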

Citation

Please cite the associated research paper (https://arxiv.org/abs/2108.01280) when using this dataset or the provided pre-trained models.

Contact Information

For any questions or issues, please contact the authors of the associated research paper.
