---
language:
- bn
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: file_path
dtype: string
splits:
- name: train
num_bytes: 17672959
num_examples: 50
- name: test
num_bytes: 2345138893.961
num_examples: 6533
- name: validation
num_bytes: 2374606148.554
num_examples: 6594
download_size: 9276258873
dataset_size: 4737418001.515
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: validation
path: data/validation-*
tags:
- speech-recognition
- Bangladeshi Bangla
- Bengali
- speech-corpus
---

# Dataset Card for SUBAK.KO

## Table of Contents

- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)

## Dataset Description

- **Homepage:** https://doi.org/10.5281/zenodo.7068130
- **Repository:** [Needs More Information]
- **Paper:** Bangladeshi Bangla speech corpus for automatic speech recognition research
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** AILAB

### Dataset Summary

SUBAK.KO is an annotated Bangladeshi standard Bangla speech corpus for automatic speech recognition research. The corpus contains 241 hours of high-quality speech, comprising 229 hours of read speech recorded in a studio environment and 12 hours of broadcast speech.
### Supported Tasks and Leaderboards

This dataset is designed for the automatic speech recognition task. The associated paper provides baseline results on the SUBAK.KO corpus.
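
As a quick starting point, the snippet below sketches how one might score an off-the-shelf Bangla ASR checkpoint on this corpus with the `datasets`, `transformers`, and `evaluate` libraries. The repository and model ids are placeholders, not recommendations from the paper.

```python
from datasets import Audio, load_dataset
from transformers import pipeline
import evaluate

# Placeholder repository and model ids; substitute the actual Hub ids.
ds = load_dataset("user/subakko", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # resample for the model below

asr = pipeline("automatic-speech-recognition", model="some-org/wav2vec2-large-bangla")
wer_metric = evaluate.load("wer")

predictions, references = [], []
for example in ds.select(range(100)):  # small subset for a quick sanity check
    result = asr(example["audio"]["array"])
    predictions.append(result["text"])
    references.append(example["transcription"])

print("WER:", wer_metric.compute(predictions=predictions, references=references))
```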
### Languages

Bangladeshi standard Bangla
## Dataset Structure

### Data Instances

A typical data point comprises the decoded audio, called `audio`, its transcription, called `transcription`, and the path to the source audio file, called `file_path`. The example below is illustrative only: the actual path depends on the local cache, and the transcription is a Bangla sentence.
```python
{'audio': {'path': '/path/to/cache/extracted/.../train/<audio file>',
           'array': array([-0.00048828, -0.00018311, -0.00137329, ...,
                            0.00079346,  0.00091553,  0.00085449], dtype=float32),
           'sampling_rate': 16000},
 'transcription': '<Bangla sentence>',
 'file_path': '/path/to/cache/extracted/.../train/<audio file>'}
```
### Data Fields

- `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column with `dataset[0]["audio"]`, the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column: `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
- `transcription`: The transcription of the spoken utterance.
- `file_path`: The path to the audio file.
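
For reference, a minimal loading sketch with the `datasets` library is shown below; the repository id is a placeholder for wherever this corpus is hosted on the Hub.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual Hub id of this dataset.
ds = load_dataset("user/subakko", split="validation")

# Index the row first, then the "audio" column, so only this one file is decoded.
example = ds[0]
audio_array = example["audio"]["array"]            # decoded float32 waveform
sampling_rate = example["audio"]["sampling_rate"]  # sampling rate reported by the decoder
text = example["transcription"]                    # Bangla transcription string
path = example["file_path"]                        # original file path stored in the corpus

print(sampling_rate, len(audio_array), text)
```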
### Data Splits

The speech material has been subdivided into train, validation, and test portions. The read speech was recorded in a quiet studio environment with a high-quality microphone, and speakers were asked to read one sentence at a time. According to the repository metadata, the splits contain the following numbers of examples:

| Split | Examples |
| --- | --- |
| train | 50 |
| validation | 6,594 |
| test | 6,533 |
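
A quick way to confirm the split sizes locally (again with a placeholder repository id):

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual Hub id of this dataset.
dataset_dict = load_dataset("user/subakko")

# Print the number of examples available in each configured split.
for split_name, split in dataset_dict.items():
    print(f"{split_name}: {split.num_rows} examples")
```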
## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

The dataset consists of speech recordings contributed by individual speakers. You agree not to attempt to determine the identity of the speakers in this dataset.
## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

The dataset is provided for research purposes only. Please check the dataset license for additional information.

## Additional Information

### Dataset Curators

The dataset was prepared by the authors of the associated paper (Kibria et al., 2022).
### Licensing Information

Creative Commons Attribution 4.0 International (CC BY 4.0)
### Citation Information

```bibtex
@article{kibria2022bangladeshi,
  title={Bangladeshi Bangla speech corpus for automatic speech recognition research},
  author={Kibria, Shafkat and Samin, Ahnaf Mozib and Kobir, M Humayon and Rahman, M Shahidur and Selim, M Reza and Iqbal, M Zafar},
  journal={Speech Communication},
  volume={136},
  pages={84--97},
  year={2022},
  publisher={Elsevier}
}
```