Commit fac0c5b · verified
rassulya committed · Parent(s): 6912dab

Upload README.md with huggingface_hub

Files changed (1): README.md (+71, -5)
README.md CHANGED
@@ -1,10 +1,76 @@
- ## Hugging Face Dataset Card: Kazakh_Speech_Corpus_2
-
- **Summary:**
-
- This dataset, Kazakh_Speech_Corpus_2, is intended for Automatic Speech Recognition (ASR) tasks in the Kazakh language. Unfortunately, due to a missing README file on the original repository, detailed information regarding the dataset's composition, size, and specific characteristics is unavailable. Further information is needed to fully describe the dataset's contents and intended use.
-
- **Repository:**
-
- https://github.com/IS2AI/ISSAI_SAIDA_Kazakh_ASR
+ # Kazakh Speech Corpus (KSC) Dataset Card
+
+ This dataset card describes the Kazakh Speech Corpus (KSC), a large-scale, open-source speech corpus for the Kazakh language.
+
+ ## Summary
+
+ The KSC contains approximately 332 hours of transcribed audio, comprising over 153,000 utterances spoken by participants from diverse regions, age groups, and genders. It is the largest publicly available Kazakh speech corpus, designed to advance speech and language processing for Kazakh, a low-resource language in the Turkic family. The data was crowdsourced via a web-based platform and rigorously checked by native Kazakh speakers to ensure high quality. Preliminary speech recognition experiments yielded promising results: a 2.8% character error rate (CER) and an 8.7% word error rate (WER) on the test set. An ESPnet recipe for reproducible speech recognition experiments is also provided.
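The CER and WER figures quoted above are standard edit-distance metrics. As a minimal illustrative sketch (not the evaluation code used for the KSC experiments), they can be computed as Levenshtein distance over characters or words, normalized by the reference length; the example strings below are hypothetical, not from the corpus.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, via a rolling DP row."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,          # deletion
                dp[j - 1] + 1,      # insertion
                prev + (r != h),    # substitution (free if symbols match)
            )
    return dp[-1]

def wer(ref, hyp):
    """Word error rate: word-level edit distance / reference word count."""
    return edit_distance(ref.split(), hyp.split()) / len(ref.split())

def cer(ref, hyp):
    """Character error rate: char-level edit distance / reference length."""
    return edit_distance(ref, hyp) / len(ref)
```

For example, `wer("the cat sat", "the cat sit")` is 1/3 (one substituted word out of three reference words).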
 
+ ## Dataset Statistics
+
+ | Category         | Train   | Valid  | Test   | Total   |
+ |------------------|---------|--------|--------|---------|
+ | Duration (hours) | 318.4   | 7.1    | 7.1    | 332.6   |
+ | # Utterances     | 147,236 | 3,283  | 3,334  | 153,853 |
+ | # Words          | 1.61M   | 35.2k  | 35.8k  | 1.68M   |
+ | # Unique Words   | 157,191 | 13,525 | 13,959 | 160,041 |
+ | # Device IDs     | 1,554   | 29     | 29     | 1,612   |
+ | # Speakers       | -       | 29     | 29     | -       |
+
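As a quick consistency check on the statistics above: the duration, utterance, and device-ID columns sum to the stated totals, while the unique-word total (160,041) is smaller than the sum of the per-split counts because it is the size of the combined vocabulary, and words recur across splits. A small illustrative check:

```python
# Per-split figures copied from the table above.
splits = {
    "train": {"hours": 318.4, "utterances": 147_236, "devices": 1_554},
    "valid": {"hours": 7.1,   "utterances": 3_283,   "devices": 29},
    "test":  {"hours": 7.1,   "utterances": 3_334,   "devices": 29},
}

# Additive columns should reproduce the "Total" column exactly.
total_hours = round(sum(s["hours"] for s in splits.values()), 1)
total_utts = sum(s["utterances"] for s in splits.values())
total_devices = sum(s["devices"] for s in splits.values())

# Unique words are intentionally excluded: their total is a vocabulary
# union (160,041), not the sum 157,191 + 13,525 + 13,959 = 184,675.
print(total_hours, total_utts, total_devices)  # 332.6 153853 1612
```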
+ ## Validation and Test Set Speaker Details
+
+ | Category      | Valid (%) | Test (%) |
+ |---------------|-----------|----------|
+ | **Gender**    |           |          |
+ | Female        | 51.7      | 51.7     |
+ | Male          | 48.3      | 48.3     |
+ | **Age**       |           |          |
+ | 18-27         | 37.9      | 34.5     |
+ | 28-37         | 34.5      | 31.0     |
+ | 38-47         | 10.4      | 13.8     |
+ | 48 and above  | 17.2      | 20.7     |
+ | **Region**    |           |          |
+ | East          | 13.8      | 13.8     |
+ | West          | 20.7      | 17.2     |
+ | North         | 13.8      | 20.7     |
+ | South         | 37.9      | 41.4     |
+ | Center        | 13.8      | 6.9      |
+ | **Device**    |           |          |
+ | Phone         | 62.1      | 79.3     |
+ | Computer      | 37.9      | 20.7     |
+ | **Headphone** |           |          |
+ | Yes           | 20.7      | 17.2     |
+ | No            | 79.3      | 82.8     |
+
+ ## Citations
+
+ * Dave, B. (2007). *Kazakhstan—ethnicity, language and power*. Routledge.
+ * Du, J., Na, X., Liu, X., & Bu, H. (2018). AISHELL-2: Transforming Mandarin ASR research into industrial scale. *arXiv preprint arXiv:1808.10583*.
+ * Hannun, A. Y., Case, C., Casper, J., Catanzaro, B., Diamos, G., Elsen, E., ... & Ng, A. (2014). Deep Speech: Scaling up end-to-end speech recognition. *arXiv preprint arXiv:1412.5567*.
+ * Koh, J. X., Mislan, A., Khoo, K., Ang, B., Ang, W., Ng, C., & Tan, Y. Y. (2019). Building the Singapore English National Speech Corpus. In *INTERSPEECH 2019*.
+ * Kudo, T., & Richardson, J. (2018). SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations*.
+ * Makhambetov, O., Makazhanov, A., Yessenbayev, Z., Matkarimov, B., Sabyrgaliyev, I., & Sharafudinov, A. (2013). Assembling the Kazakh language corpus. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*.
+ * Mamyrbayev, O., Alimhan, K., Zhumazhanov, B., Turdalykyzy, T., & Gusmanova, F. (2020). End-to-end speech recognition in agglutinative languages. In *ACIIDS 2020*.
+ * Mamyrbayev, O. J., Turdalyuly, M., Mekebayev, N., Alimhan, K., Kydyrbekova, A., & Turdalykyzy, T. (2019). Automatic recognition of Kazakh speech using deep neural networks. In *ACIIDS 2019*.
+ * Povey, D., Cheng, G., Wang, Y., Li, K., Xu, H., Yarmohammadi, M., & Khudanpur, S. (2018). Semi-orthogonal low-rank matrix factorization for deep neural networks. In *INTERSPEECH 2018*.
+ * Povey, D., Peddinti, V., Galvez, D., Ghahremani, P., Manohar, V., Na, X., ... & Khudanpur, S. (2016). Purely sequence-trained neural networks for ASR based on lattice-free MMI. In *INTERSPEECH 2016*.
+ * Sainath, T. N., Prabhavalkar, R., Kumar, S., Lee, S., Kannan, A., Rybach, D., ... & Chiu, C. C. (2018). No need for a lexicon? Evaluating the value of the pronunciation lexica in end-to-end models. In *ICASSP 2018*.
+ * Sennrich, R., Haddow, B., & Birch, A. (2016). Neural machine translation of rare words with subword units. In *Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*.
+ * Shi, Y., Hamdullah, A., Tang, Z., Wang, D., & Zheng, T. F. (2017). A free Kazakh speech database and a speech recognition baseline. In *2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)*.
+ * Simonyan, K., & Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. In *International Conference on Learning Representations*.
+ * Snow, R., O'Connor, B., Jurafsky, D., & Ng, A. Y. (2008). Cheap and fast—but is it good? Evaluating non-expert annotations for natural language tasks. In *Proceedings of EMNLP 2008*.
+ * Stolcke, A. (2002). SRILM—an extensible language modeling toolkit. In *Proceedings of the International Conference on Spoken Language Processing, ICSLP 2002*.
+ * Takamichi, S., & Saruwatari, H. (2018). CPJD corpus: Crowdsourced parallel speech corpus of Japanese dialects. In *Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018)*.
+ * Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. In *Advances in Neural Information Processing Systems*.
+ * Watanabe, S., Hori, T., Karita, S., Hayashi, T., Nishitoba, J., Unno, Y., ... & Ochiai, T. (2018). ESPnet: End-to-end speech processing toolkit. In *INTERSPEECH 2018*.
+ * Yu, D., & Deng, L. (2014). *Automatic speech recognition: A deep learning approach*. Springer.
+ * Zhou, W., Michel, W., Irie, K., Kitza, M., Schlüter, R., & Ney, H. (2020). The RWTH ASR system for TED-LIUM Release 2: Improving Hybrid HMM with SpecAugment. In *ICASSP 2020*.
+
+ ## GitHub Repository
+
+ [https://github.com/IS2AI/ISSAI_SAIDA_Kazakh_ASR](https://github.com/IS2AI/ISSAI_SAIDA_Kazakh_ASR)
+
+ **(Note: README.md was unavailable at the provided link.)**