Languages: English
Tags: query-by-example-spoken-term-detection, audio-slot-filling, speaker-diarization, automatic-speaker-verification
Update files from the datasets library (from 1.12.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.12.0
- README.md +85 -15
- dataset_infos.json +1 -1
- dummy/ks/1.9.0/dummy_data.zip +3 -0
- dummy/sd/1.9.0/dummy_data.zip +3 -0
- superb.py +310 -27
README.md
CHANGED
@@ -15,6 +15,8 @@ size_categories:
 source_datasets:
 - original
 - extended|librispeech_asr
+- extended|other-librimix
+- extended|other-speech_commands
 task_categories:
 - speech-processing
 task_ids:
@@ -106,7 +108,46 @@ Automatic Speaker Verification (ASV) verifies whether the speakers of a pair of

 #### sd

-Speaker Diarization (SD) predicts who is speaking when for each timestamp, and multiple speakers can speak simultaneously. The model has to encode rich speaker characteristics for each frame and should be able to represent mixtures of signals. [LibriMix](https://github.com/s3prl/s3prl/tree/master/downstream#sd-speaker-diarization) is adopted where LibriSpeech train-clean-100/dev-clean/test-clean are used to generate mixtures for training/validation/testing. We focus on the two-speaker scenario as the first step. The time-coded speaker labels were generated using alignments from Kaldi LibriSpeech ASR model. The evaluation metric is diarization error rate (DER).
+Speaker Diarization (SD) predicts *who is speaking when* for each timestamp, and multiple speakers can speak simultaneously. The model has to encode rich speaker characteristics for each frame and should be able to represent mixtures of signals. [LibriMix](https://github.com/s3prl/s3prl/tree/master/downstream#sd-speaker-diarization) is adopted where LibriSpeech train-clean-100/dev-clean/test-clean are used to generate mixtures for training/validation/testing. We focus on the two-speaker scenario as the first step. The time-coded speaker labels were generated using alignments from Kaldi LibriSpeech ASR model. The evaluation metric is diarization error rate (DER).
+
+##### Example of usage
+
+Use these auxiliary functions to:
+- load the audio file into an audio data array
+- generate the label array
+
+```python
+def load_audio_file(example, frame_shift=160):
+    import soundfile as sf
+
+    example["array"], example["sample_rate"] = sf.read(
+        example["file"], start=example["start"] * frame_shift, stop=example["end"] * frame_shift
+    )
+    return example
+
+
+def generate_label(example, frame_shift=160, num_speakers=2, rate=16000):
+    import numpy as np
+
+    start = example["start"]
+    end = example["end"]
+    frame_num = end - start
+    speakers = sorted({speaker["speaker_id"] for speaker in example["speakers"]})
+    label = np.zeros((frame_num, num_speakers), dtype=np.int32)
+    for speaker in example["speakers"]:
+        speaker_index = speakers.index(speaker["speaker_id"])
+        start_frame = np.rint(speaker["start"] * rate / frame_shift).astype(int)
+        end_frame = np.rint(speaker["end"] * rate / frame_shift).astype(int)
+        rel_start = rel_end = None
+        if start <= start_frame < end:
+            rel_start = start_frame - start
+        if start < end_frame <= end:
+            rel_end = end_frame - start
+        if rel_start is not None or rel_end is not None:
+            label[rel_start:rel_end, speaker_index] = 1
+    example["label"] = label
+    return example
+```

 #### er

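For reference, a minimal sketch of how the two helpers above could be applied once the new `sd` config is loaded (assumes `datasets`, `soundfile` and `numpy` are installed, and that `load_audio_file` and `generate_label` are defined exactly as in the snippet):

```python
from datasets import load_dataset

# "sd" is the Speaker Diarization config added in this update.
sd = load_dataset("superb", "sd", split="train")

# Decode one audio chunk and build its frame-level speaker labels.
example = generate_label(load_audio_file(sd[0]))
print(example["sample_rate"])  # 16000 for these LibriSpeech-based mixtures
print(example["label"].shape)  # (end - start, 2): one column per speaker
```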
@@ -130,7 +171,7 @@ The language data in SUPERB is in English (BCP-47 `en`)

 An example from each split looks like:

-```
+```python
 {'chapter_id': 1240,
  'file': 'path/to/file.flac',
  'id': '103-1240-0000',
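As a side note, the `asr` config keeps the previous behaviour: audio stays stored as a `.flac` path and can be decoded with the `map`-based snippet from the dataset description. A sketch, assuming `soundfile` is installed:

```python
import soundfile as sf
from datasets import load_dataset

asr = load_dataset("superb", "asr", split="validation")

def map_to_array(batch):
    # Read the FLAC file referenced by "file" into a float array.
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

asr = asr.map(map_to_array, remove_columns=["file"])
print(asr[0]["text"], len(asr[0]["speech"]))
```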
@@ -143,8 +184,14 @@ An example from each split looks like:

 #### ks

-
+An example from each split looks like:

+```python
+{
+    'file': '/path/yes/af7a8296_nohash_1.wav',
+    'label': 'yes'
+}
+```

 #### qbe

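A quick sketch of loading the new `ks` config (in the loading script the `label` column is a `ClassLabel`, so the stored integer indices map back to the keyword names):

```python
from datasets import load_dataset

ks = load_dataset("superb", "ks", split="validation")
label_feature = ks.features["label"]
print(label_feature.names)  # ['yes', 'no', ..., '_silence_', '_unknown_']
print(ks[0]["file"], label_feature.int2str(ks[0]["label"]))
```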
@@ -173,7 +220,19 @@ An example from each split looks like:

 #### sd

-
+An example from each split looks like:
+```python
+{
+    'record_id': '1578-6379-0038_6415-111615-0009',
+    'file': 'path/to/file.wav',
+    'start': 0,
+    'end': 1590,
+    'speakers': [
+        {'speaker_id': '1578', 'start': 28, 'end': 657},
+        {'speaker_id': '6415', 'start': 28, 'end': 1576}
+    ]
+}
+```


 #### er
@@ -194,14 +253,14 @@ An example from each split looks like:

 - `file`: a `string` feature.
 - `text`: a `string` feature.
-- `speaker_id`: a `int64` feature
-- `chapter_id`: a `int64` feature
-- `id`: a `string` feature
+- `speaker_id`: a `int64` feature.
+- `chapter_id`: a `int64` feature.
+- `id`: a `string` feature.

 #### ks

-
-
+- `file` (`string`): Path to the WAV audio file.
+- `label` (`string`): Label of the spoken command.

 #### qbe

@@ -230,8 +289,15 @@ An example from each split looks like:

 #### sd

-
-
+The data fields in all splits are:
+- `record_id` (`string`): ID of the record.
+- `file` (`string`): Path to the WAV audio file.
+- `start` (`integer`): Start frame of the audio.
+- `end` (`integer`): End frame of the audio.
+- `speakers` (`list` of `dict`): List of speakers in the audio. Each item contains the fields:
+  - `speaker_id` (`string`): ID of the speaker.
+  - `start` (`integer`): Frame when the speaker starts speaking.
+  - `end` (`integer`): Frame when the speaker stops speaking.

 #### er

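Since `start`, `end` and the per-speaker boundaries are frame indices rather than seconds, they convert to time with the same `frame_shift` and `rate` constants used by the helper functions earlier in this card. A small sketch:

```python
FRAME_SHIFT = 160  # samples per frame, the default in the helpers above
RATE = 16000       # sampling rate in Hz

def frames_to_seconds(frames, frame_shift=FRAME_SHIFT, rate=RATE):
    # Each frame spans frame_shift / rate seconds.
    return frames * frame_shift / rate

# For the example instance above: 'end': 1590 frames is 15.9 seconds of audio.
print(frames_to_seconds(1590))  # 15.9
```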
@@ -252,8 +318,9 @@ An example from each split looks like:

 #### ks

-
-
+|    | train | validation | test |
+|----|------:|-----------:|-----:|
+| ks | 51094 |       6798 | 3081 |

 #### qbe

@@ -282,8 +349,11 @@ An example from each split looks like:

 #### sd

-
+The data is split into "train", "dev" and "test" sets, each containing the following number of examples:

+|    | train |  dev | test |
+|----|------:|-----:|-----:|
+| sd | 13901 | 3014 | 3002 |

 #### er

@@ -385,4 +455,4 @@ the correct citation for each contained dataset.

 ### Contributions

-Thanks to [@lewtun](https://github.com/lewtun)
+Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova) and [@anton-l](https://github.com/anton-l) for adding this dataset.
dataset_infos.json
CHANGED
@@ -1 +1 @@
-
{"asr": {"description": "Self-supervised learning (SSL) has proven vital for advancing research in\nnatural language processing (NLP) and computer vision (CV). The paradigm\npretrains a shared model on large volumes of unlabeled data and achieves\nstate-of-the-art (SOTA) for various tasks with minimal adaptation. However, the\nspeech processing community lacks a similar setup to systematically explore the\nparadigm. To bridge this gap, we introduce Speech processing Universal\nPERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the\nperformance of a shared model across a wide range of speech processing tasks\nwith minimal architecture changes and labeled data. Among multiple usages of the\nshared model, we especially focus on extracting the representation learned from\nSSL due to its preferable re-usability. We present a simple framework to solve\nSUPERB tasks by learning task-specialized lightweight prediction heads on top of\nthe frozen shared model. Our results demonstrate that the framework is promising\nas SSL representations show competitive generalizability and accessibility\nacross SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a\nbenchmark toolkit to fuel the research in representation learning and general\nspeech processing.\n\nNote that in order to limit the required storage for preparing this dataset, the\naudio is stored in the .flac format and is not converted to a float32 array. To\nconvert, the audio file to a float32 array, please make use of the `.map()`\nfunction as follows:\n\n\n```python\nimport soundfile as sf\n\ndef map_to_array(batch):\n speech_array, _ = sf.read(batch[\"file\"])\n batch[\"speech\"] = speech_array\n return batch\n\ndataset = dataset.map(map_to_array, remove_columns=[\"file\"])\n```\n", "citation": "@article{DBLP:journals/corr/abs-2105-01051,\n author = {Shu{-}Wen Yang and\n Po{-}Han Chi and\n Yung{-}Sung Chuang and\n Cheng{-}I Jeff Lai and\n Kushal Lakhotia and\n Yist Y. Lin and\n Andy T. 
Liu and\n Jiatong Shi and\n Xuankai Chang and\n Guan{-}Ting Lin and\n Tzu{-}Hsien Huang and\n Wei{-}Cheng Tseng and\n Ko{-}tik Lee and\n Da{-}Rong Liu and\n Zili Huang and\n Shuyan Dong and\n Shang{-}Wen Li and\n Shinji Watanabe and\n Abdelrahman Mohamed and\n Hung{-}yi Lee},\n title = {{SUPERB:} Speech processing Universal PERformance Benchmark},\n journal = {CoRR},\n volume = {abs/2105.01051},\n year = {2021},\n url = {https://arxiv.org/abs/2105.01051},\n archivePrefix = {arXiv},\n eprint = {2105.01051},\n timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "http://www.openslr.org/12", "license": "", "features": {"file": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "speaker_id": {"dtype": "int64", "id": null, "_type": "Value"}, "chapter_id": {"dtype": "int64", "id": null, "_type": "Value"}, "id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "file", "output": "text"}, "task_templates": [{"task": "automatic-speech-recognition", "audio_file_path_column": "file", "transcription_column": "text"}], "builder_name": "superb", "config_name": "asr", "version": {"version_str": "1.9.0", "description": "", "major": 1, "minor": 9, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11823891, "num_examples": 28539, "dataset_name": "superb"}, "validation": {"name": "validation", "num_bytes": 894510, "num_examples": 2703, "dataset_name": "superb"}, "test": {"name": "test", "num_bytes": 868614, "num_examples": 2620, "dataset_name": "superb"}}, "download_checksums": {"http://www.openslr.org/resources/12/dev-clean.tar.gz": {"num_bytes": 337926286, "checksum": "76f87d090650617fca0cac8f88b9416e0ebf80350acb97b343a85fa903728ab3"}, "http://www.openslr.org/resources/12/test-clean.tar.gz": {"num_bytes": 346663984, "checksum": "39fde525e59672dc6d1551919b1478f724438a95aa55f874b576be21967e6c23"}, "http://www.openslr.org/resources/12/train-clean-100.tar.gz": {"num_bytes": 6387309499, "checksum": "d4ddd1d5a6ab303066f14971d768ee43278a5f2a0aa43dc716b0e64ecbbbf6e2"}}, "download_size": 7071899769, "post_processing_size": null, "dataset_size": 13587015, "size_in_bytes": 7085486784}}
+
{"asr": {"description": "Self-supervised learning (SSL) has proven vital for advancing research in\nnatural language processing (NLP) and computer vision (CV). The paradigm\npretrains a shared model on large volumes of unlabeled data and achieves\nstate-of-the-art (SOTA) for various tasks with minimal adaptation. However, the\nspeech processing community lacks a similar setup to systematically explore the\nparadigm. To bridge this gap, we introduce Speech processing Universal\nPERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the\nperformance of a shared model across a wide range of speech processing tasks\nwith minimal architecture changes and labeled data. Among multiple usages of the\nshared model, we especially focus on extracting the representation learned from\nSSL due to its preferable re-usability. We present a simple framework to solve\nSUPERB tasks by learning task-specialized lightweight prediction heads on top of\nthe frozen shared model. Our results demonstrate that the framework is promising\nas SSL representations show competitive generalizability and accessibility\nacross SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a\nbenchmark toolkit to fuel the research in representation learning and general\nspeech processing.\n\nNote that in order to limit the required storage for preparing this dataset, the\naudio is stored in the .flac format and is not converted to a float32 array. To\nconvert, the audio file to a float32 array, please make use of the `.map()`\nfunction as follows:\n\n\n```python\nimport soundfile as sf\n\ndef map_to_array(batch):\n speech_array, _ = sf.read(batch[\"file\"])\n batch[\"speech\"] = speech_array\n return batch\n\ndataset = dataset.map(map_to_array, remove_columns=[\"file\"])\n```\n", "citation": "@article{DBLP:journals/corr/abs-2105-01051,\n author = {Shu{-}Wen Yang and\n Po{-}Han Chi and\n Yung{-}Sung Chuang and\n Cheng{-}I Jeff Lai and\n Kushal Lakhotia and\n Yist Y. Lin and\n Andy T. 
Liu and\n Jiatong Shi and\n Xuankai Chang and\n Guan{-}Ting Lin and\n Tzu{-}Hsien Huang and\n Wei{-}Cheng Tseng and\n Ko{-}tik Lee and\n Da{-}Rong Liu and\n Zili Huang and\n Shuyan Dong and\n Shang{-}Wen Li and\n Shinji Watanabe and\n Abdelrahman Mohamed and\n Hung{-}yi Lee},\n title = {{SUPERB:} Speech processing Universal PERformance Benchmark},\n journal = {CoRR},\n volume = {abs/2105.01051},\n year = {2021},\n url = {https://arxiv.org/abs/2105.01051},\n archivePrefix = {arXiv},\n eprint = {2105.01051},\n timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "http://www.openslr.org/12", "license": "", "features": {"file": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "speaker_id": {"dtype": "int64", "id": null, "_type": "Value"}, "chapter_id": {"dtype": "int64", "id": null, "_type": "Value"}, "id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "file", "output": "text"}, "task_templates": [{"task": "automatic-speech-recognition", "audio_file_path_column": "file", "transcription_column": "text"}], "builder_name": "superb", "config_name": "asr", "version": {"version_str": "1.9.0", "description": "", "major": 1, "minor": 9, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11852430, "num_examples": 28539, "dataset_name": "superb"}, "validation": {"name": "validation", "num_bytes": 897213, "num_examples": 2703, "dataset_name": "superb"}, "test": {"name": "test", "num_bytes": 871234, "num_examples": 2620, "dataset_name": "superb"}}, "download_checksums": {"http://www.openslr.org/resources/12/dev-clean.tar.gz": {"num_bytes": 337926286, "checksum": "76f87d090650617fca0cac8f88b9416e0ebf80350acb97b343a85fa903728ab3"}, "http://www.openslr.org/resources/12/test-clean.tar.gz": {"num_bytes": 346663984, "checksum": "39fde525e59672dc6d1551919b1478f724438a95aa55f874b576be21967e6c23"}, "http://www.openslr.org/resources/12/train-clean-100.tar.gz": {"num_bytes": 6387309499, "checksum": "d4ddd1d5a6ab303066f14971d768ee43278a5f2a0aa43dc716b0e64ecbbbf6e2"}}, "download_size": 7071899769, "post_processing_size": null, "dataset_size": 13620877, "size_in_bytes": 7085520646}, "sd": {"description": "Self-supervised learning (SSL) has proven vital for advancing research in\nnatural language processing (NLP) and computer vision (CV). The paradigm\npretrains a shared model on large volumes of unlabeled data and achieves\nstate-of-the-art (SOTA) for various tasks with minimal adaptation. However, the\nspeech processing community lacks a similar setup to systematically explore the\nparadigm. To bridge this gap, we introduce Speech processing Universal\nPERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the\nperformance of a shared model across a wide range of speech processing tasks\nwith minimal architecture changes and labeled data. Among multiple usages of the\nshared model, we especially focus on extracting the representation learned from\nSSL due to its preferable re-usability. We present a simple framework to solve\nSUPERB tasks by learning task-specialized lightweight prediction heads on top of\nthe frozen shared model. Our results demonstrate that the framework is promising\nas SSL representations show competitive generalizability and accessibility\nacross SUPERB tasks. 
We release SUPERB as a challenge with a leaderboard and a\nbenchmark toolkit to fuel the research in representation learning and general\nspeech processing.\n\nNote that in order to limit the required storage for preparing this dataset, the\naudio is stored in the .flac format and is not converted to a float32 array. To\nconvert, the audio file to a float32 array, please make use of the `.map()`\nfunction as follows:\n\n\n```python\nimport soundfile as sf\n\ndef map_to_array(batch):\n speech_array, _ = sf.read(batch[\"file\"])\n batch[\"speech\"] = speech_array\n return batch\n\ndataset = dataset.map(map_to_array, remove_columns=[\"file\"])\n```\n", "citation": "@article{DBLP:journals/corr/abs-2105-01051,\n author = {Shu{-}Wen Yang and\n Po{-}Han Chi and\n Yung{-}Sung Chuang and\n Cheng{-}I Jeff Lai and\n Kushal Lakhotia and\n Yist Y. Lin and\n Andy T. Liu and\n Jiatong Shi and\n Xuankai Chang and\n Guan{-}Ting Lin and\n Tzu{-}Hsien Huang and\n Wei{-}Cheng Tseng and\n Ko{-}tik Lee and\n Da{-}Rong Liu and\n Zili Huang and\n Shuyan Dong and\n Shang{-}Wen Li and\n Shinji Watanabe and\n Abdelrahman Mohamed and\n Hung{-}yi Lee},\n title = {{SUPERB:} Speech processing Universal PERformance Benchmark},\n journal = {CoRR},\n volume = {abs/2105.01051},\n year = {2021},\n url = {https://arxiv.org/abs/2105.01051},\n archivePrefix = {arXiv},\n eprint = {2105.01051},\n timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://github.com/ftshijt/LibriMix", "license": "", "features": {"record_id": {"dtype": "string", "id": null, "_type": "Value"}, "file": {"dtype": "string", "id": null, "_type": "Value"}, "start": {"dtype": "int64", "id": null, "_type": "Value"}, "end": {"dtype": "int64", "id": null, "_type": "Value"}, "speakers": [{"speaker_id": {"dtype": "string", "id": null, "_type": "Value"}, "start": {"dtype": "int64", "id": null, "_type": "Value"}, "end": {"dtype": "int64", "id": null, "_type": "Value"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "superb", "config_name": "sd", "version": {"version_str": "1.9.0", "description": "", "major": 1, "minor": 9, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4622013, "num_examples": 13901, "dataset_name": "superb"}, "dev": {"name": "dev", "num_bytes": 860472, "num_examples": 3014, "dataset_name": "superb"}, "test": {"name": "test", "num_bytes": 847803, "num_examples": 3002, "dataset_name": "superb"}}, "download_checksums": {"https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/train/reco2dur": {"num_bytes": 540906, "checksum": "879dca4b1108c93bd86df879463fca15a4de42a0f95a7e6987138dc6029b5554"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/train/segments": {"num_bytes": 5723993, "checksum": "f19cb0ecc342f8d2cd855118879a111822d7cf55fcd078ef156f5147233a8e11"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/train/utt2spk": {"num_bytes": 3165995, "checksum": "a4295726caf05d72f5ad24706180b9dbccffe6c0c2fc0128ca4b02b7b828a28a"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/train/wav.zip": {"num_bytes": 5706733518, "checksum": "4231070427ffbc9b3bddae874dba32f3985a0db0b0feb4dfa29ed4d1d11bf41b"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/dev/reco2dur": {"num_bytes": 115918, "checksum": 
"a30fd59ad01db0315a82cad7a64baea009e6c2bcdfb6b2501bc8873ede72de06"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/dev/segments": {"num_bytes": 673006, "checksum": "2b977917e7ab9feec03afb4fd6a4662df90e48dbcc42977a4b9c89c8d40432ee"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/dev/utt2spk": {"num_bytes": 374794, "checksum": "9f47a7bed76e7a03e57d66ba9cc5f57d85d91f748d0b1eb20301d09e6c24cd20"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/dev/wav.zip": {"num_bytes": 765594100, "checksum": "e28b3422ce59e2a5273be924e6ed6b8f115c0983db1997e56441973c27ee1cd8"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/test/reco2dur": {"num_bytes": 113357, "checksum": "6e013d917015031e2f1383871b52dfc1122e7b16cdee53bd8e5e0a7fbc57e406"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/test/segments": {"num_bytes": 650742, "checksum": "92f8de0f56c55a34e9111542c24ea13f2d2efaf9ebe64af31250cadab020f987"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/test/utt2spk": {"num_bytes": 361548, "checksum": "19dcb558aa886f0d553d8d9b8735ea1998b83e96d5245e5511cb732c84625ffd"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/test/wav.zip": {"num_bytes": 706322334, "checksum": "9c8ee97d3068759c0101bf88684abab77183374dbb3bb40f7c0b25d385992ea6"}}, "download_size": 7190370211, "post_processing_size": null, "dataset_size": 6330288, "size_in_bytes": 7196700499}, "ks": {"description": "Self-supervised learning (SSL) has proven vital for advancing research in\nnatural language processing (NLP) and computer vision (CV). The paradigm\npretrains a shared model on large volumes of unlabeled data and achieves\nstate-of-the-art (SOTA) for various tasks with minimal adaptation. However, the\nspeech processing community lacks a similar setup to systematically explore the\nparadigm. To bridge this gap, we introduce Speech processing Universal\nPERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the\nperformance of a shared model across a wide range of speech processing tasks\nwith minimal architecture changes and labeled data. Among multiple usages of the\nshared model, we especially focus on extracting the representation learned from\nSSL due to its preferable re-usability. We present a simple framework to solve\nSUPERB tasks by learning task-specialized lightweight prediction heads on top of\nthe frozen shared model. Our results demonstrate that the framework is promising\nas SSL representations show competitive generalizability and accessibility\nacross SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a\nbenchmark toolkit to fuel the research in representation learning and general\nspeech processing.\n\nNote that in order to limit the required storage for preparing this dataset, the\naudio is stored in the .flac format and is not converted to a float32 array. To\nconvert, the audio file to a float32 array, please make use of the `.map()`\nfunction as follows:\n\n\n```python\nimport soundfile as sf\n\ndef map_to_array(batch):\n speech_array, _ = sf.read(batch[\"file\"])\n batch[\"speech\"] = speech_array\n return batch\n\ndataset = dataset.map(map_to_array, remove_columns=[\"file\"])\n```\n", "citation": "@article{DBLP:journals/corr/abs-2105-01051,\n author = {Shu{-}Wen Yang and\n Po{-}Han Chi and\n Yung{-}Sung Chuang and\n Cheng{-}I Jeff Lai and\n Kushal Lakhotia and\n Yist Y. Lin and\n Andy T. 
Liu and\n Jiatong Shi and\n Xuankai Chang and\n Guan{-}Ting Lin and\n Tzu{-}Hsien Huang and\n Wei{-}Cheng Tseng and\n Ko{-}tik Lee and\n Da{-}Rong Liu and\n Zili Huang and\n Shuyan Dong and\n Shang{-}Wen Li and\n Shinji Watanabe and\n Abdelrahman Mohamed and\n Hung{-}yi Lee},\n title = {{SUPERB:} Speech processing Universal PERformance Benchmark},\n journal = {CoRR},\n volume = {abs/2105.01051},\n year = {2021},\n url = {https://arxiv.org/abs/2105.01051},\n archivePrefix = {arXiv},\n eprint = {2105.01051},\n timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://www.tensorflow.org/datasets/catalog/speech_commands", "license": "", "features": {"file": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 12, "names": ["yes", "no", "up", "down", "left", "right", "on", "off", "stop", "go", "_silence_", "_unknown_"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": {"input": "file", "output": "label"}, "task_templates": null, "builder_name": "superb", "config_name": "ks", "version": {"version_str": "1.9.0", "description": "", "major": 1, "minor": 9, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 8467781, "num_examples": 51094, "dataset_name": "superb"}, "validation": {"name": "validation", "num_bytes": 1126476, "num_examples": 6798, "dataset_name": "superb"}, "test": {"name": "test", "num_bytes": 510619, "num_examples": 3081, "dataset_name": "superb"}}, "download_checksums": {"http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz": {"num_bytes": 1489096277, "checksum": "743935421bb51cccdb6bdd152e04c5c70274e935c82119ad7faeec31780d811d"}, "http://download.tensorflow.org/data/speech_commands_test_set_v0.01.tar.gz": {"num_bytes": 71271436, "checksum": "baa084f6b62c91de660ff0588ae4dfc4e4d534aa99ac0e5f406cba75836cbd00"}}, "download_size": 1560367713, "post_processing_size": null, "dataset_size": 10104876, "size_in_bytes": 1570472589}}
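The updated metadata now describes the `asr`, `sd` and `ks` configs; the split sizes can be read straight from this file, for example (a sketch, with the file assumed to be available locally):

```python
import json

with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

for config_name, info in infos.items():
    sizes = {name: split["num_examples"] for name, split in info["splits"].items()}
    print(config_name, sizes)
# asr {'train': 28539, 'validation': 2703, 'test': 2620}
# sd {'train': 13901, 'dev': 3014, 'test': 3002}
# ks {'train': 51094, 'validation': 6798, 'test': 3081}
```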
dummy/ks/1.9.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb2268a62de98190c456b9bc7eefdbd749b732a49068b54767e94250475aee08
+size 4763
dummy/sd/1.9.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ba0c33b119a2c65fd6748fd623d14adde2ca70536fa5475f848316441b25c0a
+size 3463
superb.py
CHANGED
@@ -20,6 +20,7 @@
 import glob
 import os
 import textwrap
+from dataclasses import dataclass

 import datasets
 from datasets.tasks import AutomaticSpeechRecognition
@@ -103,14 +104,18 @@ class SuperbConfig(datasets.BuilderConfig):

     def __init__(
         self,
+        features,
         data_url,
         url,
+        supervised_keys=None,
         task_templates=None,
         **kwargs,
     ):
-        super(
+        super().__init__(version=datasets.Version("1.9.0", ""), **kwargs)
+        self.features = features
         self.data_url = data_url
         self.url = url
+        self.supervised_keys = supervised_keys
         self.task_templates = task_templates

@@ -129,15 +134,6 @@ class Superb(datasets.GeneratorBasedBuilder):
                 training/validation/testing. The evaluation metric is word error
                 rate (WER)."""
             ),
-            url="http://www.openslr.org/12",
-            data_url="http://www.openslr.org/resources/12/",
-            task_templates=[AutomaticSpeechRecognition(audio_file_path_column="file", transcription_column="text")],
-        )
-    ]
-
-    def _info(self):
-        return datasets.DatasetInfo(
-            description=_DESCRIPTION,
             features=datasets.Features(
                 {
                     "file": datasets.Value("string"),
@@ -148,6 +144,82 @@ class Superb(datasets.GeneratorBasedBuilder):
                 }
             ),
             supervised_keys=("file", "text"),
+            url="http://www.openslr.org/12",
+            data_url="http://www.openslr.org/resources/12/",
+            task_templates=[AutomaticSpeechRecognition(audio_file_path_column="file", transcription_column="text")],
+        ),
+        SuperbConfig(
+            name="ks",
+            description=textwrap.dedent(
+                """\
+                Keyword Spotting (KS) detects preregistered keywords by classifying utterances into a predefined set of
+                words. The task is usually performed on-device for the fast response time. Thus, accuracy, model size, and
+                inference time are all crucial. SUPERB uses the widely used [Speech Commands dataset v1.0] for the task.
+                The dataset consists of ten classes of keywords, a class for silence, and an unknown class to include the
+                false positive. The evaluation metric is accuracy (ACC)"""
+            ),
+            features=datasets.Features(
+                {
+                    "file": datasets.Value("string"),
+                    "label": datasets.ClassLabel(
+                        names=[
+                            "yes",
+                            "no",
+                            "up",
+                            "down",
+                            "left",
+                            "right",
+                            "on",
+                            "off",
+                            "stop",
+                            "go",
+                            "_silence_",
+                            "_unknown_",
+                        ]
+                    ),
+                }
+            ),
+            supervised_keys=("file", "label"),
+            url="https://www.tensorflow.org/datasets/catalog/speech_commands",
+            data_url="http://download.tensorflow.org/data/{filename}",
+        ),
+        SuperbConfig(
+            name="sd",
+            description=textwrap.dedent(
+                """\
+                Speaker Diarization (SD) predicts `who is speaking when` for each timestamp, and multiple speakers can
+                speak simultaneously. The model has to encode rich speaker characteristics for each frame and should be
+                able to represent mixtures of signals. [LibriMix] is adopted where LibriSpeech
+                train-clean-100/dev-clean/test-clean are used to generate mixtures for training/validation/testing.
+                We focus on the two-speaker scenario as the first step. The time-coded speaker labels were generated using
+                alignments from Kaldi LibriSpeech ASR model. The evaluation metric is diarization error rate (DER)."""
+            ),
+            features=datasets.Features(
+                {
+                    "record_id": datasets.Value("string"),
+                    "file": datasets.Value("string"),
+                    "start": datasets.Value("int64"),
+                    "end": datasets.Value("int64"),
+                    "speakers": [
+                        {
+                            "speaker_id": datasets.Value("string"),
+                            "start": datasets.Value("int64"),
+                            "end": datasets.Value("int64"),
+                        }
+                    ],
+                }
+            ),  # TODO
+            supervised_keys=None,  # TODO
+            url="https://github.com/ftshijt/LibriMix",
+            data_url="https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/{split}/{filename}",
+        ),
+    ]
+
+    def _info(self):
+        return datasets.DatasetInfo(
+            description=_DESCRIPTION,
+            features=self.config.features,
+            supervised_keys=self.config.supervised_keys,
             homepage=self.config.url,
             citation=_CITATION,
             task_templates=self.config.task_templates,
@@ -169,24 +241,235 @@ class Superb(datasets.GeneratorBasedBuilder):
             ),
             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"archive_path": archive_path["test"]}),
         ]
+        elif self.config.name == "ks":
+            _DL_URLS = {
+                "train_val_test": self.config.data_url.format(filename="speech_commands_v0.01.tar.gz"),
+                "test": self.config.data_url.format(filename="speech_commands_test_set_v0.01.tar.gz"),
+            }
+            archive_path = dl_manager.download_and_extract(_DL_URLS)
+            return [
+                datasets.SplitGenerator(
+                    name=datasets.Split.TRAIN,
+                    gen_kwargs={"archive_path": archive_path["train_val_test"], "split": "train"},
+                ),
+                datasets.SplitGenerator(
+                    name=datasets.Split.VALIDATION,
+                    gen_kwargs={"archive_path": archive_path["train_val_test"], "split": "val"},
+                ),
+                datasets.SplitGenerator(
+                    name=datasets.Split.TEST, gen_kwargs={"archive_path": archive_path["test"], "split": "test"}
+                ),
+            ]
+        elif self.config.name == "sd":
+            splits = ["train", "dev", "test"]
+            _DL_URLS = {
+                split: {
+                    filename: self.config.data_url.format(split=split, filename=filename)
+                    for filename in ["reco2dur", "segments", "utt2spk", "wav.zip"]
+                }
+                for split in splits
+            }
+            archive_path = dl_manager.download_and_extract(_DL_URLS)
+            return [
+                datasets.SplitGenerator(
+                    name=datasets.NamedSplit(split), gen_kwargs={"archive_path": archive_path[split], "split": split}
+                )
+                for split in splits
+            ]

-    def _generate_examples(self, archive_path):
+    def _generate_examples(self, archive_path, split=None):
         """Generate examples."""
+        if self.config.name == "asr":
+            transcripts_glob = os.path.join(archive_path, "LibriSpeech", "*/*/*/*.txt")
+            key = 0
+            for transcript_path in sorted(glob.glob(transcripts_glob)):
+                transcript_dir_path = os.path.dirname(transcript_path)
+                with open(transcript_path, "r", encoding="utf-8") as f:
+                    for line in f:
+                        line = line.strip()
+                        id_, transcript = line.split(" ", 1)
+                        audio_file = f"{id_}.flac"
+                        speaker_id, chapter_id = [int(el) for el in id_.split("-")[:2]]
+                        yield key, {
+                            "id": id_,
+                            "speaker_id": speaker_id,
+                            "chapter_id": chapter_id,
+                            "file": os.path.join(transcript_dir_path, audio_file),
+                            "text": transcript,
+                        }
+                        key += 1
+        elif self.config.name == "ks":
+            words = ["yes", "no", "up", "down", "left", "right", "on", "off", "stop", "go"]
+            splits = _split_ks_files(archive_path, split)
+            for key, audio_file in enumerate(sorted(splits[split])):
+                base_dir, file_name = os.path.split(audio_file)
+                _, word = os.path.split(base_dir)
+                if word in words:
+                    label = word
+                elif word == "_silence_" or word == "_background_noise_":
+                    label = "_silence_"
+                else:
+                    label = "_unknown_"
+                yield key, {"file": audio_file, "label": label}
+        elif self.config.name == "sd":
+            data = SdData(archive_path)
+            args = SdArgs()
+            chunk_indices = _generate_chunk_indices(data, args, split=split)
+            if split != "test":
+                for key, (rec, st, ed) in enumerate(chunk_indices):
+                    speakers = _get_speakers(rec, data, args)
                     yield key, {
+                        "record_id": rec,
+                        "file": data.wavs[rec],
+                        "start": st,
+                        "end": ed,
+                        "speakers": speakers,
                     }
+            else:
+                key = 0
+                for rec in chunk_indices:
+                    for rec, st, ed in chunk_indices[rec]:
+                        speakers = _get_speakers(rec, data, args)
+                        yield key, {
+                            "record_id": rec,
+                            "file": data.wavs[rec],
+                            "start": st,
+                            "end": ed,
+                            "speakers": speakers,
+                        }
+                        key += 1
+
+
+class SdData:
+    def __init__(self, data_dir):
+        """Load sd data."""
+        self.segments = self._load_segments_rechash(data_dir["segments"])
+        self.utt2spk = self._load_utt2spk(data_dir["utt2spk"])
+        self.wavs = self._load_wav_zip(data_dir["wav.zip"])
+        self.reco2dur = self._load_reco2dur(data_dir["reco2dur"])
+
+    def _load_segments_rechash(self, segments_file):
+        """Load segments file as dict with recid index."""
+        ret = {}
+        if not os.path.exists(segments_file):
+            return None
+        with open(segments_file, encoding="utf-8") as f:
+            for line in f:
+                utt, rec, st, et = line.strip().split()
+                if rec not in ret:
+                    ret[rec] = []
+                ret[rec].append({"utt": utt, "st": float(st), "et": float(et)})
+        return ret
+
+    def _load_wav_zip(self, wav_zip):
+        """Return dictionary { rec: wav_rxfilename }."""
+        wav_dir = os.path.join(wav_zip, "wav")
+        return {
+            os.path.splitext(filename)[0]: os.path.join(wav_dir, filename) for filename in sorted(os.listdir(wav_dir))
+        }
+
+    def _load_utt2spk(self, utt2spk_file):
+        """Returns dictionary { uttid: spkid }."""
+        with open(utt2spk_file, encoding="utf-8") as f:
+            lines = [line.strip().split(None, 1) for line in f]
+        return {x[0]: x[1] for x in lines}
+
+    def _load_reco2dur(self, reco2dur_file):
+        """Returns dictionary { recid: duration }."""
+        if not os.path.exists(reco2dur_file):
+            return None
+        with open(reco2dur_file, encoding="utf-8") as f:
+            lines = [line.strip().split(None, 1) for line in f]
+        return {x[0]: float(x[1]) for x in lines}
+
+
+@dataclass
+class SdArgs:
+    chunk_size: int = 2000
+    frame_shift: int = 160
+    subsampling: int = 1
+    label_delay: int = 0
+    num_speakers: int = 2
+    rate: int = 16000
+    use_last_samples: bool = True
+
+
+def _generate_chunk_indices(data, args, split=None):
+    chunk_indices = [] if split != "test" else {}
+    # make chunk indices: filepath, start_frame, end_frame
+    for rec in data.wavs:
+        data_len = int(data.reco2dur[rec] * args.rate / args.frame_shift)
+        data_len = int(data_len / args.subsampling)
+        if split == "test":
+            chunk_indices[rec] = []
+        if split != "test":
+            for st, ed in _gen_frame_indices(
+                data_len,
+                args.chunk_size,
+                args.chunk_size,
+                args.use_last_samples,
+                label_delay=args.label_delay,
+                subsampling=args.subsampling,
+            ):
+                chunk_indices.append((rec, st * args.subsampling, ed * args.subsampling))
+        else:
+            for st, ed in _gen_chunk_indices(data_len, args.chunk_size):
+                chunk_indices[rec].append((rec, st * args.subsampling, ed * args.subsampling))
+    return chunk_indices
+
+
+def _count_frames(data_len, size, step):
+    # no padding at edges, last remaining samples are ignored
+    return int((data_len - size + step) / step)
+
+
+def _gen_frame_indices(data_length, size=2000, step=2000, use_last_samples=False, label_delay=0, subsampling=1):
+    i = -1
+    for i in range(_count_frames(data_length, size, step)):
+        yield i * step, i * step + size
+    if use_last_samples and i * step + size < data_length:
+        if data_length - (i + 1) * step - subsampling * label_delay > 0:
+            yield (i + 1) * step, data_length
+
+
+def _gen_chunk_indices(data_len, chunk_size):
+    step = chunk_size
+    start = 0
+    while start < data_len:
+        end = min(data_len, start + chunk_size)
+        yield start, end
+        start += step
+
+
+def _get_speakers(rec, data, args):
+    return [
+        {
+            "speaker_id": data.utt2spk[segment["utt"]],
+            "start": round(segment["st"] * args.rate / args.frame_shift),
+            "end": round(segment["et"] * args.rate / args.frame_shift),
+        }
+        for segment in data.segments[rec]
+    ]
+
+
+def _split_ks_files(archive_path, split):
+    audio_path = os.path.join(archive_path, "**/*.wav")
+    audio_paths = glob.glob(audio_path)
+    if split == "test":
+        # use all available files for the test archive
+        return {"test": audio_paths}
+
+    val_list_file = os.path.join(archive_path, "validation_list.txt")
+    test_list_file = os.path.join(archive_path, "testing_list.txt")
+    with open(val_list_file, encoding="utf-8") as f:
+        val_paths = f.read().strip().splitlines()
+        val_paths = [os.path.join(archive_path, p) for p in val_paths]
+    with open(test_list_file, encoding="utf-8") as f:
+        test_paths = f.read().strip().splitlines()
+        test_paths = [os.path.join(archive_path, p) for p in test_paths]
+
+    # the paths for the train set is just whichever paths that do not exist in
+    # either the test or validation splits
+    train_paths = list(set(audio_paths) - set(val_paths) - set(test_paths))
+
+    return {"train": train_paths, "val": val_paths}
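To illustrate how the new `sd` recordings are cut into fixed-size windows, here is a stand-alone copy of the chunking helpers from the diff above together with a tiny example (the defaults match `SdArgs`: 2000-frame chunks):

```python
def _count_frames(data_len, size, step):
    # no padding at edges, last remaining samples are ignored
    return int((data_len - size + step) / step)


def _gen_frame_indices(data_length, size=2000, step=2000, use_last_samples=False, label_delay=0, subsampling=1):
    i = -1
    for i in range(_count_frames(data_length, size, step)):
        yield i * step, i * step + size
    if use_last_samples and i * step + size < data_length:
        if data_length - (i + 1) * step - subsampling * label_delay > 0:
            yield (i + 1) * step, data_length


def _gen_chunk_indices(data_len, chunk_size):
    step = chunk_size
    start = 0
    while start < data_len:
        end = min(data_len, start + chunk_size)
        yield start, end
        start += step


# A 5000-frame recording split with the default 2000-frame chunk size:
print(list(_gen_frame_indices(5000, use_last_samples=True)))  # train/dev path: [(0, 2000), (2000, 4000), (4000, 5000)]
print(list(_gen_chunk_indices(5000, 2000)))                   # test path:      [(0, 2000), (2000, 4000), (4000, 5000)]
```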