lhoestq committed
Commit d0401a8 · 1 Parent(s): 90cac85

Update datasets task tags to align tags with models (#4067)

* update tasks list

* update tags in dataset cards

* more cards updates

* update dataset tags parser

* fix multi-choice-qa

* style

* small improvements in some dataset cards

* allow certain tag fields to be empty

* update vision datasets tags

* use multi-class-image-classification and remove other tags

Commit from https://github.com/huggingface/datasets/commit/edb4411d4e884690b8b328dba4360dbda6b3cbc8
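One of the commit message items is "update dataset tags parser". As a rough illustration of what such a parser does, here is a minimal sketch (a hypothetical helper written for this note, not the actual `datasets` parser) that extracts `task_categories` and `task_ids` from a card's YAML front matter:

```python
import re

# Minimal sketch (hypothetical helper, not datasets' real tags parser):
# pull the YAML front matter out of a README and collect the task tags.

def parse_task_tags(readme_text):
    """Return (task_categories, task_ids) lists from a card's front matter."""
    match = re.match(r"^---\n(.*?)\n---", readme_text, re.DOTALL)
    if not match:
        return [], []
    categories, ids, current = [], [], None
    for line in match.group(1).splitlines():
        if line.startswith("task_categories:"):
            current = categories
        elif line.startswith("task_ids:"):
            current = ids
        elif line.startswith("- ") and current is not None:
            current.append(line[2:].strip())
        elif not line.startswith("-"):
            current = None  # any other top-level key ends the current list
    return categories, ids
```

The real parser also validates tags against an allowed list (and, per this commit, allows certain tag fields to be empty); the sketch only shows the extraction step.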

Files changed (1):
  1. README.md +4 -3
README.md CHANGED

```diff
@@ -17,9 +17,10 @@ size_categories:
 source_datasets:
 - original
 task_categories:
-- speech-processing
-task_ids:
 - automatic-speech-recognition
+- audio-classification
+task_ids:
+- audio-speaker-identification
 ---
 
 # Dataset Card for librispeech_asr
@@ -62,7 +63,7 @@ LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech
 
 ### Supported Tasks and Leaderboards
 
-- `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean and ranks models based on their WER.
+- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean and ranks models based on their WER.
 
 ### Languages
```
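For reference, the task-related portion of the card's YAML front matter after this change reads as follows (a sketch reconstructed from the diff; surrounding metadata fields are omitted):

```yaml
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- audio-speaker-identification
```

This aligns the card with the model-side tag taxonomy: `task_categories` now holds broad task names shared with models, while `task_ids` holds the finer-grained subtask.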
69