greenw0lf committed
Commit 9de47c8 · Parent: 89798e5

update model card README.md

Files changed (1): README.md (+8, -47)
README.md CHANGED
@@ -21,33 +21,7 @@ model-index:
     metrics:
     - name: Wer
       type: wer
-      value: 0.15077102723494865
-  - task:
-      name: Automatic Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: common_voice_13_0
-      type: common_voice_13_0
-      config: fy-NL
-      split: test
-      args: fy-NL
-    metrics:
-    - name: Wer
-      type: wer
-      value: 0.13990069099621516
-  - task:
-      name: Automatic Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: common_voice_8_0
-      type: common_voice_8_0
-      config: fy-NL
-      split: test
-      args: fy-NL
-    metrics:
-    - name: Wer
-      type: wer
-      value: 0.14409596762537938
+      value: 0.1492598825428444
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -57,31 +31,23 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice_13_0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2206
-- Wer: 0.1508
-
-And for the test set:
-- Wer: 0.1399
+- Loss: 0.2204
+- Wer: 0.1493
 
-When evaluated on common_voice_8_0 dataset:
-- Wer: 0.1441
+## Model description
 
-This model was developed together with [golesheed](https://huggingface.co/golesheed) for the course "Speech Recognition II" of the "MSc Voice Technology" program at Rijksuniversiteit Groningen - Campus Fryslân.
+More information needed
 
 ## Intended uses & limitations
 
-Intended use is for recognizing Frisian speech.
-
-Main limitation is no LM rescoring.
+More information needed
 
 ## Training and evaluation data
 
-Training and evaluation splits used are the ones available in the Common Voice dataset.
+More information needed
 
 ## Training procedure
 
-To be added later once the notebook used for training is pushed to GitHub.
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -94,7 +60,7 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 60
+- num_epochs: 30
 - mixed_precision_training: Native AMP
 
 ### Training results
@@ -120,11 +86,6 @@ The following hyperparameters were used during training:
 | 0.5204 | 41.76 | 5100 | 0.2181 | 0.1587 |
 | 0.512  | 44.21 | 5400 | 0.2263 | 0.1607 |
 | 0.465  | 46.66 | 5700 | 0.2204 | 0.1493 |
-| 0.4482 | 49.11 | 6000 | 0.2143 | 0.1527 |
-| 0.3972 | 51.63 | 6300 | 0.2198 | 0.1617 |
-| 0.3168 | 54.09 | 6600 | 0.2170 | 0.1528 |
-| 0.2432 | 56.53 | 6900 | 0.2182 | 0.1529 |
-| 0.252  | 58.98 | 7200 | 0.2206 | 0.1508 |
 
 
  ### Framework versions
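For reference, the card describes a wav2vec2 model fine-tuned for Frisian CTC-based speech recognition. A minimal transcription sketch with 🤗 Transformers might look like the following; the repository id is a placeholder (the commit page does not show it), and 16 kHz mono input is assumed:

```python
# Minimal CTC transcription sketch. Assumptions: the checkpoint follows the
# standard wav2vec2-for-CTC layout, and the repo id below is a placeholder.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo_id = "your-namespace/wav2vec2-xls-r-1b-frisian"  # placeholder id

processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)
model.eval()

waveform, sr = torchaudio.load("frisian_sample.wav")
if sr != 16_000:  # XLS-R checkpoints expect 16 kHz mono audio
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding (no language-model rescoring, matching the
# "no LM rescoring" limitation noted in the previous card text)
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```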
 
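The `Wer` values in the metadata and results table are word error rates, where lower is better. A sketch of how such a score is typically computed with the `evaluate` library (the Frisian sample transcripts below are invented):

```python
# WER = (substitutions + insertions + deletions) / number of reference words
import evaluate

wer = evaluate.load("wer")
predictions = ["dit is in foarbyld", "noch in sin"]   # hypothetical model output
references = ["dit is in foarbyld", "noch in sinne"]  # hypothetical ground truth
print(wer.compute(predictions=predictions, references=references))
```

A WER of 0.1493 thus means that roughly 15 of every 100 reference words are substituted, inserted, or deleted in the model's output.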
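The hyperparameters listed in the diff map directly onto `transformers.TrainingArguments`. A sketch under the assumption that the standard `Trainer` API was used; `output_dir`, `learning_rate`, and the batch sizes are placeholders, since the visible hunks do not show them:

```python
# Sketch only: values marked "placeholder" are not shown in the diff.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-1b-frisian",  # placeholder
    learning_rate=1e-4,                      # placeholder
    adam_beta1=0.9,                          # Adam with betas=(0.9, 0.98)
    adam_beta2=0.98,
    adam_epsilon=1e-8,                       # and epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,                     # reduced from 60 in this commit
    fp16=True,                               # "Native AMP" mixed precision
)
```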