---
language:
- zh
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper large-v2 nan-tw
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: mozilla-foundation/common_voice_11_0 nan-tw
      type: mozilla-foundation/common_voice_11_0
      config: nan-tw
      split: train
      args: nan-tw
    metrics:
    - name: Wer
      type: wer
      value: 118.50381679389312
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper large-v2 nan-tw

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 nan-tw dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2129
- Wer: 118.5038
- Cer: 123.4531
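
A minimal inference sketch using the 🤗 Transformers `pipeline` API is shown below. The repository id used here is an assumption; replace it with this checkpoint's actual Hub id. Whisper large-v2 has roughly 1.5B parameters, so GPU inference is strongly recommended.

```python
from transformers import pipeline

# Hypothetical repository id; substitute the actual Hub id of this checkpoint.
MODEL_ID = "thomas0104/whisper-large-v2-nan-tw"

# Build an automatic-speech-recognition pipeline from the fine-tuned weights.
asr = pipeline("automatic-speech-recognition", model=MODEL_ID)

# Transcribe an audio file; the pipeline handles decoding and resampling to 16 kHz
# (ffmpeg must be available on the system).
print(asr("audio.wav")["text"])
```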

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
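
For readers who want to reproduce the setup, the list above maps roughly onto the `Seq2SeqTrainingArguments` sketched below. This is a reconstruction, not the original training script: `output_dir`, the evaluation cadence, `fp16`, and `predict_with_generate` are assumptions, while the Adam betas and epsilon listed above are the 🤗 Trainer defaults and need no explicit arguments.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameter list above; flags marked "assumed" are not
# documented in this card and may differ from the author's actual script.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-nan-tw",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="steps",  # assumed; the results table reports metrics every 1000 steps
    eval_steps=1000,              # assumed from the results table
    predict_with_generate=True,   # typical for Whisper fine-tuning; assumed
)
```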

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer      | Cer      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 3.5696        | 1.04  | 1000 | 3.4190          | 96.8550  | 96.8910  |
| 3.1453        | 2.08  | 2000 | 3.2383          | 98.9313  | 98.9436  |
| 3.0722        | 3.13  | 3000 | 3.2043          | 129.0687 | 158.5270 |
| 2.8327        | 5.01  | 4000 | 3.2258          | 327.9084 | 333.0516 |
| 2.6468        | 6.05  | 5000 | 3.2129          | 118.5038 | 123.4531 |
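
Wer and Cer above are percentages and can exceed 100, because word/character error rate counts substitutions, deletions, and insertions against the reference length. The sketch below shows how these metrics are typically computed with the 🤗 `evaluate` library; the exact evaluation script is not part of this card, so the snippet is illustrative only.

```python
import evaluate  # pip install evaluate jiwer

# Load the word- and character-error-rate metrics from the Hub.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Toy example: one reference transcript and one hypothesis.
references = ["today is a good day"]
predictions = ["today is a nice day indeed"]

print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```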

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2