KevinKibe committed · Commit 8f7958b (verified) · Parent: 8a54d00

Model save

Files changed (1): README.md (+15 -14)
README.md CHANGED
@@ -14,19 +14,19 @@ model-index:
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/keviinkibe/huggingface/runs/3b4u4ih8)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/keviinkibe/huggingface/runs/3b4u4ih8)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/keviinkibe/huggingface/runs/7iywhhxw)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/keviinkibe/huggingface/runs/7iywhhxw)
  # whisper-small-finetuned-finetuned

  This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_16_1 dataset.
  It achieves the following results on the evaluation set:
- - eval_loss: 4.2187
- - eval_wer: 277.0115
- - eval_runtime: 37.2641
- - eval_samples_per_second: 0.268
- - eval_steps_per_second: 0.027
- - epoch: 19.05
- - step: 20
+ - eval_loss: 3.4388
+ - eval_wer: 101.3774
+ - eval_runtime: 370.6019
+ - eval_samples_per_second: 0.27
+ - eval_steps_per_second: 0.011
+ - epoch: 6.008
+ - step: 100

  ## Model description

@@ -46,18 +46,19 @@ More information needed

  The following hyperparameters were used during training:
  - learning_rate: 0.0001
- - train_batch_size: 16
- - eval_batch_size: 16
+ - train_batch_size: 32
+ - eval_batch_size: 32
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - training_steps: 20
+ - lr_scheduler_type: constant_with_warmup
+ - lr_scheduler_warmup_steps: 10
+ - training_steps: 500
  - mixed_precision_training: Native AMP

  ### Framework versions

  - PEFT 0.11.1
  - Transformers 4.42.3
- - Pytorch 2.2.2+cu121
+ - Pytorch 2.4.1+cu121
  - Datasets 2.19.2
  - Tokenizers 0.19.1
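A note on the reported metrics: word error rate is the edit distance between hypothesis and reference divided by the number of reference words, so a WER above 100 (as in both the old and new results) simply means the decoded transcripts contain more errors than the references have words. The card does not include the metric code; below is a minimal sketch of how such a WER is conventionally computed with the Hugging Face `evaluate` library, using hypothetical transcripts.

```python
import evaluate

# Load the standard WER metric; the card does not ship its metric code,
# so this is an assumed but conventional setup for Whisper fine-tuning.
wer_metric = evaluate.load("wer")

predictions = ["the cat sat sat on the mat today"]  # hypothetical model output
references = ["the cat sat on the mat"]             # hypothetical ground truth
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
# WER = (substitutions + insertions + deletions) / reference words, as a
# percentage; insertions can push this above 100, as in the eval_wer here.
print(f"WER: {wer:.2f}")
```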
 
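For orientation, the updated hyperparameters map onto standard `transformers` training arguments roughly as follows. This is a sketch reconstructed from the card, not the actual training script (which is not part of this commit); the parameter names are the standard Transformers ones and the `output_dir` is assumed from the card title.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of training arguments matching the values in the updated card.
# The real training script may differ in details not listed on the card.
args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-finetuned-finetuned",  # assumed from card title
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=10,
    max_steps=500,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```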
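Since the framework versions list PEFT 0.11.1, the commit presumably saves a PEFT adapter rather than full model weights. A minimal sketch of loading it for inference, assuming the adapter lives at a repo id matching the card title under the committer's namespace:

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the frozen base model, then attach the fine-tuned PEFT adapter.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(
    base,
    "KevinKibe/whisper-small-finetuned-finetuned",  # assumed repo id
)
# The processor (feature extractor + tokenizer) comes from the base model.
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model.eval()
```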