End of training
README.md CHANGED

@@ -4,6 +4,8 @@ license: apache-2.0
 base_model: openai/whisper-small
 tags:
 - generated_from_trainer
+metrics:
+- wer
 model-index:
 - name: whisper-small-Dzo
   results: []
@@ -15,6 +17,9 @@ should probably proofread and complete it, then remove this comment. -->
 # whisper-small-Dzo
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.6577
+- Wer: 360.5263
 
 ## Model description
 
@@ -34,17 +39,22 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size:
-- eval_batch_size:
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 42
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps:
+- training_steps: 0
 - mixed_precision_training: Native AMP
 
 ### Training results
 
+| Training Loss | Epoch | Step | Validation Loss | Wer      |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| No log        | 1.0   | 20   | 1.5041          | 452.6316 |
+| 1.5599        | 2.0   | 40   | 0.8098          | 489.4737 |
+| 0.8495        | 3.0   | 60   | 0.6577          | 360.5263 |
 
 
 ### Framework versions
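The Wer figures added to the card are word error rates logged as percentages, as in the usual Whisper fine-tuning recipes; values above 100 are possible when the predictions contain many insertions or substitutions relative to the references. A minimal sketch (not code from this repository) of how such a figure is computed with the `evaluate` library, using placeholder transcripts:

```python
# Minimal sketch of the WER metric as reported above; the example transcripts
# are placeholders, not data from this model's evaluation set.
import evaluate

wer_metric = evaluate.load("wer")

references = ["a short reference transcript"]            # hypothetical ground truth
predictions = ["a much longer noisy predicted output"]   # hypothetical model output

wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}")  # can exceed 100 when insertions dominate
```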
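The hyperparameter list in the diff maps directly onto transformers' `Seq2SeqTrainingArguments`. The sketch below is one plausible way those values could be expressed under a recent transformers version, not the training script from this repository; `output_dir`, `num_train_epochs`, and `eval_strategy` are inferred (the card itself lists `training_steps: 0`, while the results table suggests three epochs of 20 steps each).

```python
# Hedged reconstruction of the listed hyperparameters with Seq2SeqTrainingArguments;
# output_dir, num_train_epochs, and eval_strategy are assumptions, not from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-Dzo",   # hypothetical output directory
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",              # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,               # assumption: the table shows epochs 1-3, 60 steps total
    fp16=True,                        # "Native AMP"; requires a CUDA device at runtime
    eval_strategy="epoch",            # assumption: one evaluation per epoch, as in the table
)
```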
generation_config.json CHANGED

@@ -150,7 +150,7 @@
 "<|yo|>": 50325,
 "<|zh|>": 50260
 },
-"language": "
+"language": "bo",
 "max_initial_timestamp_index": 50,
 "max_length": 448,
 "no_timestamps_token_id": 50363,
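The functional change here is pinning `"language": "bo"` in the generation config; `bo` is Whisper's language code for Tibetan, presumably chosen as the closest available option since Dzongkha itself is not among Whisper's predefined languages. A hedged usage sketch of how that setting plays out at inference time, with a hypothetical Hub id for this checkpoint and dummy audio standing in for real input:

```python
# Hedged sketch of transcription with the language pinned to "bo"; the repo id and
# the silent dummy audio are placeholders, not part of this repository.
import numpy as np
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "whisper-small-Dzo"  # hypothetical Hub id of this checkpoint
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

audio = np.zeros(16_000, dtype=np.float32)  # one second of silence as stand-in audio
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

# With "language" already set in generation_config.json, a plain generate() call uses it;
# passing language/task explicitly just makes the choice visible in code.
predicted_ids = model.generate(inputs.input_features, language="bo", task="transcribe")
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```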
model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:9cde550b0003abb581a81c0cdcf5f9b84b4f575eb38480a5c4cbe4b16ae6a7cb
 size 966995080
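The weights entry is a Git LFS pointer: the repository stores only the `oid sha256:` digest and the byte size, while the 966,995,080-byte safetensors file lives in LFS storage. A hedged sketch of checking a locally downloaded copy against that digest (the local path is illustrative):

```python
# Verify a downloaded model.safetensors against the sha256 recorded in the LFS pointer.
# The file path is a placeholder for wherever the weights were downloaded to.
import hashlib

EXPECTED_SHA256 = "9cde550b0003abb581a81c0cdcf5f9b84b4f575eb38480a5c4cbe4b16ae6a7cb"

digest = hashlib.sha256()
with open("model.safetensors", "rb") as f:             # hypothetical local path
    for chunk in iter(lambda: f.read(1 << 20), b""):   # hash in 1 MiB chunks
        digest.update(chunk)

print("match" if digest.hexdigest() == EXPECTED_SHA256 else "checksum mismatch")
```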
runs/Jan22_08-16-20_a4970eb8e8c6/events.out.tfevents.1737533790.a4970eb8e8c6.1084.0 CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:b61da17d77cf6af0ab4f36f2029b0cb60c325d69c66c2316647279ed99b1b7a6
+size 8520