End of training
Browse files
- README.md +20 -8
- generation_config.json +1 -2
- model.safetensors +1 -1
README.md
CHANGED
@@ -1,6 +1,7 @@
 ---
 library_name: transformers
-
+license: mit
+base_model: MBZUAI/speecht5_tts_clartts_ar
 tags:
 - generated_from_trainer
 datasets:
@@ -15,14 +16,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # speecht5_sindhi
 
-This model is a fine-tuned version of [
+This model is a fine-tuned version of [MBZUAI/speecht5_tts_clartts_ar](https://huggingface.co/MBZUAI/speecht5_tts_clartts_ar) on the fleurs dataset.
 It achieves the following results on the evaluation set:
--
-- eval_runtime: 14.4754
-- eval_samples_per_second: 19.205
-- eval_steps_per_second: 9.603
-- epoch: 6.3898
-- step: 500
+- Loss: 0.4007
 
 ## Model description
 
@@ -53,6 +49,22 @@ The following hyperparameters were used during training:
 - training_steps: 1000
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch   | Step | Validation Loss |
+|:-------------:|:-------:|:----:|:---------------:|
+| 0.4691        | 2.5974  | 100  | 0.4582          |
+| 0.4505        | 5.1948  | 200  | 0.4339          |
+| 0.4399        | 7.7922  | 300  | 0.4288          |
+| 0.4277        | 10.3896 | 400  | 0.4193          |
+| 0.4188        | 12.9870 | 500  | 0.4128          |
+| 0.4111        | 15.5844 | 600  | 0.4041          |
+| 0.4085        | 18.1818 | 700  | 0.4025          |
+| 0.4033        | 20.7792 | 800  | 0.4005          |
+| 0.3992        | 23.3766 | 900  | 0.4000          |
+| 0.399         | 25.9740 | 1000 | 0.4007          |
+
+
 ### Framework versions
 
 - Transformers 4.44.2
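Not part of the commit itself: a minimal inference sketch for the checkpoint this card describes, assuming the fine-tuned weights are published under a repo id such as `your-username/speecht5_sindhi` (a placeholder, not confirmed by this commit) and that a 512-dimensional x-vector speaker embedding is available. It follows the standard SpeechT5 text-to-speech API in transformers.

```python
# Hedged usage sketch for the fine-tuned SpeechT5 checkpoint described above.
# "your-username/speecht5_sindhi" is a placeholder repo id, not confirmed by this commit.
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo_id = "your-username/speecht5_sindhi"  # hypothetical location of these weights
processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="...", return_tensors="pt")  # put Sindhi input text here

# SpeechT5 conditions on a 512-dim speaker embedding (x-vector); a zero vector is
# only a stand-in so the snippet runs end to end.
speaker_embeddings = torch.zeros((1, 512))

# Returns a 1-D waveform tensor at 16 kHz.
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```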
generation_config.json
CHANGED
@@ -5,6 +5,5 @@
   "eos_token_id": 2,
   "max_length": 1876,
   "pad_token_id": 1,
-  "transformers_version": "4.44.2",
-  "use_cache": false
+  "transformers_version": "4.44.2"
 }
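For context on the `generation_config.json` change above (dropping `use_cache`): a small sketch, again with a placeholder repo id, of how the remaining fields can be inspected with transformers.

```python
# Hedged sketch: reading the updated generation config; the repo id is a placeholder.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("your-username/speecht5_sindhi")
print(gen_config.max_length)    # 1876, as set in the file above
print(gen_config.eos_token_id)  # 2
```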
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:84dbf7bbe48ea19d914eba07905d260e64bc5ce33fab2e8e03d4983e7e2ca86c
 size 577902984