fahadqazi committed · Commit 523b6e3 · verified · 1 Parent(s): 0a40f9a

End of training

README.md CHANGED
@@ -16,7 +16,12 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [fahadqazi/Sindhi-TTS](https://huggingface.co/fahadqazi/Sindhi-TTS) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4421
+- eval_loss: 0.4602
+- eval_runtime: 47.8291
+- eval_samples_per_second: 36.421
+- eval_steps_per_second: 18.211
+- epoch: 13.2653
+- step: 6500
 
 ## Model description
 
@@ -35,34 +40,18 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-06
-- train_batch_size: 8
+- learning_rate: 0.0001
+- train_batch_size: 16
 - eval_batch_size: 2
 - seed: 42
-- gradient_accumulation_steps: 4
+- gradient_accumulation_steps: 2
 - total_train_batch_size: 32
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 200
-- training_steps: 1000
+- training_steps: 10000
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch  | Step | Validation Loss |
-|:-------------:|:------:|:----:|:---------------:|
-| 0.4462        | 0.2042 | 100  | 0.4506          |
-| 0.4456        | 0.4084 | 200  | 0.4482          |
-| 0.4424        | 0.6126 | 300  | 0.4468          |
-| 0.4501        | 0.8167 | 400  | 0.4429          |
-| 0.4393        | 1.0219 | 500  | 0.4435          |
-| 0.4518        | 1.2261 | 600  | 0.4439          |
-| 0.4459        | 1.4303 | 700  | 0.4423          |
-| 0.4404        | 1.6345 | 800  | 0.4430          |
-| 0.4414        | 1.8387 | 900  | 0.4423          |
-| 0.4396        | 2.0439 | 1000 | 0.4421          |
-
-
 ### Framework versions
 
 - Transformers 4.46.2
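The updated hyperparameters are internally consistent, and the relations can be checked with simple arithmetic. This is a hedged sketch of the assumed relations (the commit does not include the training script, so the formulas here are the usual Hugging Face Trainer conventions, not something stated in the card):

```python
# Effective (total) train batch size: per-device batch size times
# gradient accumulation steps, per the usual Trainer convention.
train_batch_size = 16            # new value in the diff
gradient_accumulation_steps = 2  # new value in the diff
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching total_train_batch_size in the card

# Eval throughput: samples/sec is approximately steps/sec times the
# eval batch size (assumed relation; full batches only).
eval_steps_per_second = 18.211
eval_batch_size = 2
approx_samples_per_second = eval_steps_per_second * eval_batch_size
print(approx_samples_per_second)  # ~36.42, close to the reported 36.421
```

Note the old configuration reached the same total batch size of 32 by a different split (8 × 4 instead of 16 × 2), so only per-step memory use changes, not the effective batch.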
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3937bf7d83f452fa0aabe6f63e8cfee13bbbf7290b5f1e31648498c39ed0ed66
+oid sha256:61247efcd96572c3f52f6d23bee60e3ae41a058b91d77f8b0abe3b846cb6bf26
 size 617574792
runs/Nov18_22-15-13_40219603e0ec/events.out.tfevents.1731968118.40219603e0ec.425.5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c164118656598e4475bc2aecced580892ad826c2dd2e81248d4d0ee355a03baa
-size 77955
+oid sha256:406858290ef9506da99461a7020e3022af8fbf919e69f0d09eafb46e3307c7ce
+size 79070