# wav2vec2-Y_speed_pause2
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.6467
- Cer: 38.8660
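
The card does not include a usage snippet; below is a minimal inference sketch, assuming this is a standard CTC checkpoint with a bundled processor (the repo id is taken from this card, the audio file path is a placeholder, and 16 kHz mono input matches the base xls-r model's expectation).

```python
# Minimal inference sketch (assumptions noted above).
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Gummybear05/wav2vec2-Y_speed_pause2"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load audio resampled to 16 kHz ("sample.wav" is a placeholder path).
speech, sr = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding of the most likely token ids.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```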
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged reproduction sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
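
The hyperparameters above map onto a standard `TrainingArguments` configuration. The sketch below is an assumption-based reconstruction, not the author's script: `output_dir` is a placeholder, and the 200-step evaluation interval is inferred from the results table.

```python
# Sketch of TrainingArguments matching the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-Y_speed_pause2",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=3,
    fp16=True,                 # Native AMP mixed-precision training
    eval_strategy="steps",     # inferred from the 200-step eval interval below
    eval_steps=200,
    logging_steps=200,
)
```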
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Cer     |
|---------------|--------|------|-----------------|---------|
| 35.4246       | 0.1290 | 200  | 5.3966          | 100.0   |
| 5.0597        | 0.2581 | 400  | 4.7304          | 100.0   |
| 4.9257        | 0.3871 | 600  | 4.7995          | 100.0   |
| 4.8261        | 0.5161 | 800  | 4.6200          | 100.0   |
| 4.7176        | 0.6452 | 1000 | 4.5602          | 98.3079 |
| 4.5324        | 0.7742 | 1200 | 4.4388          | 97.9671 |
| 3.7536        | 0.9032 | 1400 | 3.6145          | 73.7427 |
| 2.8505        | 1.0323 | 1600 | 3.0887          | 59.1128 |
| 2.4185        | 1.1613 | 1800 | 2.6231          | 53.6780 |
| 2.1072        | 1.2903 | 2000 | 2.4919          | 52.2679 |
| 1.8961        | 1.4194 | 2200 | 2.3102          | 50.1469 |
| 1.7491        | 1.5484 | 2400 | 2.3485          | 49.7415 |
| 1.5897        | 1.6774 | 2600 | 2.0813          | 45.6580 |
| 1.4517        | 1.8065 | 2800 | 1.9086          | 44.1128 |
| 1.3285        | 1.9355 | 3000 | 1.9665          | 43.8425 |
| 1.2685        | 2.0645 | 3200 | 1.8661          | 43.1845 |
| 1.1435        | 2.1935 | 3400 | 1.8922          | 43.9600 |
| 1.0695        | 2.3226 | 3600 | 1.7025          | 40.4877 |
| 1.0431        | 2.4516 | 3800 | 1.6870          | 39.6357 |
| 0.9992        | 2.5806 | 4000 | 1.7723          | 41.7450 |
| 0.9622        | 2.7097 | 4200 | 1.6614          | 39.0834 |
| 0.9273        | 2.8387 | 4400 | 1.6228          | 38.4724 |
| 0.9176        | 2.9677 | 4600 | 1.6467          | 38.8660 |
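
The card does not state how the Cer column is computed; a common choice in this ecosystem is the `evaluate` library's "cer" metric (requires `jiwer`), scaled to a percentage. The snippet below is a sketch under that assumption, with placeholder strings in place of real decoded outputs and transcripts.

```python
# Sketch of a typical CER computation (assumption: evaluate's "cer" metric).
import evaluate

cer_metric = evaluate.load("cer")

predictions = ["predicted transcript"]  # placeholder decoded model outputs
references = ["reference transcript"]   # placeholder ground-truth transcripts

cer = cer_metric.compute(predictions=predictions, references=references)
print(f"CER: {100 * cer:.4f}")  # reported as a percentage, e.g. 38.8660 for the final checkpoint
```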
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3