# Whisper Small_LoRA Ha - Eldad Akhaumere
This model is a fine-tuned version of openai/whisper-small on the Common Voice 16.0 dataset. It achieves the following results on the evaluation set:
- Loss: 2.0708
## Model description
More information needed
## Intended uses & limitations
More information needed
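The card does not ship an inference example, so the following is a minimal sketch only. It assumes the repository hosts a PEFT LoRA adapter that loads on top of `openai/whisper-small`; the variable `audio` is a hypothetical 16 kHz mono waveform (e.g. loaded with `librosa`).

```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, "eldad-akhaumere/whisper-LoRA-small-ha")
model.eval()

# `audio` is a hypothetical 1-D float array of 16 kHz speech.
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    generated = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```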
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- num_epochs: 20.0
- mixed_precision_training: Native AMP
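For reference, here is a sketch (not the author's training script) of how these hyperparameters map onto `Seq2SeqTrainingArguments` in transformers; the `output_dir` is hypothetical, and "Native AMP" is assumed to correspond to `fp16=True`.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-lora-ha",  # hypothetical path
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    num_train_epochs=20.0,
    fp16=True,  # assumed: "Native AMP" mixed-precision training
)
```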
### Training results
| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 3.0616        | 1.5924  | 250  | 2.9990          |
| 2.1053        | 3.1847  | 500  | 2.1317          |
| 2.0123        | 4.7771  | 750  | 2.0903          |
| 1.9218        | 6.3694  | 1000 | 2.0661          |
| 1.894         | 7.9618  | 1250 | 2.0513          |
| 1.7961        | 9.5541  | 1500 | 2.0348          |
| 1.764         | 11.1465 | 1750 | 2.0300          |
| 1.748         | 12.7389 | 2000 | 2.0210          |
| 1.6554        | 14.3312 | 2250 | 2.0353          |
| 1.6556        | 15.9236 | 2500 | 2.0369          |
| 1.6059        | 17.5159 | 2750 | 2.0441          |
| 1.5215        | 19.1083 | 3000 | 2.0708          |
### Framework versions
- PEFT 0.14.1.dev0
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0