# my_awesome_asr_mind_model
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-russian](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-russian) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 8.6914
- Wer: 0.9317
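
The card does not yet document usage, so the snippet below is a minimal inference sketch. The repo id `my_awesome_asr_mind_model` is a placeholder for wherever this checkpoint is actually published, and the audio is assumed to be resampled to 16 kHz mono, as XLSR wav2vec2 models expect.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "my_awesome_asr_mind_model"  # placeholder Hub path, not confirmed by the card
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# XLSR wav2vec2 checkpoints are trained on 16 kHz mono audio
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the argmax token at each frame, then collapse
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```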
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
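
For readers reproducing the run, here is a sketch of how these values might map onto `transformers.TrainingArguments` with the standard `Trainer`. The `output_dir` is a placeholder, and the Adam betas and epsilon listed above are the `TrainingArguments` defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_asr_mind_model",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed-precision training
)
```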
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 16.4326       | 0.3436 | 100  | 26.2073         | 1.0    |
| 16.1132       | 0.6873 | 200  | 13.6308         | 1.0    |
| 3.3225        | 1.0309 | 300  | 10.8892         | 1.0    |
| 3.6426        | 1.3746 | 400  | 6.1344          | 0.9985 |
| 2.6996        | 1.7182 | 500  | 4.0074          | 0.9930 |
| 2.2841        | 2.0619 | 600  | 4.9802          | 0.9742 |
| 2.2091        | 2.4055 | 700  | 7.2261          | 0.9517 |
| 2.089         | 2.7491 | 800  | 8.6914          | 0.9317 |
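
The Wer column is the standard word error rate (0.9317 means roughly 93% of reference words are transcribed incorrectly). The exact evaluation code is not documented here, but a metric such as the `evaluate` library's `wer` would produce comparable numbers; a minimal sketch with hypothetical transcripts:

```python
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["privet mir"]     # hypothetical decoded transcripts
references = ["privet ves mir"]  # hypothetical ground-truth texts

# WER = (substitutions + insertions + deletions) / reference word count
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```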
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1