wav2vec2transformerEMR4

This model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics):

  • Loss: 1.3565
  • Accuracy: 0.4562
  • Precision: 0.5012
  • Recall: 0.4562
  • F1: 0.4643
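
The repo id for this checkpoint appears below as sagir567/wav2vec2transformerEMR4. Here is a minimal inference sketch, assuming the checkpoint carries an audio-classification head (the accuracy/precision/recall/F1 metrics above point to multi-class classification); the audio file name is a hypothetical placeholder:

```python
# Minimal inference sketch; the "audio-classification" task and the local
# file name are assumptions, not confirmed by this card.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="sagir567/wav2vec2transformerEMR4",
)

# wav2vec2-base expects 16 kHz mono audio; the pipeline resamples audio
# files it loads from a path.
print(classifier("speech_sample.wav"))  # hypothetical file -> [{'label': ..., 'score': ...}, ...]
```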

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a training-setup sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 10
  • mixed_precision_training: Native AMP
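
A hedged reconstruction of this setup with the transformers Trainer is sketched below. The dataset objects, label count, and output directory are placeholders not taken from the card; fp16=True stands in for "Native AMP", and adamw_torch's default betas and epsilon match the values listed above.

```python
from transformers import (
    AutoModelForAudioClassification,
    Trainer,
    TrainingArguments,
)

num_labels = 8  # placeholder: the label set is not documented in this card

model = AutoModelForAudioClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=num_labels
)

args = TrainingArguments(
    output_dir="wav2vec2transformerEMR4",  # placeholder name
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_steps=500,
    seed=42,
    fp16=True,                      # "Native AMP" mixed precision
    optim="adamw_torch",            # defaults: betas=(0.9, 0.999), epsilon=1e-08
    eval_strategy="steps",
    eval_steps=250,                 # matches the evaluation cadence in the table below
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # placeholder: the training data is not described
    eval_dataset=eval_ds,    # placeholder: the evaluation data is not described
)
trainer.train()
```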

Training results

Training Loss   Epoch    Step   Validation Loss   Accuracy   Precision   Recall   F1
1.9245          0.4108    250   1.9090            0.1981     0.1953      0.1981   0.1794
1.8806          0.8217    500   1.8641            0.2289     0.2390      0.2289   0.1796
1.8357          1.2325    750   1.8233            0.2417     0.2369      0.2417   0.2167
1.7745          1.6434   1000   1.7732            0.2713     0.2984      0.2713   0.2736
1.7518          2.0542   1250   1.7497            0.2741     0.2975      0.2741   0.2529
1.7009          2.4651   1500   1.6719            0.3321     0.3849      0.3321   0.3177
1.6498          2.8759   1750   1.6159            0.3646     0.4064      0.3646   0.3554
1.5970          3.2868   2000   1.5689            0.4065     0.4511      0.4065   0.3915
1.5607          3.6976   2250   1.5324            0.4040     0.4374      0.4040   0.3919
1.5268          4.1085   2500   1.4833            0.4287     0.4760      0.4287   0.4339
1.4696          4.5193   2750   1.4981            0.4254     0.4939      0.4254   0.4137
1.4550          4.9302   3000   1.4694            0.4332     0.4876      0.4332   0.4242
1.4473          5.3410   3250   1.4505            0.4254     0.4688      0.4254   0.4128
1.4134          5.7518   3500   1.4207            0.4406     0.5431      0.4406   0.4514
1.3963          6.1627   3750   1.4242            0.4427     0.4593      0.4427   0.4440
1.3909          6.5735   4000   1.3723            0.4632     0.5042      0.4632   0.4707
1.3539          6.9844   4250   1.3810            0.4513     0.4797      0.4513   0.4531
1.3561          7.3952   4500   1.3852            0.4595     0.4977      0.4595   0.4609
1.3730          7.8061   4750   1.3906            0.4505     0.5019      0.4505   0.4540
1.3812          8.2169   5000   1.3636            0.4731     0.4975      0.4731   0.4715
1.3208          8.6278   5250   1.3825            0.4538     0.5153      0.4538   0.4574
1.3025          9.0386   5500   1.3696            0.4649     0.4963      0.4649   0.4683
1.3109          9.4495   5750   1.3573            0.4681     0.5135      0.4681   0.4746
1.3264          9.8603   6000   1.3491            0.4686     0.5117      0.4686   0.4741
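
In every row, recall equals accuracy, which is exactly what weighted-average multi-class recall reduces to when all evaluation labels are scored. Below is a sketch of a matching compute_metrics function, assuming scikit-learn with weighted averaging (the actual metric code is not in the card):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes at evaluation time.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```

Passed to the Trainer as compute_metrics=compute_metrics, this would reproduce the five metric columns reported above.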

Framework versions

  • Transformers 4.46.2
  • Pytorch 2.5.1+cu121
  • Tokenizers 0.20.3