---
library_name: peft
language:
- it
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
datasets:
- ASR_BB_and_EC
metrics:
- wer
model-index:
- name: Whisper Large v3
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: ASR_BB_and_EC
      type: ASR_BB_and_EC
      config: default
      split: train
      args: default
    metrics:
    - type: wer
      value: 145.18950437317784
      name: Wer
---
# Whisper Large v3
This model is a PEFT-adapter fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) for Italian automatic speech recognition on the ASR_BB_and_EC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Wer: 145.1895
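
The WER values on this card are word error rates scaled to percent. Because WER counts substitutions, insertions, and deletions against the reference length, values above 100 are possible. Below is a minimal sketch of computing percent WER with the `evaluate` library; the transcripts are illustrative assumptions, not dataset samples, and whether the Trainer's `compute_metrics` used `evaluate` exactly this way is also an assumption.

```python
# Sketch: computing percent WER with the `evaluate` library.
# The illustrative transcripts below are assumptions, not dataset samples.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["il gatto dorme sul sul letto"]  # hypothesis with one insertion
references = ["il gatto dorme sul letto"]
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")  # insertions can push WER past 100 on short references
```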
## Model description
More information needed
## Intended uses & limitations
More information needed
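
Since this repository holds a PEFT adapter on top of `openai/whisper-large-v3`, a minimal inference sketch follows. The adapter id is a placeholder for this repository's Hub id, and `audio` stands in for a waveform you supply; the `language="it"` setting comes from this card's frontmatter.

```python
# Minimal sketch: load the PEFT adapter onto the base Whisper model for inference.
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

BASE_MODEL = "openai/whisper-large-v3"
ADAPTER_ID = "<this-repo-id>"  # placeholder for this adapter's Hub id

processor = WhisperProcessor.from_pretrained(BASE_MODEL)
base_model = WhisperForConditionalGeneration.from_pretrained(BASE_MODEL)
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()

# `audio` is assumed to be a 1-D float waveform resampled to 16 kHz,
# which is the sampling rate Whisper expects.
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(
        input_features=inputs.input_features,
        language="it",       # the card's frontmatter lists Italian
        task="transcribe",
        max_new_tokens=128,
    )
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```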
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
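
A minimal sketch mapping these settings onto `Seq2SeqTrainingArguments`. The output directory is a placeholder, `fp16=True` is the usual way "Native AMP" is enabled on CUDA, and the Adam betas/epsilon above are the optimizer defaults, so they need no explicit arguments:

```python
# Sketch of Seq2SeqTrainingArguments mirroring the hyperparameters above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-peft",  # placeholder (assumption)
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size of 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```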
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 3.4667 | 0.1142 | 50 | 2.3898 | 126.5306 |
| 1.3136 | 0.2283 | 100 | 0.8863 | 65.3061 |
| 0.6296 | 0.3425 | 150 | 0.7403 | 55.9767 |
| 0.551 | 0.4566 | 200 | 0.6749 | 61.2245 |
| 0.4789 | 0.5708 | 250 | 0.6446 | 67.6385 |
| 0.4246 | 0.6849 | 300 | 0.5675 | 77.5510 |
| 0.3786 | 0.7991 | 350 | 0.5163 | 45.4810 |
| 0.3179 | 0.9132 | 400 | 0.4786 | 84.8397 |
| 0.3118 | 1.0274 | 450 | 0.4678 | 105.5394 |
| 0.2689 | 1.1416 | 500 | 0.4322 | 125.3644 |
| 0.2473 | 1.2557 | 550 | 0.3924 | 48.1050 |
| 0.2319 | 1.3699 | 600 | 0.3980 | 208.7464 |
| 0.2098 | 1.4840 | 650 | 0.3545 | 52.1866 |
| 0.2215 | 1.5982 | 700 | 0.3489 | 48.1050 |
| 0.1981 | 1.7123 | 750 | 0.3378 | 76.3848 |
| 0.1803 | 1.8265 | 800 | 0.3295 | 43.7318 |
| 0.1693 | 1.9406 | 850 | 0.3095 | 76.9679 |
| 0.1406 | 2.0548 | 900 | 0.2993 | 43.4402 |
| 0.1252 | 2.1689 | 950 | 0.2810 | 37.3178 |
| 0.111 | 2.2831 | 1000 | 0.2854 | 164.1399 |
| 0.1166 | 2.3973 | 1050 | 0.2752 | 124.4898 |
| 0.1183 | 2.5114 | 1100 | 0.2493 | 90.3790 |
| 0.1014 | 2.6256 | 1150 | 0.2441 | 210.2041 |
| 0.1076 | 2.7397 | 1200 | 0.2340 | 152.1866 |
| 0.0891 | 2.8539 | 1250 | 0.2312 | 214.5773 |
| 0.0841 | 2.9680 | 1300 | 0.2251 | 145.1895 |
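
Note that validation loss decreases steadily over training while WER fluctuates widely between checkpoints (from about 37 at step 950 to about 215 at step 1250); the final checkpoint's WER of 145.19 should therefore be read alongside the intermediate values above.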
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.2.0
- Datasets 3.1.0
- Tokenizers 0.20.3