# wav2vec-large-zh
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 14.5731
- Wer: 1.0
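A WER (word error rate) of 1.0 means essentially no reference words were recovered: every word in the transcript was substituted, deleted, or missed. A minimal sketch of how WER is typically computed (word-level Levenshtein distance normalized by reference length; this is an illustration, not the evaluation code used for this card):

```python
# Word error rate: edit distance between the reference and hypothesis
# word sequences, divided by the number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("the cat sat", "the cat sat"))   # 0.0 -- perfect transcript
print(wer("the cat sat", "dog ran home"))  # 1.0 -- nothing recovered
```

Note that WER can exceed 1.0 when the hypothesis inserts more words than the reference contains.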
## Model description
More information needed
## Intended uses & limitations
More information needed
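The card does not include a usage example. A minimal sketch of transcribing an audio file with this checkpoint through the transformers ASR pipeline (the repo id is taken from the model page; the audio filename is a placeholder, and given the evaluation WER of 1.0 the transcriptions are unlikely to be usable):

```python
# Transcription sketch; requires transformers plus an audio backend
# (e.g. soundfile). The checkpoint is downloaded on first use.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="dongim04/wav2vec-large-zh",
)

# The pipeline resamples the input to the 16 kHz rate wav2vec2 expects.
result = asr("speech_sample.wav")  # placeholder path to a local file
print(result["text"])
```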
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
- mixed_precision_training: Native AMP
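The hyperparameters above map onto a transformers `TrainingArguments` configuration roughly as follows. This is a sketch, not the author's actual training script; `output_dir` is a placeholder and the model/dataset wiring is omitted:

```python
from transformers import TrainingArguments

# Reconstruction of the reported configuration. The Adam betas (0.9, 0.999)
# and epsilon (1e-8) listed above are the optimizer defaults.
training_args = TrainingArguments(
    output_dir="wav2vec-large-zh",    # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=2,    # effective train batch size: 4 * 2 = 8
    lr_scheduler_type="linear",
    max_steps=4000,
    fp16=True,                        # "Native AMP" mixed precision
)
```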
### Training results
Training Loss | Epoch | Step | Validation Loss | Wer |
---|---|---|---|---|
121.3961 | 0.1039 | 100 | 124.9288 | 1.0 |
123.4081 | 0.2078 | 200 | 123.6992 | 1.0 |
107.9594 | 0.3117 | 300 | 93.1991 | 1.0 |
88.687 | 0.4156 | 400 | 74.7901 | 1.0 |
76.5985 | 0.5195 | 500 | 65.0490 | 1.0 |
67.3284 | 0.6234 | 600 | 59.1068 | 1.0 |
60.559 | 0.7273 | 700 | 54.5696 | 1.0 |
57.6745 | 0.8312 | 800 | 50.8430 | 1.0 |
52.8858 | 0.9351 | 900 | 47.6484 | 1.0 |
47.363 | 1.0390 | 1000 | 44.8001 | 1.0 |
47.1688 | 1.1429 | 1100 | 42.1867 | 1.0 |
44.9003 | 1.2468 | 1200 | 39.8359 | 1.0 |
41.8644 | 1.3506 | 1300 | 37.6657 | 1.0 |
39.7981 | 1.4545 | 1400 | 35.5738 | 1.0 |
36.9159 | 1.5584 | 1500 | 33.6546 | 1.0 |
34.9118 | 1.6623 | 1600 | 31.8388 | 1.0 |
33.5765 | 1.7662 | 1700 | 30.1401 | 1.0 |
31.0713 | 1.8701 | 1800 | 28.5309 | 1.0 |
30.3748 | 1.9740 | 1900 | 27.0503 | 1.0 |
29.2488 | 2.0779 | 2000 | 25.6837 | 1.0 |
26.2383 | 2.1818 | 2100 | 24.4220 | 1.0 |
25.098 | 2.2857 | 2200 | 23.2505 | 1.0 |
25.1944 | 2.3896 | 2300 | 22.1770 | 1.0 |
24.8906 | 2.4935 | 2400 | 21.2028 | 1.0 |
22.4045 | 2.5974 | 2500 | 20.3080 | 1.0 |
20.9461 | 2.7013 | 2600 | 19.4800 | 1.0 |
20.5321 | 2.8052 | 2700 | 18.7341 | 1.0 |
19.8295 | 2.9091 | 2800 | 18.0642 | 1.0 |
19.3534 | 3.0130 | 2900 | 17.4707 | 1.0 |
18.5557 | 3.1169 | 3000 | 16.9370 | 1.0 |
18.1417 | 3.2208 | 3100 | 16.4655 | 1.0 |
17.497 | 3.3247 | 3200 | 16.0514 | 1.0 |
16.9942 | 3.4286 | 3300 | 15.6962 | 1.0 |
17.0943 | 3.5325 | 3400 | 15.3935 | 1.0 |
16.4386 | 3.6364 | 3500 | 15.1432 | 1.0 |
15.6441 | 3.7403 | 3600 | 14.9392 | 1.0 |
16.0257 | 3.8442 | 3700 | 14.7813 | 1.0 |
15.725 | 3.9481 | 3800 | 14.6680 | 1.0 |
16.0983 | 4.0519 | 3900 | 14.5990 | 1.0 |
15.3737 | 4.1558 | 4000 | 14.5731 | 1.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1