---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: xls_1b_decoding_fr_decoding_test
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/danakal/xls_300m_french_data/runs/cf8f5rsv)
# xls_1b_decoding_fr_decoding_test

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
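
As a hedged usage sketch (assuming the checkpoint carries a CTC head for French speech recognition, and using a hypothetical repository id in place of the real one), the model could be loaded for inference roughly as follows:

```python
# Hedged sketch: assumes CTC fine-tuning for ASR; the repo id below is a placeholder.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "your-namespace/xls_1b_decoding_fr_decoding_test"  # hypothetical repo id

processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Replace with real 16 kHz mono audio; a 1-second silent placeholder keeps the sketch runnable.
speech = np.zeros(16_000, dtype=np.float32)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```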

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 40
- num_epochs: 30
- mixed_precision_training: Native AMP
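
The list above corresponds roughly to the `transformers.TrainingArguments` configuration below; this is a hedged reconstruction, not the exact training script, and the output path is an illustrative assumption:

```python
# Hedged reconstruction of the hyperparameter list above; the output path is assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xls_1b_decoding_fr_decoding_test",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 4 * 2 = 8
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=40,
    num_train_epochs=30,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```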

### Framework versions

- Transformers 4.43.0.dev0
- Pytorch 2.3.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1