Update README.md
README.md
CHANGED
@@ -6,47 +6,30 @@ tags:
 license: "apache-2.0"
 ---
 
-# LeBenchmark: wav2vec2 base model trained on 1K hours of French speech
+# LeBenchmark: wav2vec2 base model trained on 1K hours of French *female-only* speech
 
+LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcast speech.
+
+For more information about our gender study of SSL models, please refer to our paper: [A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems]()
 
 ## Model and data descriptions
 
-We release four different models that can be found under our HuggingFace organization. Two different wav2vec2 architectures, *Base* and *Large*, are coupled with our small (1K), medium (3K), and large (7K) corpora. A larger one should come later. In short:
-
-- [wav2vec2-FR-
-- [wav2vec2-FR-
-- [wav2vec2-FR-
-- [wav2vec2-FR-
-- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
-- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
-- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
+We release four gender-specific models trained on 1K hours of speech:
+
+- [wav2vec2-FR-1K-Male-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-large/)
+- [wav2vec2-FR-1K-Male-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-base/)
+- [wav2vec2-FR-1K-Female-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-large/)
+- [wav2vec2-FR-1K-Female-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-base/)
 
 ## Intended uses & limitations
 
 Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
 
-## Fine-tune with Fairseq for ASR with CTC
-
-As our wav2vec2 models were trained with Fairseq, they can be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
-
-Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
-
-## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
-
-Pretrained wav2vec models recently gained in popularity. At the same time, the [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
-
-While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
-
-1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification; Source Separation ...
-2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
+## Referencing our gender-specific models
+
+<soon to be added>
 
 ## Referencing LeBenchmark
@@ -57,5 +40,4 @@ While it currently is in beta, SpeechBrain offers two different ways of nicely i
   journal={ArXiv},
   year={2021},
   volume={abs/2104.11462}
-}
-```
+}
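As a quick sanity check of the models listed in the updated card, here is a minimal frozen-feature-extraction sketch with HuggingFace `transformers`. It assumes the `LeBenchmark/wav2vec-FR-1K-Female-base` repository from the list above loads as a standard `Wav2Vec2Model`; swap in any of the other listed checkpoints as needed.

```python
import torch
from transformers import Wav2Vec2Model

# Checkpoint name taken from the model list above (assumed to load as a plain wav2vec2 encoder);
# replace with the Male/Female, base/large variant you need.
model = Wav2Vec2Model.from_pretrained("LeBenchmark/wav2vec-FR-1K-Female-base")
model.eval()

# wav2vec2 expects a raw 16 kHz mono waveform; a 1-second silent signal stands in for real French speech.
waveform = torch.zeros(1, 16000)

with torch.no_grad():
    hidden_states = model(waveform).last_hidden_state  # (batch, frames, hidden_size)

print(hidden_states.shape)
```

Depending on how the checkpoint was exported, input normalization (zero mean, unit variance) via a bundled feature-extractor config may also be expected; treat this as a starting point rather than the authors' reference pipeline.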