patrickvonplaten
committed on
Commit · 784e4db
Parent(s): 63b525d
Update README.md
README.md CHANGED

@@ -43,17 +43,13 @@ model-index:
 
 # Wav2Vec2-Conformer-Large-960h with Rotary Position Embeddings
 
-
-
-
-
-
-
-
-
-**Abstract**
-
-...
+Wav2Vec2 Conformer with rotary position embeddings, pretrained and **fine-tuned on 960 hours of Librispeech** on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
+
+[fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
+
+Authors: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino
+
+The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171).
 
 The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
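The updated card stresses that input audio must be sampled at 16kHz. A minimal sketch of resampling an arbitrary-rate mono waveform to 16kHz before inference, assuming only NumPy; the commented-out `transformers` calls and the model id shown there are assumptions for illustration, not confirmed by this commit:

```python
import numpy as np

def resample_to_16k(audio: np.ndarray, orig_sr: int, target_sr: int = 16_000) -> np.ndarray:
    """Linearly resample a mono waveform to the 16 kHz rate the model expects."""
    if orig_sr == target_sr:
        return audio
    duration = len(audio) / orig_sr
    n_target = int(round(duration * target_sr))
    old_t = np.linspace(0.0, duration, num=len(audio), endpoint=False)
    new_t = np.linspace(0.0, duration, num=n_target, endpoint=False)
    # np.interp(x, xp, fp): evaluate the waveform at the new time grid.
    return np.interp(new_t, old_t, audio)

# One second of a 440 Hz tone at 44.1 kHz becomes 16 000 samples at 16 kHz.
wave_44k = np.sin(2 * np.pi * 440 * np.arange(44_100) / 44_100)
wave_16k = resample_to_16k(wave_44k, orig_sr=44_100)

# Hypothetical downstream usage (model id and exact classes are assumptions):
# from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC
# processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
# model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
# inputs = processor(wave_16k, sampling_rate=16_000, return_tensors="pt")
# transcription = processor.batch_decode(model(**inputs).logits.argmax(dim=-1))
```

Linear interpolation is only a sketch; for production-quality resampling a band-limited resampler (e.g. `librosa.resample` or `torchaudio`) avoids the aliasing this simple approach can introduce.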