Update README.md
README.md (CHANGED)
@@ -4,9 +4,57 @@ task: audio-to-audio

tags:
- fairseq
- audio
- speech-to-speech-translation
language: es-en
datasets:
- mtedx
- covost2
- europarl_st
- voxpopuli
---
# xm_transformer_600m-es_en-multi_domain

[W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)):
- Spanish-English
- Trained on mTEDx, CoVoST 2, EuroParl-ST, VoxPopuli, Multilingual LibriSpeech, Common Voice v7 and CCMatrix
- Speech synthesis with [facebook/fastspeech2-en-ljspeech](https://huggingface.co/facebook/fastspeech2-en-ljspeech)

## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.speech_to_text.hub_interface import S2THubInterface
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
import torchaudio


# load the Spanish-English speech-to-text translation model
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
    "facebook/xm_transformer_600m-es_en-multi_domain",
    arg_overrides={"config_yaml": "config.yaml"},
)
model = models[0]
generator = task.build_generator([model], cfg)


# requires 16 kHz mono channel audio
audio, _ = torchaudio.load("/path/to/an/audio/file")

sample = S2THubInterface.get_model_input(task, audio)
text = S2THubInterface.get_prediction(task, model, generator, sample)

# speech synthesis
tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub(
    "facebook/fastspeech2-en-ljspeech",
    arg_overrides={"vocoder": "griffin_lim", "fp16": False},
)
tts_model = tts_models[0]
TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg)
tts_generator = tts_task.build_generator([tts_model], tts_cfg)

tts_sample = TTSHubInterface.get_model_input(tts_task, text)
wav, sr = TTSHubInterface.get_prediction(
    tts_task, tts_model, tts_generator, tts_sample
)

# play the synthesized English speech in a notebook
ipd.Audio(wav, rate=sr)
```
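The S2T model consumes 16 kHz, single-channel audio, so recordings at other sample rates or with multiple channels should be converted before calling `S2THubInterface.get_model_input`. Below is a minimal preprocessing sketch using torchaudio; the file path and the `TARGET_SR` constant are illustrative and not part of the model card.

```python
import torchaudio
import torchaudio.functional as F

TARGET_SR = 16_000  # sample rate expected by the S2T model

# load an arbitrary recording (illustrative path)
audio, sr = torchaudio.load("/path/to/an/audio/file")

# downmix to mono if the recording has more than one channel
if audio.shape[0] > 1:
    audio = audio.mean(dim=0, keepdim=True)

# resample to 16 kHz if needed
if sr != TARGET_SR:
    audio = F.resample(audio, orig_freq=sr, new_freq=TARGET_SR)

# `audio` can now be passed to S2THubInterface.get_model_input(task, audio)
```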
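To keep the synthesized translation rather than only playing it in a notebook, the waveform can be written to disk. This is a small sketch assuming `wav` is the 1-D waveform tensor and `sr` the sample rate returned by `TTSHubInterface.get_prediction` above; the output filename is illustrative.

```python
import torchaudio

# torchaudio.save expects a 2-D (channels x frames) CPU tensor
torchaudio.save("translated_speech.wav", wav.unsqueeze(0).cpu(), sr)
```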