Update README.md
README.md CHANGED
@@ -1,3 +1,174 @@
---
license: cc-by-nc-sa-4.0
---
<style>
  table {
    border-collapse: collapse;
    width: 100%;
    margin-bottom: 20px;
  }
  th, td {
    border: 1px solid #ddd;
    padding: 8px;
    text-align: center;
  }
  .best {
    font-weight: bold;
    text-decoration: underline;
  }
  .box {
    text-align: center;
    margin: 20px auto;
    padding: 30px;
    box-shadow: 0px 0px 20px 10px rgba(0, 0, 0, 0.05), 0px 1px 3px 10px rgba(255, 255, 255, 0.05);
    border-radius: 10px;
  }
  .badges {
    display: flex;
    justify-content: center;
    gap: 10px;
    flex-wrap: wrap;
    margin-top: 10px;
  }
  .badge {
    text-decoration: none;
    display: inline-block;
    padding: 4px 8px;
    border-radius: 5px;
    color: #fff;
    font-size: 12px;
    font-weight: bold;
    width: 200px;
  }
  .badge-hf-blue {
    background-color: #6B7280;
  }
  .badge-hf-pink {
    background-color: #7b768a;
  }
  .badge-github {
    background-color: #2c2b2b;
  }
</style>

<div class="box">
  <div style="margin-bottom: 20px;">
    <h2 style="margin-bottom: 4px; margin-top: 0px;">OuteAI</h2>
    <a href="https://www.outeai.com/" target="_blank" style="margin-right: 10px;">🌐 OuteAI.com</a>
    <a href="https://discord.gg/vyBM87kAmf" target="_blank" style="margin-right: 10px;">💬 Join our Discord</a>
    <a href="https://x.com/OuteAI" target="_blank">𝕏 @OuteAI</a>
  </div>
  <div class="badges">
    <a href="https://huggingface.co/OuteAI/OuteTTS-0.3-1B" target="_blank" class="badge badge-hf-blue">HF - OuteTTS 0.3 1B</a>
    <a href="https://huggingface.co/OuteAI/OuteTTS-0.3-1B-GGUF" target="_blank" class="badge badge-hf-blue">HF - OuteTTS 0.3 1B GGUF</a>
    <a href="https://huggingface.co/OuteAI/OuteTTS-0.3-500M" target="_blank" class="badge badge-hf-blue">HF - OuteTTS 0.3 500M</a>
    <a href="https://huggingface.co/OuteAI/OuteTTS-0.3-500M-GGUF" target="_blank" class="badge badge-hf-blue">HF - OuteTTS 0.3 500M GGUF</a>
    <a href="https://github.com/edwko/OuteTTS" target="_blank" class="badge badge-github">GitHub - OuteTTS</a>
  </div>
</div>

# OuteTTS Version 0.3

OuteTTS version 0.3 introduces multiple model variants tailored for diverse use cases. This release significantly enhances the naturalness and coherence of speech synthesis by adding punctuation support, which improves the flow and clarity of generated speech. Additionally, the models were trained on refined and extended datasets, offering broader linguistic coverage. With this version, two new languages, **German (de)** and **French (fr)**, are supported, bringing the total to six: **English (en)**, **Japanese (jp)**, **Korean (ko)**, **Chinese (zh)**, **French (fr)**, and **German (de)**.

Experimental voice control features are also included, though they are at a very early stage of development. Due to limited data, these features may produce inconsistent results and might sometimes be ignored by the model.

Special thanks to **Hugging Face** 🤗 for providing the GPU grant that made training this model possible!

## Available Variants

### OuteTTS-0.3-500M
- **Base**: Qwen2.5-0.5B (Apache-2.0)
- **License**: CC-BY-SA-4.0
- **Training**: 10,000 hours of speech audio (~4 billion tokens)
- **Supported Languages**: en, jp, ko (small dataset), zh, fr, de

### OuteTTS-0.3-1B
- **Base**: OLMo-1B (Apache-2.0)
- **License**: CC-BY-NC-SA-4.0 _(incorporates the Emilia dataset for improved quality)_
- **Training**: 20,000 hours of speech audio (~8 billion tokens)
- **Supported Languages**: en, jp, ko, zh, fr, de

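Both variants are used through the same Python interface described in the Usage section below; going by the Quick Start example there, switching between them only changes the `model_path` repository id. A minimal sketch (the 500M repository id is taken from the badges above):

```python
import outetts

# The variant is selected purely by the Hugging Face repository id;
# the rest of the interface is identical for both models.
model_config = outetts.HFModelConfig_v2(model_path="OuteAI/OuteTTS-0.3-500M")
interface = outetts.InterfaceHF(model_version="0.3", cfg=model_config)
```
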
## Showcase Video

<video width="1280" height="720" controls style="box-shadow: 0px 0px 20px 10px rgba(0, 0, 0, 0.05), 0px 1px 3px 10px rgba(255, 255, 255, 0.05);">
  <source src="https://huggingface.co/OuteAI/OuteTTS-0.3-1B-GGUF/resolve/main/generation_preview.mp4" type="video/mp4">
  Your browser does not support the video tag.
</video>

---

## Installation

Install the OuteTTS package via pip:

```bash
pip install outetts --upgrade
```
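
If you plan to run the GGUF variants linked above through the library's llama.cpp backend, you will likely also need `llama-cpp-python`; this is an assumption about the backend's dependency rather than something stated in this card, so check the GitHub repository for the authoritative instructions:

```bash
# Assumed extra dependency for the GGUF backend; verify against the OuteTTS GitHub README
pip install llama-cpp-python
```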

## Usage

### Quick Start: Full Basic Example

```python
import outetts

# Configure the model
model_config = outetts.HFModelConfig_v2(model_path="OuteAI/OuteTTS-0.3-1B")
# Initialize the interface
interface = outetts.InterfaceHF(model_version="0.3", cfg=model_config)

# You can create a speaker profile for voice cloning, which is compatible across all backends.
# speaker = interface.create_speaker(
#     audio_path="path/to/audio/file.wav",
#     transcript=None,          # Set to None to use Whisper for transcription
#     whisper_model="turbo",    # Optional: specify Whisper model (default: "turbo")
#     whisper_device=None,      # Optional: specify device for Whisper (default: None)
# )
# interface.save_speaker(speaker, "speaker.json")
# speaker = interface.load_speaker("speaker.json")

# Print available default speakers
interface.print_default_speakers()
# Load a default speaker
speaker = interface.load_default_speaker(name="en_male_1")

# Generate speech
output = interface.generate(
    text="Speech synthesis is the artificial production of human speech.",
    temperature=0.1,
    repetition_penalty=1.1,
    max_length=4096,
    speaker=speaker,
)

# Save the generated speech to a file
output.save("output.wav")
```
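
The GGUF quantizations linked above are run through the same interface via a llama.cpp backend. The sketch below assumes the GGUF configuration mirrors the HF one shown above, using hypothetical `GGUFModelConfig_v2` / `InterfaceGGUF` names and a placeholder local path to a downloaded `.gguf` file; check the GitHub repository for the exact API.

```python
import outetts

# Assumed GGUF counterparts of HFModelConfig_v2 / InterfaceHF from the example above;
# the .gguf file must be downloaded locally first (the path below is a placeholder).
model_config = outetts.GGUFModelConfig_v2(
    model_path="local/path/to/model.gguf",
    n_gpu_layers=0,  # assumed llama.cpp-style option: >0 offloads layers to the GPU
)
interface = outetts.InterfaceGGUF(model_version="0.3", cfg=model_config)

# Generation is unchanged from the HF example above.
speaker = interface.load_default_speaker(name="en_male_1")
output = interface.generate(
    text="Speech synthesis is the artificial production of human speech.",
    temperature=0.1,
    repetition_penalty=1.1,
    max_length=4096,
    speaker=speaker,
)
output.save("output.wav")
```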

> [!IMPORTANT]
> ## For additional usage examples and recommendations, visit the [GitHub repository](https://github.com/edwko/OuteTTS?tab=readme-ov-file#usage).

---

## Dataset Attribution

The OuteTTS-0.3-1B training data incorporates various publicly available speech datasets. Below is a summary of the key data sources:

- **Emilia Dataset**: CC-BY-NC 4.0
- **Mozilla Common Voice**: CC-0
- **MLCommons People's Speech Dataset (selected portions)**: CC-BY 4.0
- **Noisy Speech Database (Edinburgh DataShare)**: CC-BY 4.0
- **Multilingual LibriSpeech (MLS)**: CC-BY 4.0
- **CSTR VCTK Corpus (Edinburgh DataShare)**: CC-BY 4.0
- **THCHS-30 (Open Speech and Language Resources)**: Apache-2.0
- **Zeroth-Korean (Open Speech and Language Resources)**: CC-BY 4.0
- **Aishell (Open Speech and Language Resources)**: Apache-2.0
- **Other permissively licensed datasets**

## Credits & References

Special acknowledgment to the open-source community and researchers for their valuable contributions.

- [WavTokenizer GitHub](https://github.com/jishengpeng/WavTokenizer) | [WavTokenizer HF](https://huggingface.co/novateur/WavTokenizer-large-speech-75token)
- [CTC Forced Alignment](https://pytorch.org/audio/stable/tutorials/ctc_forced_alignment_api_tutorial.html)
- [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B)
- [OLMo-1B](https://huggingface.co/allenai/OLMo-1B-hf)