---
license: cc-by-sa-4.0
pipeline_tag: text-to-speech
---
<style>
table {
    border-collapse: collapse;
    width: 100%;
    margin-bottom: 20px;
}
th, td {
    border: 1px solid #ddd;
    padding: 8px;
    text-align: center;
}
.best {
    font-weight: bold;
    text-decoration: underline;
}
.box {
  text-align: center;
  margin: 20px auto;
  padding: 30px;
  box-shadow: 0px 0px 20px 10px rgba(0, 0, 0, 0.05), 0px 1px 3px 10px rgba(255, 255, 255, 0.05);
  border-radius: 10px;
}
.badges {
    display: flex;
    justify-content: center;
    gap: 10px;
    flex-wrap: wrap;
    margin-top: 10px;
}
.badge {
    text-decoration: none;
    display: inline-block;
    padding: 4px 8px;
    border-radius: 5px;
    color: #fff;
    font-size: 12px;
    font-weight: bold;
    width: 200px;
}
.badge-dark {
    background-color: #000000;
}
.badge-model {
    background-color: #6885ab;
}
.badge-space {
    background-color: #7468ab;
}
</style>

<div class="box">
  <div style="margin-bottom: 20px;">
    <h2 style="margin-bottom: 4px; margin-top: 0px;">Oute <em>AI</em></h2>
    <a href="https://www.outeai.com/" target="_blank" style="margin-right: 10px; font-weight: bold;">🌐 OuteAI.com</a> 
    <a href="https://discord.gg/vyBM87kAmf" target="_blank" style="margin-right: 10px; font-weight: bold;">💬 Join our Discord</a>
    <a href="https://x.com/OuteAI" target="_blank" style="font-weight: bold;">𝕏 @OuteAI</a>
  </div>
  <div class="badges">
    <a href="https://huggingface.co/OuteAI/OuteTTS-0.3-1B" target="_blank" class="badge badge-model">OuteTTS 0.3 1B</a>
    <a href="https://huggingface.co/OuteAI/OuteTTS-0.3-1B-GGUF" target="_blank" class="badge badge-model">OuteTTS 0.3 1B GGUF</a>
    <a href="https://huggingface.co/OuteAI/OuteTTS-0.3-500M" target="_blank" class="badge badge-model">OuteTTS 0.3 500M</a>
    <a href="https://huggingface.co/OuteAI/OuteTTS-0.3-500M-GGUF" target="_blank" class="badge badge-model">OuteTTS 0.3 500M GGUF</a>
    <a href="https://huggingface.co/spaces/OuteAI/OuteTTS-0.3-1B-Demo" target="_blank" class="badge badge-space">OuteTTS 0.3 Demo Space</a>
    <a href="https://github.com/edwko/OuteTTS" target="_blank" class="badge badge-dark">GitHub - OuteTTS</a>
  </div>
</div>

# OuteTTS Version 0.3

OuteTTS version 0.3 introduces multiple model variants tailored for diverse use cases. 
This release significantly enhances the naturalness and coherence of speech synthesis by adding punctuation support, improving the flow and clarity of generated speech.
The following punctuation marks are supported: `'.', '!', '?', ',', '"', '„', '¡', '¿', '…', '...', '。', '!', '?', ',', '؟'`. These are converted into special tokens, for instance, `.` is transformed into `<|period|>`.
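The conversion described above can be sketched as a simple substitution pass. Only the `.` → `<|period|>` pair is documented here; the other token names in this mapping are illustrative assumptions, and the real tokenizer handles multi-character marks like `...` separately.

```python
# Hypothetical sketch of the punctuation-to-special-token conversion.
# Only "." -> "<|period|>" is documented; the other token names are assumed.
PUNCT_TOKENS = {
    ".": "<|period|>",
    ",": "<|comma|>",            # assumed name
    "!": "<|exclamation_mark|>",  # assumed name
    "?": "<|question_mark|>",     # assumed name
}

def tokenize_punctuation(text: str) -> str:
    """Replace each supported punctuation mark with its special token."""
    for mark, token in PUNCT_TOKENS.items():
        text = text.replace(mark, token)
    return text

print(tokenize_punctuation("Hello, world."))
# Hello<|comma|> world<|period|>
```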
Additionally, the models were trained on refined and extended datasets, offering broader linguistic coverage. With this version, two new languages, **German (de)** and **French (fr)**, are supported, bringing the total to six languages: **English (en)**, **Japanese (jp)**, **Korean (ko)**, **Chinese (zh)**, **French (fr)**, and **German (de)**.

OuteTTS is a solution designed to extend any existing large language model (LLM) with text-to-speech (TTS) and speech-to-speech capabilities. By preserving the original architecture, it ensures high compatibility with a broad range of libraries and tools, making it easy to integrate speech functionalities without compromising on flexibility.

Experimental voice control features are also included, though they are at a very early stage of development. Due to limited data, these features may produce inconsistent results and might sometimes be ignored by the model.

Special thanks to **Hugging Face** 🤗 for providing the GPU grant that made training this model possible!

## Available Variants

### OuteTTS-0.3-500M
- **Base**: Qwen2.5-0.5B (Apache-2.0)
- **TTS Model License**: CC-BY-SA-4.0
- **Training**: 10,000 hours of speech audio (~4 billion tokens)
- **Supported Languages**: en, jp, ko (small dataset), zh, fr, de

### OuteTTS-0.3-1B
- **Base**: OLMo-1B (Apache-2.0)
- **TTS Model License**: CC-BY-NC-SA-4.0 _(Incorporates the Emilia dataset for improved quality)_
- **Training**: 20,000 hours of speech audio (~8 billion tokens)
- **Supported Languages**: en, jp, ko, zh, fr, de

## Showcase Video
<video width="1280" height="720" controls style="box-shadow: 0px 0px 20px 10px rgba(0, 0, 0, 0.05), 0px 1px 3px 10px rgba(255, 255, 255, 0.05);">
  <source src="https://huggingface.co/OuteAI/OuteTTS-0.3-1B-GGUF/resolve/main/generation_preview.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>

---

## Installation

Install the OuteTTS package via pip:

```bash
pip install outetts --upgrade
```

## Usage

### Quick Start: Full Basic Example

```python
import outetts

# Configure the model
model_config = outetts.HFModelConfig_v2(
    model_path="OuteAI/OuteTTS-0.3-1B",
    tokenizer_path="OuteAI/OuteTTS-0.3-1B"
)
# Initialize the interface
interface = outetts.InterfaceHF(model_version="0.3", cfg=model_config)

# You can create a speaker profile for voice cloning, which is compatible across all backends.
# speaker = interface.create_speaker(audio_path="path/to/audio/file.wav")
# interface.save_speaker(speaker, "speaker.json")
# speaker = interface.load_speaker("speaker.json")

# Print available default speakers
interface.print_default_speakers()
# Load a default speaker
speaker = interface.load_default_speaker(name="en_male_1")

# Generate speech
gen_cfg = outetts.GenerationConfig(
    text="Speech synthesis is the artificial production of human speech.",
    temperature=0.1,
    repetition_penalty=1.1,
    max_length=4096,
    speaker=speaker,
)
output = interface.generate(config=gen_cfg)

# Save the generated speech to a file
output.save("output.wav")
```
### Additional Usage Examples

> [!IMPORTANT]
> For additional usage examples and recommendations, visit the: [GitHub repository](https://github.com/edwko/OuteTTS?tab=readme-ov-file#usage).

### Generation Performance

> [!IMPORTANT]
> The model performs best with 30-second generation batches. This window shrinks by the length of your speaker reference sample: for example, a 10-second reference sample leaves an effective window of approximately 20 seconds. I am currently working on adding batched generation capabilities to the library, along with further improvements.
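
The window arithmetic above is simple enough to sketch directly; the 30-second figure comes from the note above, and the helper name is just for illustration.

```python
# Sketch of the effective generation window described above: the ~30-second
# budget is reduced by the length of the speaker reference sample.
MAX_WINDOW_SECONDS = 30.0

def effective_window(speaker_sample_seconds: float) -> float:
    """Seconds of generation budget left after the speaker reference sample."""
    return max(0.0, MAX_WINDOW_SECONDS - speaker_sample_seconds)

print(effective_window(10.0))  # 20.0 -- matches the example in the note above
```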

---

## Dataset Attribution

The OuteTTS-0.3-500M training data incorporates various publicly available speech datasets. Below is a summary of the key data sources:

- **Mozilla Common Voice**: CC0
- **MLCommons People's Speech Dataset (selected portions)**: CC BY 4.0
- **Noisy Speech Database (Edinburgh DataShare)**: CC BY 4.0
- **Multilingual LibriSpeech (MLS)**: CC BY 4.0
- **CSTR VCTK Corpus (Edinburgh DataShare)**: CC BY 4.0
- **THCHS-30 (Open Speech and Language Resources)**: Apache-2.0
- **Zeroth-Korean (Open Speech and Language Resources)**: CC BY 4.0
- **Aishell (Open Speech and Language Resources)**: Apache-2.0
- **Other permissively licensed datasets**

## Credits & References

Special acknowledgment to the open-source community and researchers for their valuable contributions.

- [WavTokenizer GitHub](https://github.com/jishengpeng/WavTokenizer) | [WavTokenizer HF](https://huggingface.co/novateur/WavTokenizer-large-speech-75token)
- [CTC Forced Alignment](https://pytorch.org/audio/stable/tutorials/ctc_forced_alignment_api_tutorial.html)
- [Qwen-2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B)
- [OLMo-1B](https://huggingface.co/allenai/OLMo-1B-hf)