---
license: other
license_name: coqui-public-model-license
license_link: https://coqui.ai/cpml
library_name: coqui
pipeline_tag: text-to-speech
widget:
- text: "Once when I was six years old I saw a magnificent picture"
---

# ⓍTTS_v2 - C-3PO Fine-Tuned Voice Model (Borcherding/XTTS-v2_C3PO)

Artistic Whimsy and Galactic Musings

The ⓍTTS (Satirical Text-to-Speech) model, residing within the Borcherding/XTTS-v2_C3PO repository, transcends mere technology. It becomes an art piece: an interplay of code, creativity, and humor. Imagine a digital gallery where visitors encounter C-3PO's satirical musings echoing through the virtual halls.

## Key Features

- **C-3PO's Quirky Voice**: Leveraging 20 unique voice lines sourced from Voicy, the ⓍTTS model captures the essence of C-3PO's distinctive speech patterns. Expect a delightful blend of protocol droid formality, unexpected commentary, and occasional existential musings.
- **Satirical Tone**: Rather than adhering to a neutral or serious tone, the ⓍTTS model revels in satire. It playfully exaggerates intonation, injects humorous pauses, and occasionally breaks the fourth wall. Each voice line becomes a brushstroke on the canvas of imagination.

This repository hosts a fine-tuned version of the ⓍTTS model, utilizing 20 unique voice lines from C-3PO, the iconic Star Wars character. The voice lines were sourced from [Voicy](https://www.voicy.network/official-soundboards/movies/c3po).

![C-3PO](c3po_1.png)

Listen to a sample of the ⓍTTS_v2 - C-3PO Fine-Tuned Model:

<audio controls>
  <source src="https://huggingface.co/Borcherding/XTTS-v2_C3PO/raw/main/sample_c3po_generated.wav" type="audio/wav">
  Your browser does not support the audio element.
</audio>

Here's a C-3PO mp3 voice line clip from the training data:

<audio controls>
  <source src="https://huggingface.co/Borcherding/XTTS-v2_C3PO/raw/main/reference2.mp3" type="audio/mpeg">
  Your browser does not support the audio element.
</audio>

## Features

- 🎙️ **Voice Cloning**: Realistic voice cloning with just a short audio clip.
- 🌍 **Multi-Lingual Support**: Generates speech in 17 different languages while maintaining C-3PO's distinct voice.
- 😃 **Emotion & Style Transfer**: Captures the emotional tone and style of the original voice.
- 🔄 **Cross-Language Cloning**: Maintains the unique voice characteristics across different languages.
- 🎧 **High-Quality Audio**: Outputs at a 24kHz sampling rate for clear and high-fidelity audio.

## Supported Languages

The model supports the following 17 languages: English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu), Korean (ko), and Hindi (hi).
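
As a quick sketch of cross-language cloning, the snippet below loads the fine-tuned checkpoint with the 🐸TTS API and renders a French line in C-3PO's voice. The checkpoint, config, and reference-clip paths are placeholders for a local copy of this repository, the French prompt is purely illustrative, and a CUDA GPU is assumed.

```python
from TTS.api import TTS

# Placeholder paths: point these at your local clone of Borcherding/XTTS-v2_C3PO.
tts = TTS(
    model_path="/path/to/XTTS-v2_C3PO/",
    config_path="/path/to/XTTS-v2_C3PO/config.json",
    progress_bar=False,
).to("cuda")

# Same C-3PO reference clip, different target language: the voice characteristics
# carry over while the text is spoken in French.
tts.tts_to_file(
    text="Bonjour, je suis C-3PO, spécialiste des relations entre humains et cyborgs.",
    file_path="c3po_fr.wav",
    speaker_wav="/path/to/c3po_reference.wav",
    language="fr",
)
```

Any of the 17 language codes listed above can be substituted for `fr`.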
## Usage in Roll Cage

🤖💬 Boost your AI experience with this Ollama add-on! Enjoy real-time audio 🎙️ and text 🔍 chats, LaTeX rendering 📜, agent automations ⚙️, workflows 🔄, text-to-image 📝➡️🖼️, image-to-text 🖼️➡️🔤, and image-to-video 🖼️➡️🎥 transformations. Fine-tune text 📝, voice 🗣️, and image 🖼️ generation. Includes Windows macro controls 🖥️ and DuckDuckGo search.

[ollama_agent_roll_cage (OARC)](https://github.com/Leoleojames1/ollama_agent_roll_cage) is a completely local Python & CMD toolset add-on for the Ollama command line interface. The OARC toolset automates the creation of agents, giving the user more control over the likely output. It provides SYSTEM prompt templates for each ./Modelfile, allowing users to design and deploy custom agents quickly. Users can select which local model file is used in agent construction with the desired system prompt.

## Why This Model for Roll Cage?

The C-3PO fine-tuned model was designed for the Roll Cage chatbot to enhance user interaction with a familiar and beloved voice. By incorporating C-3PO's distinctive speech patterns and tone, Roll Cage becomes more engaging and entertaining. The addition of multi-lingual support and emotion transfer ensures that the chatbot can communicate effectively and expressively across different languages and contexts, providing a more immersive experience for users.

## CoquiTTS and Resources

- 🐸💬 **CoquiTTS**: [Coqui TTS on GitHub](https://github.com/coqui-ai/TTS)
- 📚 **Documentation**: [ReadTheDocs](https://tts.readthedocs.io/en/latest/)
- 👩‍💻 **Questions**: [GitHub Discussions](https://github.com/coqui-ai/TTS/discussions)
- 🗯 **Community**: [Discord](https://discord.gg/5eXr5seRrv)

## License

This model is licensed under the [Coqui Public Model License](https://coqui.ai/cpml). Read more about the origin story of CPML [here](https://coqui.ai/blog/tts/cpml).

## Contact

Join our 🐸Community on [Discord](https://discord.gg/fBC58unbKE) and follow us on [Twitter](https://twitter.com/coqui_ai). For inquiries, email us at [email protected].

Using 🐸TTS API:

```python
from TTS.api import TTS

# Load the fine-tuned checkpoint and config (adjust the paths to your local copy
# of this repository).
tts = TTS(model_path="D:/CodingGit_StorageHDD/Ollama_Custom_Mods/ollama_agent_roll_cage/AgentFiles/Ignored_TTS/XTTS-v2_C3PO/",
          config_path="D:/CodingGit_StorageHDD/Ollama_Custom_Mods/ollama_agent_roll_cage/AgentFiles/Ignored_TTS/XTTS-v2_C3PO/config.json",
          progress_bar=False).to("cuda")

# Generate speech by cloning a voice using default settings.
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
                file_path="output.wav",
                speaker_wav="/path/to/target/speaker.wav",
                language="en")
```

Using 🐸TTS Command line:

```console
tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \
    --text "Bugün okula gitmek istemiyorum." \
    --speaker_wav /path/to/target/speaker.wav \
    --language_idx tr \
    --use_cuda true
```

Using the model directly:

```python
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

# Load the fine-tuned checkpoint and its config from a local copy of this repository.
config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True)
model.cuda()

# Clone the reference speaker and synthesize the prompt.
outputs = model.synthesize(
    "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
    config,
    speaker_wav="/data/TTS-public/_refclips/3.wav",
    gpt_cond_len=3,
    language="en",
)
```
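
The synthesize call returns the generated waveform in memory rather than writing a file. The minimal sketch below writes it to disk, assuming `outputs["wav"]` holds the waveform, that soundfile is installed, and that the audio comes out at the 24kHz rate advertised above.

```python
import numpy as np
import soundfile as sf

wav = outputs["wav"]
# If the waveform comes back as a torch tensor, move it to CPU/NumPy first.
if hasattr(wav, "detach"):
    wav = wav.detach().cpu().numpy()

# Assumed 24kHz output rate, matching the sampling rate advertised above.
sf.write("c3po_direct_output.wav", np.asarray(wav, dtype=np.float32).squeeze(), 24000)
```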