
OuteTTS

🤗 Hugging Face | 💬 Discord | 𝕏 X (Twitter) | 🌐 Website | 📰 Blog

OuteTTS is an experimental text-to-speech model that uses a pure language modeling approach to generate speech, without architectural changes to the foundation model itself.

Compatibility

OuteTTS supports the following backends:

  • Hugging Face Transformers
  • GGUF (llama.cpp)
  • ExLlamaV2
  • Transformers.js

Installation

Python

pip install outetts

Node.js / Browser

npm i outetts

Usage

Interfaces

The outetts package provides two interfaces for OuteTTS, each supporting different model versions:

Interface     Supported Models          Documentation
Interface v1  OuteTTS-0.2, OuteTTS-0.1  View Documentation
Interface v2  OuteTTS-0.3               View Documentation
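The model-to-interface mapping above can be sketched as a small helper. The model and interface names come from the table; the helper itself is illustrative and not part of the outetts API:

```python
# Illustrative helper (not part of the outetts API): pick the
# interface version for a given OuteTTS model, per the table above.
INTERFACE_FOR_MODEL = {
    "OuteTTS-0.1": "v1",
    "OuteTTS-0.2": "v1",
    "OuteTTS-0.3": "v2",
}

def interface_version(model_name: str) -> str:
    """Return the interface version ("v1" or "v2") for a model name."""
    try:
        return INTERFACE_FOR_MODEL[model_name]
    except KeyError:
        raise ValueError(f"Unknown OuteTTS model: {model_name}")

print(interface_version("OuteTTS-0.3"))  # v2
```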

Generation Performance: The model performs best with 30-second generation batches. This window shrinks with the length of your speaker reference sample: for example, with a 10-second reference, the effective window is approximately 20 seconds.
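The arithmetic in the note above can be sketched as follows. The 30-second figure comes from this section; the function name is illustrative, not part of the outetts API:

```python
# Illustrative: seconds left in the generation window after
# accounting for the speaker reference sample (per the note above).
MAX_WINDOW_SECONDS = 30.0

def effective_window(reference_seconds: float) -> float:
    """Approximate seconds available for new speech."""
    return max(MAX_WINDOW_SECONDS - reference_seconds, 0.0)

print(effective_window(10.0))  # 20.0
```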

Speaker Profile Recommendations

To achieve the best results when creating a speaker profile, consider the following recommendations:

  1. Audio Clip Duration:
     • Use an audio clip of around 10 seconds.
     • This duration provides sufficient data for the model to learn the speaker's characteristics while keeping the input manageable.
  2. Audio Quality:
     • Ensure the audio is clear and noise-free. Background noise or distortion reduces the model's ability to extract accurate voice features.
  3. Speaker Familiarity:
     • The model performs best with voices similar to those seen during training. A voice that differs significantly from typical training samples (e.g., a unique accent or rare vocal characteristics) may be replicated inaccurately.
     • In such cases, fine-tuning the model on your target speaker's voice may yield a better representation.
  4. Parameter Adjustments:
     • Adjust parameters such as temperature in the generate function to refine the expressive quality and consistency of the synthesized voice.
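Temperature, mentioned in point 4, rescales the model's output distribution before a token is sampled. The sketch below shows the standard mechanism in isolation; it does not call outetts itself:

```python
import math

def apply_temperature(logits, temperature):
    """Softmax over logits / temperature: lower temperature sharpens
    the distribution (more consistent output), higher temperature
    flattens it (more varied, expressive output)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

sharp = apply_temperature([2.0, 1.0, 0.1], temperature=0.5)
flat = apply_temperature([2.0, 1.0, 0.1], temperature=2.0)
# The top token dominates more at low temperature.
assert sharp[0] > flat[0]
```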

Credits