Add a voice to your Ollama model. Supports real-time speech generation driven by the streaming output of your LLM.
Currently supports MeloTTS for speech generation and Ollama for LLM inference.
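One way real-time speech from a streaming LLM can work is to buffer the token stream from Ollama and synthesize each sentence as soon as it is complete, so audio starts playing before the full response has finished. The sketch below illustrates that idea using the official `ollama` Python client and the MeloTTS `melo.api.TTS` interface; the model name, sentence splitting, and output paths are assumptions for illustration, not this project's actual implementation.

```python
# Minimal sketch: buffer streamed tokens from Ollama, flush complete sentences
# to MeloTTS so playback can begin before generation ends.
# Model name, sentence splitting, and file paths are illustrative assumptions.
import re

import ollama                 # official Ollama Python client
from melo.api import TTS      # MeloTTS

tts = TTS(language="EN", device="cpu")
speaker_id = tts.hps.data.spk2id["EN-US"]

def speak(text: str, index: int) -> None:
    """Synthesize one sentence to a WAV file (swap in real audio playback as needed)."""
    tts.tts_to_file(text, speaker_id, f"chunk_{index:03d}.wav", speed=1.0)

buffer, count = "", 0
stream = ollama.chat(
    model="llama3",            # any local Ollama model
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,
)
for chunk in stream:
    buffer += chunk["message"]["content"]
    # Flush every complete sentence currently sitting in the buffer.
    sentences = re.split(r"(?<=[.!?])\s+", buffer)
    for sentence in sentences[:-1]:
        speak(sentence, count)
        count += 1
    buffer = sentences[-1]

if buffer.strip():             # flush any trailing text after the stream ends
    speak(buffer, count)
```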
- Real-time TTS
- Streaming output from LLM
- Ability to switch between TTS engines such as Tortoise, Coqui, or ElevenLabs (see the interface sketch after this list)
- Easy-to-install Docker container
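Because the speech backend is meant to be swappable, the following hypothetical sketch shows how engines like MeloTTS, Tortoise, Coqui, or ElevenLabs could sit behind a single interface. The `TTSEngine` protocol and `MeloEngine` class are illustrative names, not this project's actual API.

```python
# Hypothetical pluggable TTS backend: callers depend only on TTSEngine,
# so a Tortoise, Coqui, or ElevenLabs implementation can be dropped in
# without touching the rest of the pipeline.
from typing import Protocol


class TTSEngine(Protocol):
    def synthesize(self, text: str, out_path: str) -> str:
        """Render `text` to an audio file and return its path."""
        ...


class MeloEngine:
    """MeloTTS backend (assumes the melo.api.TTS interface)."""

    def __init__(self, language: str = "EN", device: str = "cpu") -> None:
        from melo.api import TTS
        self._tts = TTS(language=language, device=device)
        self._speaker = self._tts.hps.data.spk2id["EN-US"]

    def synthesize(self, text: str, out_path: str) -> str:
        self._tts.tts_to_file(text, self._speaker, out_path)
        return out_path


def speak(engine: TTSEngine, text: str, out_path: str = "out.wav") -> str:
    # Any object implementing TTSEngine works here.
    return engine.synthesize(text, out_path)
```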