Ihtesham Ali @ihtesham2005
I set this up at 4am and spent an hour just talking to it.
It's called Open-LLM-VTuber. You get a Live2D animated AI companion that runs completely offline, sees your screen, hears your voice, and never forgets your conversations.
The voice interruption system is different from anything I've seen. The AI can't hear its own TTS output, so there's no feedback loop and no awkward pauses. It feels like a real conversation.
The inner thoughts feature floored me. You see what the AI is thinking as a separate text layer that never gets spoken. You watch the reasoning happen in real time before the words come out.
Pet mode puts the avatar on your desktop as a transparent overlay that floats above every window without blocking anything. Drag it anywhere. It follows you.
The persona is entirely yours. Import any Live2D model. Write any system prompt. Clone any voice. Swap the entire LLM backend from Ollama to Claude to DeepSeek in a single config line.
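The backend swap is config-driven. As a rough sketch of what that one-line change looks like — key names here are illustrative, not the project's actual schema, so check its conf.yaml for the real options:

```yaml
# Hypothetical config sketch — keys are illustrative only.
llm_provider: ollama          # change this one line to "claude" or "deepseek"
model: llama3.1:8b            # model name for the chosen provider
base_url: http://localhost:11434   # local Ollama endpoint (assumption)
```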
100,000+ conversations have reportedly already happened through this project, according to user reviews.
That number is going to keep moving.
github.com/Open-LLM-VTube…
6.1K stars. MIT License. 100% open source.
What would you use this for... co-working, learning, coding, or just chaos?