Pinned Tweet
esha
1.1K posts

esha
@bionicbodhi
founder @ hooked (https://t.co/kq8SvWuIVF)
Bay Area · Joined April 2008
407 Following · 19.7K Followers

@bionicbodhi It’s fun to have new tools or new updates coming out almost every week
It’s just we don’t have enough time to sleep
esha reposted

👋Hello from @monster_library. Welcome to Monster Week!🎉 For the next 7 days, we're taking you on a journey through creativity, imagination, & pure magic ✨. Join us as we dive deep into world-building and storytelling, leading up to an exciting new short story reveal this weekend created by you! 📖🚀
esha reposted

Made it just in time for the madness that is #GEN48 by @runwayml: 3,000 teams, 48 hours, 300,000 credits, drama, no sleep, creativity overdrive. Like a chef creating a dish and asking the ingredients what they want to become. Watch my entry "Breathkeeper" here: youtu.be/dTMPUULbpus


esha reposted

Do I need to say how much fun I have with what I do?
Midjourney + Luma + Suno
Gizem Akdag @gizakdag
Loving the vibes with this one: --sref 529796788
esha reposted

Loopy
Taming Audio-Driven Portrait Avatar with Long-Term Motion Dependency
paper page: huggingface.co/papers/2409.02…
With the introduction of diffusion-based video generation techniques, audio-conditioned human video generation has recently achieved significant breakthroughs in both the naturalness of motion and the synthesis of portrait details. Due to the limited control of audio signals in driving human motion, existing methods often add auxiliary spatial signals to stabilize movements, which may compromise the naturalness and freedom of motion. In this paper, we propose an end-to-end audio-only conditioned video diffusion model named Loopy. Specifically, we designed an inter- and intra-clip temporal module and an audio-to-latents module, enabling the model to leverage long-term motion information from the data to learn natural motion patterns and improving audio-portrait movement correlation. This method removes the need for manually specified spatial motion templates used in existing methods to constrain motion during inference. Extensive experiments show that Loopy outperforms recent audio-driven portrait diffusion models, delivering more lifelike and high-quality results across various scenarios.
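The abstract above names two components: an audio-to-latents module and cross-modal conditioning of video latents on audio. The following is a generic NumPy sketch of that idea only, not Loopy's actual code; the linear projection, the scaled dot-product cross-attention form, and all names and dimensions are assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def audio_to_latents(audio_feats, proj):
    """Project per-frame audio features into the latent dimension
    (a stand-in for an 'audio-to-latents' module)."""
    return audio_feats @ proj

def cross_attend(video_latents, audio_latents):
    """Video latents attend over audio latents via scaled
    dot-product attention, returning an audio-conditioned update."""
    d = video_latents.shape[-1]
    scores = video_latents @ audio_latents.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ audio_latents

# Toy shapes: T frames, hypothetical feature/latent sizes.
rng = np.random.default_rng(0)
T, d_audio, d_latent = 16, 32, 8
audio = rng.normal(size=(T, d_audio))
proj = rng.normal(size=(d_audio, d_latent)) / np.sqrt(d_audio)
video = rng.normal(size=(T, d_latent))

a_lat = audio_to_latents(audio, proj)
fused = video + cross_attend(video, a_lat)  # residual audio conditioning
print(fused.shape)  # (16, 8)
```

In a real diffusion model these latents would feed a denoising network over many clips; here the sketch only shows the shape flow of audio-driven conditioning without any spatial motion template, which is the constraint the abstract says Loopy removes.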

I wrote a love story. “Praying for A Soul That Better’s Mine” is a performance art piece in AI mode more than a music video. It’s deep and has personal meaning because it’s about my fiancée @chariskm’s and my love story; it’s for her ❤️ Outside my series and films, my work is gonna be a lot different now that I feel I can do a lot more using everything I’ve learned. I have a side to my storytelling I haven’t even shown you yet. I’m excited for Fall and Winter 🙃
This is meant to feel good and make you feel love and remember we’re never in this strange world alone. Every one of us is human, meaning every one of us is family. ❤️

@machine_mythos I'd love to chat with you about working on a project together. What's the best way to message you?
