
Josh
@JoshDev_Anime
346 posts
That AI Guy🎗Content Creator🎗Animator. ✨ Grab my free ebooks on my linktree





Boba Anime 1.4 Video Model is here! 🎬 Version 1.4 brings significant upgrades in motion, dialogue, and character consistency. Making the leap from 1.0 to 1.4 took a huge effort, and we couldn't be more proud of the results. Try it now: boba.video


We’re excited to announce the release and open-sourcing of HunyuanImage 3.0, the largest and most powerful open-source text-to-image model to date, with over 80 billion total parameters, of which 13 billion are activated per token during inference. Its quality is fully comparable to the industry’s flagship closed-source models. 🚀🚀🚀

HunyuanImage 3.0 originates from our internally developed native multimodal large language model, fine-tuned and post-trained for text-to-image generation. This unique foundation gives the model a powerful set of capabilities:
✅ Reason with world knowledge
✅ Understand complex, thousand-word prompts
✅ Generate precise text within images

Unlike traditional DiT-architecture image generation models, HunyuanImage 3.0’s MoE architecture uses a Transfusion-based approach to deeply couple Diffusion and LLM training into a single, powerful system. Built on Hunyuan-A13B, HunyuanImage 3.0 was trained on a massive dataset: 5 billion image-text pairs, video frames, interleaved image-text data, and 6 trillion tokens of text corpora. This hybrid training across multimodal generation, understanding, and LLM capabilities allows the model to seamlessly integrate multiple tasks.

Whether you're an illustrator, designer, or creator, this is built to slash your workflow from hours to minutes. HunyuanImage 3.0 can generate intricate text, detailed comics, expressive emojis, and lively, engaging illustrations for educational content. The current release focuses solely on text-to-image generation; future updates will add image-to-image, image editing, multi-turn interaction, and more.

👉🏻 Try it now: hunyuan.tencent.com/image
🔗 GitHub: github.com/Tencent-Hunyua…
🤗 Hugging Face: huggingface.co/tencent/Hunyua…
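The sparse-MoE activation the post describes (13 billion of 80 billion parameters active per token) comes down to a router picking a few experts per token. A toy Python sketch of top-k expert routing; the expert count and per-expert size below are made-up illustration values, not HunyuanImage 3.0's real configuration:

```python
import random

def top_k_experts(scores, k):
    """Return the indices of the k highest-scoring experts."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# Hypothetical sizes, chosen only to illustrate the active/total ratio.
num_experts = 64
params_per_expert = 1.25e9
k = 8  # experts activated per token

total_params = num_experts * params_per_expert
active_params = k * params_per_expert

# The router scores every expert for a token, then keeps only the top k.
scores = [random.random() for _ in range(num_experts)]
chosen = top_k_experts(scores, k)

print(f"active fraction: {active_params / total_params:.2%}")  # prints "active fraction: 12.50%"
```

Only the chosen experts run a forward pass for that token, which is why a model can hold far more parameters than it spends compute on per token.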


🔥 Qwen-Image-Edit-2509 IS LIVE, and it’s a GAME CHANGER. 🔥 We didn’t just upgrade it. We rebuilt it for creators, designers, and AI tinkerers who demand pixel-perfect control.

✅ Multi-Image Editing? YES. Drag in “person + product” or “person + scene” and it blends them like magic. No more Franken-images.
✅ Single-Image? Rock-Solid Consistency.
• 👤 Faces stay you, through poses, filters, and wild styles.
• 🛍️ Products keep their identity, ideal for ads & posters.
• ✍️ Text? Edit everything: content, font, color, even material texture.
✅ ControlNet Built-In. Depth. Edges. Keypoints. Plug & play precision.

✨ Blog: qwen.ai/blog?id=7a9009…
💬 QwenChat: chat.qwen.ai/?inputFeature=…
🐙 GitHub: github.com/QwenLM/Qwen-Im…
🤗 HuggingFace: huggingface.co/Qwen/Qwen-Imag…
🧩 ModelScope: modelscope.cn/models/Qwen/Qw…

🚀 WAN 2.5 · Global Debut on WaveSpeedAI! Multilingual, cinematic, and fully audio-synced: the next-level AI video model is finally here. 🔥 See it in action and experience AI video like never before! Try Wan 2.5 now!
🔗 wavespeed.ai/collections/wa…
🔗 Blog: wavespeed.ai/blog/posts/The…
#WAN2_5 #AI #VideoRevolution #WaveSpeedAI #AIVideo #MultilingualAI #CinematicAI #LipSyncAI #DigitalCreativity #NextGenAI #ContentCreation #VideoAI #AIInnovation

Get spatial and character consistency by combining AI with 3D. Learn to create your avatar and generate quality clips with built-in prompts and tools in this thread 🧵👇



