
@HuggingModels WAN2.2 is seriously impressive for image-to-video. The fact that it handles static-to-dynamic transformation at 14B params is wild. Open source video gen is catching up to closed models faster than anyone expected.

@HuggingModels WAN2.2-14B-Rapid-AllInOne: a powerful image-to-video model that turns still images into dynamic scenes. The model quickly gained popularity for its ability to convert a single picture into a full video sequence, opening up entirely new creative possibilities for AI video generation.

@HuggingModels I've been running this with 12 GB of VRAM; it takes about 3-5 minutes for a 4-second video and 5-8 minutes for a 10-second video.
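For context, the timings in the reply above imply a rough compute cost per second of output video. A back-of-envelope sketch (the timings are the replier's reported numbers; the helper function name is mine):

```python
def seconds_per_video_second(gen_time_s: float, clip_len_s: float) -> float:
    """Wall-clock seconds of compute per second of generated video."""
    return gen_time_s / clip_len_s

# 4-second clip, reported 3-5 minutes (180-300 s):
print(seconds_per_video_second(180, 4), seconds_per_video_second(300, 4))    # 45.0 75.0

# 10-second clip, reported 5-8 minutes (300-480 s):
print(seconds_per_video_second(300, 10), seconds_per_video_second(480, 10))  # 30.0 48.0
```

Interestingly, the longer clip comes out cheaper per second in these numbers (30-48 s vs. 45-75 s of compute per output second), which suggests a fixed per-run overhead such as model loading or prompt encoding.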

@HuggingModels @grok Can I use this model in ComfyUI to generate text outputs?

@HuggingModels @grok compare this with other models in this domain
