IF

7.7K posts

@ImpactFramesX

Joined August 2020
512 Following · 1.2K Followers
IF reposted
Eren Chen @ErenChenAI
The race ended before it got even started for this robot :(
130 replies · 101 reposts · 1.3K likes · 202.5K views
IF reposted
NVIDIA AI Developer @NVIDIAAIDev
Today, we released Lyra 2.0, a framework for generating persistent, explorable 3D worlds at scale, from NVIDIA Research.

Generating large-scale, complex environments is difficult for AI models. Current models often “forget” what spaces look like and lose track of movement over time, causing objects to shift, blur, or appear inconsistent. This prevents them from creating the reliable 3D environments required for downstream simulations.

Lyra 2.0 solves these issues by:
✅ Maintaining per-frame 3D geometry to retrieve past frames and establish spatial correspondences
✅ Using self-augmented training to correct its own temporal drifting

Lyra 2.0 turns an image into a 3D world you can walk through, look back, and drop a robot into for real-time rendering, simulation, and immersive applications.

➡️ Learn more: research.nvidia.com/labs/sil/proje…
📄 Read the paper: arxiv.org/abs/2604.13036
95 replies · 452 reposts · 2.8K likes · 382.6K views
IF reposted
Tencent HY @TencentHunyuan
We’re open-sourcing HY-World 2.0, a multimodal world model that generates, reconstructs, and simulates interactive *3D worlds* from text, images, and videos. Outputs can be integrated into game engines and embodied simulation pipelines.

Key highlights:
🔹 One-click world generation: turn text or an image into interactive 3D worlds automatically.
🔹 Pipeline-ready 3D outputs: editable 3D worlds for Unity and Unreal Engine, with standard 3D exports including mesh, 3DGS, and point clouds.
🔹 Unified world model system: one model family for world generation and reconstruction across synthetic and real-world scenes.
🔹 Interactive character mode: explore generated 3D worlds in real time with physics-aware movement and collision support.

✨ Apply for access: 3d.hunyuan.tencent.com/sceneTo3D
🔗 GitHub: github.com/Tencent-Hunyua…
🤗 Hugging Face: huggingface.co/tencent/HY-Wor…
📄 Technical Report: 3d-models.hunyuan.tencent.com/world/world2_0…
69 replies · 357 reposts · 2.1K likes · 281.2K views
IF reposted
852話(hakoniwa)
Lain Iwakura, serial experiments lain fan art: a Lain who looks like she could really exist. (translated from Japanese)
13 replies · 532 reposts · 3.1K likes · 89.6K views
RosarySon @SkyVirginSon
Skip If You Hate Jesus Christ Put AMEN If Jesus Christ Is Your Lord And Savior!
2.7K replies · 838 reposts · 15.3K likes · 179.1K views
IF @ImpactFramesX
@sunbaolong_2001 Powerful hack; the Comfy community will be thankful.
1 reply · 0 reposts · 0 likes · 9 views
IF reposted
有趣的80后程序员 @sunbaolong_2001
LTX 2.3 outpainting is powerful, but it breaks easily if you use the wrong masks or frame placement. I found a ComfyUI hack: hijack the VACE Outpaint node (made for Wan 2.2), convert the mask to BLACK (not grey!), and fix the mutation issues permanently. ⬛🛠️

Full Masterclass & Workflows here: 👇
🔗 youtu.be/P6N8d8fdfUA
Chinese version: youtu.be/Scemn9Mvh7k

#ComfyUITutorial #ltx23 #runninghub @RunningHub_ai
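The mask fix above is specific to that workflow, but the core step it describes can be sketched in plain code. This is a hedged illustration only, assuming "convert the mask to BLACK" means forcing grey mask pixels to pure black/white; the function name and threshold are my own, not from the tweet or from ComfyUI's node API.

```python
# Minimal sketch: binarize a grey outpaint mask to pure black/white,
# so no intermediate grey values leak into the masked region.
import numpy as np

def binarize_mask(mask: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Map every pixel below `threshold` to 0 (black), the rest to 255 (white)."""
    return np.where(mask < threshold, 0, 255).astype(np.uint8)

# A tiny grey mask: values 64 and 127 become black, 200 and 128 become white.
grey_mask = np.array([[64, 200], [127, 128]], dtype=np.uint8)
print(binarize_mask(grey_mask))
```

In a real ComfyUI graph the same idea would be applied to the mask tensor before it reaches the outpaint node; the details depend on the node pack in use.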
2 replies · 1 repost · 27 likes · 590 views
IF reposted
left curve dev @leftcurvedev_
🚨 Massive news for local AI today! @huggingface released Kernels hub huggingface.co/kernels

Right now, when you run a model locally (via Ollama, LM Studio, llama.cpp, vLLM…), you’re mostly stuck with whatever generic kernels PyTorch or the framework ships with. Those are “one-size-fits-all”: they work on most GPUs but aren’t *perfectly* optimized for your specific card, driver version, PyTorch build, or CUDA setup.

This changes the game because anyone can now upload highly optimized GPU kernels to the Hub, just like uploading a model. These kernels come pre-compiled for exact combos (e.g., RTX 4090 + PyTorch 2.6 + CUDA 12.4 + Windows/Linux), and your local tools can automatically download and load the best kernel for your hardware with zero compilation hell.

It has a real impact on local AI users:
- Faster inference: real reports of 1.7x to 2.5x speedups on the same model/hardware. That means higher tokens/second, smoother 70B+ runs, better context, less stuttering.
- Lower VRAM/power consumption: optimized kernels often use memory and power smarter.
- Easier for consumer GPUs: 4090, 5090, 3090, even 4060/AMD users get expert-level optimizations without needing to be a CUDA wizard.
- Less jank: no more “works on my machine but doesn’t work on yours”.

Expect a flood of community kernels for popular models (Llama, Qwen, DeepSeek, Mistral, etc.). The best ones will bubble up in the benchmarks! @ClementDelangue and the team have been cooking lately 🤗
clem 🤗@ClementDelangue

Introducing Kernels on the Hugging Face Hub ✨ What if shipping a GPU kernel was as easy as pushing a model?
- Pre-compiled for your exact GPU, PyTorch & OS
- Multiple kernel versions coexist in one process
- torch.compile compatible
- 1.7x–2.5x speedups over PyTorch baselines

4 replies · 38 reposts · 381 likes · 35.5K views
IF reposted
ComfyUI @ComfyUI
Seedance 2.0 is now live in ComfyUI for everyone. This state-of-the-art model brings us one step closer to production-quality video generations.
24 replies · 62 reposts · 593 likes · 59.5K views
Mr. Nobody @MmisterNobody
This one was hard to watch. Christina Koch forgot her lines and froze on stage 😬 These actors need to research their lines more.
1.3K replies · 3.7K reposts · 16.5K likes · 833.9K views
IF reposted
Become A Saint @BeSaintly
This is a map of the soul using a Seraphim. Absolutely fascinating video:
24 replies · 482 reposts · 2.9K likes · 64K views
IF reposted
abymael @abymaelx
"One thing I learned in these 10 years in the ICU was the 'little hand.' It's one glove on top and another underneath, and we tie them together so the patient always feels like someone is holding their hand. And believe it or not, the vital signs, like blood pressure, always stay good." (translated from Portuguese)
159 replies · 3.5K reposts · 53.6K likes · 3M views
IF reposted
jtydhr88 @jtydhr88
It’s time to bring real camera motion into ComfyUI. Continuing to develop the ComfyUI Mesh2Motion plugin — I’ve finally added camera presets this time: 100+ cinematic camera presets, with video recording and output. Now the question is: should I integrate this into the native 3D nodes?
8 replies · 25 reposts · 210 likes · 10.4K views
IF reposted
Pietro Baudin @pietrobaudin
My toxic trait is assuming I can emulate this with less than 10 minutes of simulations by yuanzhengflowerarrangement
12 replies · 124 reposts · 2.5K likes · 807.2K views
IF reposted
Andac Guven @andacgvn
Minimax dropped a bombshell! This is honestly the most innovative tool I've seen lately. Agents could already handle some tasks, but those were mostly limited to written work. MMX-CLI gives agents the ability to speak, make images, create videos, even compose music. Existing Token Plan subscribers can use it right away. An agent that can work multimodally could make building many new tools easier. I was considering a Xiaomi subscription, but these Minimax tools really turned my head. Is anyone already a Minimax Token Plan subscriber? If you could share how it holds up on speed and outages, it would help us decide. (translated from Turkish)
MiniMax (official) @MiniMax_AI

Introducing MMX-CLI — our first piece of infrastructure built not for humans, but for Agents.

Your Agent can read, think, and write. But ask it to sing, paint, or show you a world it's never seen — and it falls silent. Not because it doesn't understand, but because it has no mouth, no hands, no camera. Today, that changes.

MMX-CLI gives every Agent seven new senses — image, video, voice, music, vision, search, conversation — powered by MiniMax's full-modal stack, today's SOTA across mainstream omni-modal models.

One command: mmx
Agent-native I/O. Zero MCP glue. Runs on your existing Token Plan.

Two lines to give your Agent a voice:
npx skills add MiniMax-AI/cli -y -g
npm install -g mmx-cli

Then tell it: "you have mmx commands available." It'll learn the rest.

Github → github.com/MiniMax-AI/cli
Token Plan: platform.minimax.io/subscribe/toke…

23 replies · 33 reposts · 390 likes · 43.5K views
IF reposted