

Hugging Face
@huggingface
The AI community building the future. https://t.co/TpiXQMQ9rZ


I was messing with the OpenAI base URL in Cursor and caught this: accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast. So Composer 2 is just Kimi K2.5 with RL? At least rename the model ID.

Yep, Composer 2 started from an open-source base! We will do full pretraining in the future. Only ~1/4 of the compute spent on the final model came from the base, the rest is from our training. This is why evals are very different. And yes, we are following the license through our inference partner terms.
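For context on how a model ID like this surfaces: OpenAI-compatible endpoints expose a standard model listing at `GET {base_url}/v1/models`, and each entry's `id` field is the served model identifier. A minimal sketch of parsing such a response; the response body below is illustrative (only the model ID quoted in the tweet is real, the rest of the payload shape follows the OpenAI list format):

```python
import json

# A hypothetical /v1/models response body in the OpenAI-compatible format.
# Only the model ID string is taken from the tweet above; the surrounding
# payload is a placeholder following the standard list shape.
sample_response = json.dumps({
    "object": "list",
    "data": [
        {
            "id": "accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast",
            "object": "model",
        },
    ],
})

def list_model_ids(body: str) -> list[str]:
    """Extract model IDs from an OpenAI-style /v1/models response body."""
    return [m["id"] for m in json.loads(body).get("data", [])]

model_ids = list_model_ids(sample_response)
print(model_ids)
```

In practice you would fetch the body with an authenticated `GET` against whatever base URL the client is configured with, which is presumably how the poster spotted the ID.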





Putting out a wish to the universe: I need more compute. If I can get more, I will make sure every machine, from a small phone to a bootstrapped RTX 3090 node, can run frontier intelligence fast with minimal intelligence loss. I have hit page 2 of Hugging Face, released three model-family compressions, and got GLM-4.7 running on a MacBook: huggingface.co/0xsero. My beast just isn't enough, and I have already spent $2k USD renting GPUs on top of the credits provided by Prime Intellect and Hotaisle.

If you believe in what I do, help me get this to Nvidia; maybe they will bless me with the power to keep making local AI more accessible 🙏




We don’t generate videos. 🎬 We generate worlds from videos. 🌍

Introducing InSpatio-World, the world's first open-source real-time 4D world model‼️

Your input: a video clip
Our output: a dynamic, navigable, persistent world

🕹️ explore freely across viewpoints
⏪ control time forward and backward
🔓 open-source and ready to build on :)

Live demo: 🔗 world.inspatio.com
Code & weights: 🔗 github.com/inspatio/inspa…
Project page: 🔗 inspatio.github.io/inspatio-world

