Hugging Face

13.1K posts


@huggingface

The AI community building the future. https://t.co/TpiXQMQ9rZ

NYC and Paris and 🌏 · Joined September 2016
220 Following · 649.5K Followers
Hugging Face reposted
clem 🤗
clem 🤗@ClementDelangue·
Would be so cool if OpenAI open-sourced Sora as they're shutting down the app! Would be an amazing contribution to the field and make all the efforts of the teams working on it even more meaningful!
Sora@soraofficialapp

We’re saying goodbye to the Sora app. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. We’ll share more soon, including timelines for the app and API and details on preserving your work. – The Sora Team

104 · 107 · 1.6K · 131.3K
Hugging Face reposted
Victor M
Victor M@victormustar·
Hugging Face V2 is going to be wild 👀
Lewis Tunstall@_lewtun

You can now pretrain LLMs entirely on the HF Hub 💥 Last week, @OpenAI launched a competition to see who can pretrain the best LLM in under 10 minutes. So over the weekend, I made a little demo to automate this end-to-end using the Hub as the infra layer:
- Jobs to scale compute
- Buckets to store all experiments
- Trackio to log all the metrics
The cool thing here is that everything is launched locally: no ssh shenanigans into a cluster or fighting with colleagues over storage and GPUs ⚔️
All that's left is coming up with new ideas, but luckily Codex can automate that part too 😁 Can I have a job now please @reach_vb 🙏?

3 · 25 · 210 · 38.1K
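The workflow in the post above (Jobs for compute, Buckets for experiments, Trackio for metrics) suggests a simple local pattern: expand a hyperparameter grid into one config per experiment, then submit each as a Hub Job. The exact Jobs/Buckets/Trackio calls are not shown in the post, so this is a minimal stdlib sketch of the sweep-generation side only; `launch` is a hypothetical stand-in for a real submission (e.g. via the `hf jobs` CLI), and the hyperparameter names are illustrative.

```python
from itertools import product

# Illustrative sweep over pretraining hyperparameters; names and values
# are made up for this sketch, not taken from the post.
GRID = {
    "lr": [3e-4, 6e-4],
    "n_layer": [6, 12],
    "seq_len": [512, 1024],
}

def make_experiments(grid):
    """Expand a dict of lists into one config dict per combination."""
    keys = list(grid)
    return [dict(zip(keys, values)) for values in product(*grid.values())]

def launch(config):
    # Hypothetical stand-in for launching a Hub Job; a real version would
    # shell out to `hf jobs run ...` or use huggingface_hub. Here we just
    # attach a fake job id derived from the config.
    job_id = f"job-{hash(tuple(sorted(config.items()))) & 0xFFFF:04x}"
    return {"job_id": job_id, **config}

if __name__ == "__main__":
    jobs = [launch(c) for c in make_experiments(GRID)]
    print(f"launched {len(jobs)} experiments")
```

The payoff of generating the grid locally is that every experiment is reproducible from its config dict, which is also what you would log to Trackio and store alongside results in a Bucket.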
Hugging Face reposted
Bryan Catanzaro
Bryan Catanzaro@ctnzr·
Thank you to everyone in the community who is testing and using Nemotron models. It's great to see Nemotron-Cascade-2, Nemotron-3-Super and Nemotron-3-Nano trending on HF. The Nemotron team is working hard to incorporate all your feedback into Nemotron 4. And yes, Nemotron 3 Ultra is still on track for release. huggingface.co/models?pipelin…
15 · 35 · 171 · 30.5K
Hugging Face reposted
Rémi Ouazan
Rémi Ouazan@remi_or_·
Synthetic data generation is now native in transformers 🔥 Last week, transformers continuous batching (CB) hit 84% of vLLM throughput. This week, we tuned torch.compile: now we are at 95% for 8K generation length 🦾 The gap isn't closing anymore. It's gone.💀
3 · 25 · 96 · 11.8K
Hugging Face reposted
Victor M
Victor M@victormustar·
Now available on Hugging Face: hf-mount 🧑‍🚀 The team really cooked, still wrapping my head around everything possible, but you can do things like:
- mount a 5TB dataset as a local folder and query only the parts you need with DuckDB (✅ works)
- browse any model repo with ls/cat like it's a USB drive
- use a shared read-write bucket as a team drive for ML artifacts
- drop the init container that downloads models in your k8s pods
- point llama.cpp at a mounted GGUF and run inference (infinite storage??)
9 · 26 · 164 · 17.5K
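The first bullet in the post above rests on one idea: a mounted dataset behaves like an ordinary folder, so you only ever touch the shards you need, which is what lets a query engine like DuckDB read a 5TB dataset lazily. A hedged stdlib sketch of that access pattern, with an illustrative directory layout and a hypothetical `select_shards` helper (neither is part of hf-mount):

```python
import tempfile
from pathlib import Path

def select_shards(root, split, limit=2):
    """Pick only the first few shards of one split. Over an hf-mount
    style filesystem, files outside this selection would never be
    fetched from the Hub at all."""
    shards = sorted(Path(root).glob(f"{split}/*.parquet"))
    return shards[:limit]

if __name__ == "__main__":
    # Stand-in directory tree; with a real mount this would be the
    # mount point instead of a temp dir.
    root = Path(tempfile.mkdtemp())
    for split in ("data", "sample"):
        (root / split).mkdir()
        for i in range(5):
            (root / split / f"shard-{i:05d}.parquet").write_bytes(b"\x00")
    picked = select_shards(root, "data")
    print([p.name for p in picked])
```

A SQL engine pointed at the same mount does the equivalent automatically: projection and predicate pushdown mean it reads only the columns and row groups a query needs, not the whole 5TB.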
Hugging Face reposted
clem 🤗
clem 🤗@ClementDelangue·
Local AI is free, fast & secure! So today we're introducing hf-mount: attach any storage bucket, model or dataset from @huggingface as a local filesystem. This is a game changer, as it allows you to attach remote storage that is 100x bigger than your local machine's disk. This is also perfect for Agentic storage!! Let's go!
49 · 168 · 971 · 163.6K
Hugging Face reposted
Ai2
Ai2@allen_ai·
Today we're releasing MolmoWeb, an open source agent that can navigate + complete tasks in a browser on your behalf. Built on Molmo 2 in 4B & 8B sizes, it sets a new open-weight SOTA across four major web-agent benchmarks & even surpasses agents built on proprietary models. 🧵
14 · 93 · 645 · 75.9K
Hugging Face reposted
Julien Chaumond
Julien Chaumond@julien_c·
hf-mount

Attach any Storage Bucket, model or dataset from @huggingface as a local filesystem.

This is a game changer, as it allows you to attach remote storage that is 100x bigger than your local machine's disk. This is also perfect for Agentic storage!!

Read-write for Storage Buckets, read-only for models and datasets.

Here's an example with FineWeb-edu (a 5TB slice of the Web):

1️⃣> hf-mount start repo datasets/HuggingFaceFW/fineweb-edu /tmp/fineweb

It takes a few seconds to mount, and then:

2️⃣> du -h -d1 /tmp/fineweb
4.1T ./data
1.2T ./sample
5.3T .

🤯😮 Two backends are available: NFS (recommended) and FUSE. Let's f**ing go 💪
9 · 20 · 99 · 15.4K
Hugging Face reposted
Sayak Paul
Sayak Paul@RisingSayak·
Last year, I got to collaborate on a number of serious projects at the intersection of Diffusers x optimization ⚡️

First, NONE of them were bootstrapped with any AI agents but pure domain knowledge and expertise. So, besides just feeling good, it's also very reassuring to me to know how important those two traits are.

Now, coming to the projects that I think are worth mentioning:
* `flux-fast`: Showing a combination of `torch.compile` + unscaled FP8 FA3 + no CPU-GPU sync + dynamic FP8 is great for accelerating Flux.1-*. github.com/huggingface/fl…
* `torch.compile` x Diffusers: What does it take to get the most out of `torch.compile` in Diffusers across different user workloads? pytorch.org/blog/torch-com…
* `lora-fast`: How to hotswap LoRAs into compiled models without incurring (slow) recompilation issues? How to set it up for success? github.com/huggingface/lo…
* `zerogpu-brrr`: How to optimize a ZeroGPU HF Space with AOT + FA3 and other goodies? This helps save 💰 and improve the user experience of your ZeroGPU applications. huggingface.co/blog/zerogpu-a…

Hopefully, this will make you realize there's still a LOT that you can do (preferably pairing with AI) if you're curious and deeply invested in stuff you care about.
3 · 11 · 75 · 12K
Hugging Face reposted
Lewis Tunstall
Lewis Tunstall@_lewtun·
You can now pretrain LLMs entirely on the HF Hub 💥 Last week, @OpenAI launched a competition to see who can pretrain the best LLM in under 10 minutes. So over the weekend, I made a little demo to automate this end-to-end using the Hub as the infra layer:
- Jobs to scale compute
- Buckets to store all experiments
- Trackio to log all the metrics
The cool thing here is that everything is launched locally: no ssh shenanigans into a cluster or fighting with colleagues over storage and GPUs ⚔️
All that's left is coming up with new ideas, but luckily Codex can automate that part too 😁 Can I have a job now please @reach_vb 🙏?
14 · 40 · 240 · 66K
Hugging Face reposted
Muratcan Koylan
Muratcan Koylan@koylanai·
If you're building anything in AI, the best skill you need to be using right now is hugging-face-paper-pages. Whatever problem you're facing, someone has probably already published a paper about it. HF's Papers API gives a hybrid semantic search over AI papers.

I wrote an internal skill, context-research, that orchestrates the HF Papers API into a research pipeline. It runs five parallel searches with keyword variants, triages by relevance and recency, fetches full paper content as markdown, then reads the actual methodology and results sections. The skill also chains into a deep research API that crawls the broader web to complement the academic findings.

The gap between "a paper was published" and "a practitioner applies the insight" is shrinking, and I think this is a practical way to provide relevant context to coding agents. So you should write a skill on top of the HF Paper skill that teaches the model how to think about research, not just what to search for.
45 · 148 · 1.6K · 93.6K
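The pipeline described above (parallel keyword variants, triage by relevance and recency) can be sketched without touching the network. The Papers API endpoint and response fields are not given in the post, so this sketch implements only the orchestration logic over already-fetched records; `query_variants`, `triage`, and the record shape are hypothetical, and the recency half-life is an arbitrary choice.

```python
from datetime import date

def query_variants(topic, synonyms):
    """Expand one topic into several keyword variants to search in parallel."""
    return [topic] + [f"{topic} {s}" for s in synonyms]

def triage(papers, today, half_life_days=180):
    """Rank papers by match quality discounted by age: a paper's score
    halves every `half_life_days` since publication."""
    def score(p):
        age_days = (today - p["published"]).days
        recency = 0.5 ** (age_days / half_life_days)
        return p["relevance"] * recency
    return sorted(papers, key=score, reverse=True)

if __name__ == "__main__":
    # Fabricated records standing in for search results.
    papers = [
        {"title": "classic survey", "relevance": 1.0, "published": date(2023, 1, 1)},
        {"title": "fresh method", "relevance": 0.8, "published": date(2025, 1, 1)},
    ]
    for p in triage(papers, date(2025, 2, 1)):
        print(p["title"])
```

The exponential decay is one way to encode the post's "relevance and recency" trade-off: a slightly weaker match from last month can outrank a perfect match from two years ago.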
Hugging Face reposted
AK
AK@_akhaliq·
Protected Spaces with Public URLs now available on Hugging Face

You can now set your Spaces to protected, making them private on Hugging Face while still keeping their URL publicly accessible.

This is useful for deploying production-ready demos or internal tools without exposing model weights, prompts, or proprietary logic.

Combined with custom domains you can also host your website on Hugging Face.

changelog: huggingface.co/changelog/prot…
7 · 4 · 33 · 16.2K
Hugging Face reposted
Prithiv Sakthi
Prithiv Sakthi@prithivMLmods·
Map-Anything v1 (Universal Feed-Forward Metric 3D Reconstruction) demo is now available on Hugging Face Spaces. Built with @Gradio and integrated with @rerundotio, it performs multi-image and video-based 3D reconstruction, depth and normal map estimation, and interactive measurements.
11 · 60 · 405 · 46K
Hugging Face reposted
Zixuan Li
Zixuan Li@ZixuanLi_·
Don't panic. GLM-5.1 will be open source.
260 · 415 · 7.5K · 816.5K
Hugging Face reposted
Lee Robinson
Lee Robinson@leerob·
Yep, Composer 2 started from an open-source base! We will do full pretraining in the future. Only ~1/4 of the compute spent on the final model came from the base, the rest is from our training. This is why evals are very different. And yes, we are following the license through our inference partner terms.
Fynn@fynnso

was messing with the OpenAI base URL in Cursor and caught this: accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast. So Composer 2 is just Kimi K2.5 with RL. At least rename the model ID

359 · 202 · 2.8K · 1.4M
Hugging Face reposted
clem 🤗
clem 🤗@ClementDelangue·
Looks like it’s confirmed Cursor’s new model is based on Kimi! It reinforces a couple of things:
- open-source keeps being the greatest competition enabler
- another validation for Chinese open-source, which is now the biggest force shaping the global AI stack
- the frontier is no longer just about who trains from scratch, but who adapts, fine-tunes, and productizes fastest (seeing the same thing with OpenClaw for example).
Lee Robinson@leerob

Yep, Composer 2 started from an open-source base! We will do full pretraining in the future. Only ~1/4 of the compute spent on the final model came from the base, the rest is from our training. This is why evals are very different. And yes, we are following the license through our inference partner terms.

57 · 122 · 1.1K · 149.2K
Hugging Face reposted
Ben Burtenshaw
Ben Burtenshaw@ben_burtenshaw·
cancel your weekend and come fix open source! you can train, build, eval a solution to deal with ai slop in open source repos.

icymi, most major os repos are drowning in ai generated prs and issues. it's coming from multiple angles:
- well intentioned contributors scaling too fast
- students trying out ai tools and not knowing best practices
- rampant bots trying to get anything merged

we need a solution that allows already resource constrained maintainers to carry on doing their work, without limiting genuine contributors and/or real advancements in ai coding. let's build something that scales and enables folk to contribute more. we don't want to pull up the drawbridge.

I made this dataset and pipeline from all the issues and PRs on transformers. It's updated hourly so you can get the latest versions.
5 · 10 · 64 · 14K
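One way to approach the triage problem described above is a cheap heuristic score over PR metadata that flags likely low-effort submissions for maintainer review without auto-closing anything. Every signal and weight below is an illustrative guess, not derived from the dataset in the post, and the field names are hypothetical.

```python
def slop_score(pr):
    """Heuristic score in [0, 1]; higher means more likely low-effort
    AI output. Signals and weights are illustrative guesses only."""
    score = 0.0
    # Brand-new accounts submitting large PRs are a common bot pattern.
    if pr.get("account_age_days", 9999) < 7:
        score += 0.3
    if pr.get("files_changed", 0) > 30:
        score += 0.2
    # Stock phrases that often appear in generated PR descriptions.
    body = pr.get("body", "").lower()
    boilerplate = ("as an ai", "this pr addresses", "comprehensive solution")
    score += 0.2 * sum(phrase in body for phrase in boilerplate)
    # PRs with no linked issue skip the discussion maintainers rely on.
    if not pr.get("linked_issue", False):
        score += 0.1
    return min(score, 1.0)

if __name__ == "__main__":
    suspicious = {
        "account_age_days": 2,
        "files_changed": 40,
        "body": "This PR addresses the issue with a comprehensive solution.",
        "linked_issue": False,
    }
    print(f"slop score: {slop_score(suspicious):.2f}")
```

A score like this is only a routing signal: high scorers go to a review queue rather than being rejected, which matches the post's goal of protecting maintainers without pulling up the drawbridge on genuine contributors.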
Hugging Face reposted
Leandro von Werra
Leandro von Werra@lvwerra·
Excited to release: AgentUI
> a fresh chat interface - natively multi-agent
> agents coordinate via reports and figures
> plug+play any open/closed model as sub-agent
> agents specialise in code, web search, multimodal...
Try it here: huggingface.co/spaces/lvwerra…
12 · 42 · 204 · 38.6K