Hugging Face

13.1K posts

@huggingface

The AI community building the future. https://t.co/TpiXQMQ9rZ

NYC and Paris and 🌏 · Joined September 2016
220 Following · 648.4K Followers
Hugging Face reposted
Muratcan Koylan @koylanai
If you're building anything in AI, the best skill to be using right now is hugging-face-paper-pages. Whatever problem you're facing, someone has probably already published a paper about it. HF's Papers API gives a hybrid semantic search over AI papers.

I wrote an internal skill, context-research, that orchestrates the HF Papers API into a research pipeline. It runs five parallel searches with keyword variants, triages by relevance and recency, fetches full paper content as markdown, then reads the actual methodology and results sections. The skill also chains into a deep research API that crawls the broader web to complement the academic findings.

The gap between "a paper was published" and "a practitioner applies the insight" is shrinking, and I think this is a practical way to provide relevant context to coding agents. So you should write a skill on top of the HF Papers skill that teaches the model how to think about research, not just what to search for.
[image]
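The search-and-triage front half of the pipeline described in the tweet can be sketched against the public Papers search endpoint. This is a minimal sketch, not Koylan's actual context-research skill: `https://huggingface.co/api/papers/search` is the Papers API's search endpoint, but the variant templates, the recency-only triage heuristic, and the `publishedAt` field name used here are illustrative assumptions.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

SEARCH = "https://huggingface.co/api/papers/search"

def keyword_variants(query: str) -> list[str]:
    """Expand one problem statement into five search phrasings,
    mirroring the skill's five parallel searches (templates are
    illustrative assumptions, not the skill's actual ones)."""
    templates = ["{q}", "{q} survey", "{q} benchmark",
                 "efficient {q}", "{q} evaluation"]
    return [t.format(q=query) for t in templates]

def search_papers(query: str, limit: int = 10) -> list[dict]:
    """One hybrid semantic search against the HF Papers API."""
    with urlopen(SEARCH + "?" + urlencode({"q": query})) as resp:
        return json.load(resp)[:limit]

def triage(papers: list[dict], top_k: int = 5) -> list[dict]:
    """Stand-in for the relevance-and-recency triage step: rank by
    the assumed 'publishedAt' timestamp, newest first."""
    return sorted(papers, key=lambda p: p.get("publishedAt", ""),
                  reverse=True)[:top_k]
```

In the skill described above, the triaged papers' full content would then be fetched as markdown and read; this sketch covers only searching and triage.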
37 replies · 106 reposts · 1.2K likes · 65.3K views

Hugging Face reposted
AK @_akhaliq
Protected Spaces with public URLs are now available on Hugging Face. You can now set your Spaces to protected, making them private on Hugging Face while still keeping their URL publicly accessible.

This is useful for deploying production-ready demos or internal tools without exposing model weights, prompts, or proprietary logic. Combined with custom domains, you can also host your website on Hugging Face.

Changelog: huggingface.co/changelog/prot…
[image]
6 replies · 4 reposts · 28 likes · 13.2K views

Hugging Face reposted
Prithiv Sakthi @prithivMLmods
Map-Anything v1 (Universal Feed-Forward Metric 3D Reconstruction) demo is now available on Hugging Face Spaces. Built with @Gradio and integrated with @rerundotio, it performs multi-image and video-based 3D reconstruction, depth and normal-map estimation, and interactive measurements.
9 replies · 60 reposts · 399 likes · 44.7K views

Hugging Face reposted
Zixuan Li @ZixuanLi_
Don't panic. GLM-5.1 will be open source.
259 replies · 413 reposts · 7.5K likes · 811.4K views

Hugging Face reposted
Lee Robinson @leerob
Yep, Composer 2 started from an open-source base! We will do full pretraining in the future. Only ~1/4 of the compute spent on the final model came from the base, the rest is from our training. This is why evals are very different. And yes, we are following the license through our inference partner terms.
Fynn @fynnso

Was messing with the OpenAI base URL in Cursor and caught this: accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast. So Composer 2 is just Kimi K2.5 with RL. At least rename the model ID.

359 replies · 204 reposts · 2.8K likes · 1.4M views

Hugging Face reposted
clem 🤗 @ClementDelangue
Looks like it's confirmed: Cursor's new model is based on Kimi! It reinforces a couple of things:
- open source keeps being the greatest competition enabler
- another validation for Chinese open source, which is now the biggest force shaping the global AI stack
- the frontier is no longer just about who trains from scratch, but who adapts, fine-tunes, and productizes fastest (seeing the same thing with OpenClaw, for example)
Lee Robinson @leerob

Yep, Composer 2 started from an open-source base! We will do full pretraining in the future. Only ~1/4 of the compute spent on the final model came from the base, the rest is from our training. This is why evals are very different. And yes, we are following the license through our inference partner terms.

57 replies · 121 reposts · 1.1K likes · 147.8K views

Hugging Face reposted
Ben Burtenshaw @ben_burtenshaw
cancel your weekend and come fix open source! you can train, build, and eval a solution to deal with AI slop in open-source repos.

icymi, most major open-source repos are drowning in AI-generated PRs and issues. it's coming from multiple angles:
- well-intentioned contributors scaling too fast
- students trying out AI tools and not knowing best practices
- rampant bots trying to get anything merged

we need a solution that lets already resource-constrained maintainers carry on doing their work, without limiting genuine contributors and/or real advancements in AI coding. let's build something that scales and enables folks to contribute more. we don't want to pull up the drawbridge.

I made this dataset and pipeline from all the issues and PRs on transformers. It's updated hourly so you can get the latest versions.
[image]
5 replies · 10 reposts · 63 likes · 13.7K views

Hugging Face reposted
Leandro von Werra @lvwerra
Excited to release: AgentUI
> a fresh chat interface - natively multi-agent
> agents coordinate via reports and figures
> plug+play any open/closed model as sub-agent
> agents specialise in code, web search, multimodal...
Try it here: huggingface.co/spaces/lvwerra…
11 replies · 42 reposts · 204 likes · 38.1K views

Hugging Face reposted
Sudo su @sudoingX
this guy has 29 models on huggingface at page 2 ranking. no lab behind him. no sponsorship. $2,000 from his own pocket on GPU rentals. he compressed GLM-4.7 to run on a MacBook and quantized Nemotron Super the week it dropped. all public. all free.

nvidia is a trillion-dollar company with hundreds of teams, but they are not the ones quantizing models in the middle of the night and pushing them out before sunrise. if nvidia stopped tomorrow, their employees stop working. people like @0xSero would not. that is the difference between a paycheck and a mission.

@NVIDIAAI you talk about making AI accessible. the people actually doing it are right here. 29 models deep, burning their own compute, with no ask except more hardware to keep going. you do not need to build another program. just look at who is already building for you. one GPU to this man would produce more public value than a hundred internal sprints. i am not asking for charity. i am asking you to invest in someone who already proved it.
[image]
0xSero @0xSero

Putting out a wish to the universe. I need more compute; if I can get more, I will make sure every machine from a small phone to a bootstrapped RTX 3090 node can run frontier intelligence fast with minimal intelligence loss. I have hit page 2 of huggingface, released 3 model-family compressions, and got GLM-4.7 on a MacBook: huggingface.co/0xsero

My beast just isn't enough, and I already spent 2k USD on renting GPUs on top of credits provided by Prime Intellect and Hotaisle.

If you believe in what I do, help me get this to Nvidia; maybe they will bless me with the power to keep making local AI more accessible 🙏

181 replies · 1.1K reposts · 12.5K likes · 746.8K views

Hugging Face reposted
clem 🤗 @ClementDelangue
Talked with @dee_bosa @CNBC about @nvidia and everything open-source AI! Some key points:
- Nvidia is the new American open-source AI king
- 30% of the Fortune 500 are using Hugging Face, and our goal is to get to the majority of them by the end of the year
- Agents will be much more open-source-based than chatbots (e.g. OpenClaw)
- Agents empower all to train, fine-tune, and run their own models based on open source
- We crossed 15M AI builders on HF and hope to have as many agents using the platform by the end of the year. Agents are the new users and customers of tech platforms
14 replies · 18 reposts · 119 likes · 29.9K views

Hugging Face reposted
Xenova @xenovacom
Not enough people are talking about NVIDIA's new Nemotron-3-Nano (4B) model! 🤯 Hybrid Mamba + Attention architecture, designed as a unified model for reasoning and non-reasoning tasks. So small and efficient, it can run 100% locally in your web browser at 75 tokens per second.
19 replies · 64 reposts · 468 likes · 48.6K views

Hugging Face reposted
Maziyar PANAHI @MaziyarPanahi
OpenMed is the #1 most-referenced organization powering open-source AI research on Hugging Face, according to the official State of Open Source report, Spring 2026. I started this 8 months ago.
[image]
9 replies · 9 reposts · 79 likes · 18.3K views

Hugging Face reposted
clem 🤗 @ClementDelangue
Nvidia just crossed Google as the biggest org on @huggingface with 3,881 team members on the hub. I'm officially calling it: Nvidia is the new American king of open-source AI!
[image]
47 replies · 92 reposts · 822 likes · 129.3K views

Hugging Face reposted
DailyPapers @HuggingPapers
Introducing the Daily Papers SKILL.md
Enables agents to:
> read paper content as markdown
> search papers
> find linked @huggingface models and datasets
> fetch the papers API
> and more!
Link below ⬇️
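The capabilities listed map onto public Hub endpoints. A minimal sketch, under assumptions: the `api/daily_papers` endpoint and the `huggingface.co/papers/<id>` page pattern are real, but the response field names used here are inferred from the API's usual shape, not taken from the SKILL.md itself.

```python
import json
from urllib.request import urlopen

def daily_papers(limit: int = 5) -> list[dict]:
    """Fetch today's featured papers from the Daily Papers API."""
    with urlopen("https://huggingface.co/api/daily_papers") as resp:
        return json.load(resp)[:limit]

def paper_page(entry: dict) -> str:
    """Hub page URL for one daily-papers entry; the nested
    'paper.id' field is assumed to hold the arXiv identifier."""
    return "https://huggingface.co/papers/" + entry["paper"]["id"]
```

An agent skill would layer on top of this: fetch the entries, pick relevant ones, then read each paper page's content.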
9 replies · 38 reposts · 269 likes · 32.8K views

Hugging Face reposted
Daniel van Strien @vanstriendaniel
Is olmOCR-bench getting close to saturation? The top score is now 85.9%. Yesterday @datalabto took #1 with chandra-ocr-2. A year ago, the best was 79%. Visualised the race to get there using @huggingface leaderboard data.
7 replies · 11 reposts · 59 likes · 16.4K views

Hugging Face reposted
Nathan @nathanhabib1011
NEW SOTA OCR MODEL DROPPED
Congrats to @VikParuchuri and team for releasing Chandra OCR 2!
- 85.9% on olmOCR-bench, taking first place 🏆
- 90+ languages supported
- 4B model
- Full layout information
- Extracts + captions images and diagrams
- Strong handwriting, math, form, and table support
Compare every OCR model on the hub and choose the one adapted to your needs 👇
[image]
8 replies · 41 reposts · 392 likes · 34.3K views

Hugging Face reposted
Mistral AI for Developers
🔥 Meet Mistral Small 4: One model to do it all.
⚡ 128 experts, 119B total parameters, 256k context window
⚡ Configurable reasoning
⚡ Apache 2.0
⚡ 40% faster, 3x more throughput
Our first model to unify the capabilities of our flagship models into a single, versatile model.
[image]
90 replies · 328 reposts · 2.6K likes · 377.5K views