Sumuk
1.3K posts

@sumukx
resident @PrimeIntellect | prev @huggingface | uiuc phd


Catch the high-energy GTC panel with top NVIDIA researchers, hosted by Károly Zsolnai-Fehér of @TwoMinutePapers, now available on YouTube. 📹 nvda.ws/4m9jIbo Hold on to your papers, fellow scholars! 🙌 They dive into the latest breakthroughs in AI, spotlight the most promising emerging technical trends, and candidly explore the biggest open challenges facing the field today.
Sanja Fidler | VP, AI Research
Yejin Choi | Sr. Research Director
Károly Zsolnai-Fehér | Researcher and Founder | Two Minute Papers
Yashraj Narang | Sr. Robotics Research Manager
Marco Pavone | Sr. Research Director


if you’re a big SaaS company that wants to do RL on Chinese models and spend 10x less than API costs with the domestic big labs, my DMs are open 🙏

Candidates interviewing on weekends are the biggest green flag.


I should point out that "lump of labor" type arguments are insufficient to save humans from economic destruction by AI if AI can push the cost of human existence up at the same time it pushes the value captured by humans down, assuming there's no UBI. If UBI is the only way for humans to survive, there can be a long-term dysgenic Malthusian competition for access to it, so in the long run the only humans who survive are some kind of human vegetables. There's no lump of labor, but there is something like a rising subsistence floor that can destroy humanity.


LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server and self-replicate. Link below.
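The attack vector described here is Python's `.pth` startup hook: any line in a `.pth` file under site-packages that begins with `import` is executed by `site.py` at every interpreter startup. A minimal audit sketch for checking your own environment follows; the pattern it looks for (an `import` line referencing `base64`) is an assumption inferred from the tweet, not a confirmed indicator of compromise from an official advisory.

```python
# Hypothetical audit sketch (not official LiteLLM tooling): list .pth files
# in site-packages whose executable "import" lines mention base64, since
# such lines run automatically at interpreter startup via site.py.
import pathlib
import site


def find_suspicious_pth():
    hits = []
    # User site dir plus all global/venv site-packages dirs.
    dirs = site.getsitepackages() + [site.getusersitepackages()]
    for d in dirs:
        root = pathlib.Path(d)
        if not root.is_dir():
            continue
        for p in root.glob("*.pth"):
            text = p.read_text(errors="ignore")
            for line in text.splitlines():
                # Only lines starting with "import" are executed by site.py.
                if line.startswith("import") and "base64" in line:
                    hits.append(p)
                    break
    return hits


if __name__ == "__main__":
    for p in find_suspicious_pth():
        print("suspicious .pth:", p)
```

An empty result is not proof of safety (obfuscation need not involve the literal string `base64`); pinning dependencies and avoiding blind upgrades remains the safer response.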


On our way to OpenAI!

Wait, Vercel has lower "ARR" than Lovable? Is this real?

OK Codex is the GOAT at finding bugs and plan errors



We're sharing a new method for scoring models on agentic coding tasks. Here's how models in Cursor compare on intelligence and efficiency:

they made their "absolutely" theme claude coloured lmao








