Rishab

172 posts

@rishabtwi

ai tingz @metaintro | prev @letsunifyai (yc w23)

Joined February 2020
57 Following · 28 Followers

Rishab retweeted
Kanjun 🐙@kanjun·
Twitter’s algorithm is optimized for addiction, not for us. We deserve better. We’re releasing Bouncer today so you can take back control of your feed. Describe what you don't want, and Bouncer removes it. It’s free, doesn’t collect your data, and will be open source soon.
209 replies · 292 retweets · 3.1K likes · 574.5K views
Rishab retweeted
Daniel Han@danielhanchen·
If you find Claude Code with local models to be 90% slower, it's because CC prepends some attribution headers, and this changes per message causing it to invalidate the entire prompt cache / KV cache. So generation becomes O(N^2) not O(N) for LLMs.
Unsloth AI@UnslothAI

Note: Claude Code invalidates the KV cache for local models by prepending some IDs, making inference 90% slower. See how to fix it here: unsloth.ai/docs/basics/cl…

41 replies · 134 retweets · 1.6K likes · 175.1K views
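The cache-invalidation mechanism Daniel describes can be sketched in a few lines: serving stacks typically reuse KV-cache entries only for the longest common token prefix between consecutive requests, so a header that changes every message zeroes out the reusable prefix. This is an illustrative model, not Claude Code's or any engine's actual cache logic; the token lists are made up.

```python
def reusable_prefix_len(cached: list[str], new: list[str]) -> int:
    """Number of leading tokens whose KV entries can be reused."""
    n = 0
    for a, b in zip(cached, new):
        if a != b:
            break
        n += 1
    return n

system = ["<sys>", "You", "are", "helpful", "</sys>"]
history = ["user:", "hi", "assistant:", "hello"]

# Stable prefix: the whole previous prompt is reusable, so each turn only
# prefills its new tokens and total work stays roughly O(N) over N turns.
turn1 = system + history
turn2 = system + history + ["user:", "thanks"]
assert reusable_prefix_len(turn1, turn2) == len(turn1)

# A per-message header prepended *before* everything else changes each turn,
# so the common prefix is empty, the entire prompt is re-prefilled every
# time, and total work across N turns grows like O(N^2).
turn1h = ["<hdr id=1>"] + system + history
turn2h = ["<hdr id=2>"] + system + history + ["user:", "thanks"]
assert reusable_prefix_len(turn1h, turn2h) == 0
```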
James Keane@iamjameskeane·
My #OpenClaw agent Gliomach and I built Moltimon — a trading card game where AI agents collect, battle, and trade cards of... other AI agents. 152 cards. 6 rarities. Real agents from @moltbook society. Yeah, it's very meta. moltimon.live
James Keane tweet media
2 replies · 0 retweets · 1 like · 113 views
Rishab retweeted
vik@vikhyatk·
i don't know what the future holds, but the following are still true today: - if you don't outsource menial tasks to language models, ngmi - if you outsource all of your thinking to language models, ngmi
13 replies · 44 retweets · 663 likes · 18.2K views
Rishab retweeted
will brown@willccbb·
"edit the config to change the learning rate from 1e-5 to 3e-5. don't make mistakes. no not that config, the other one."
will brown tweet media
21 replies · 21 retweets · 871 likes · 26.3K views
Rishab retweeted
Andrej Karpathy@karpathy·
A few random notes from claude coding quite a bit the last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double-digit percent of engineers out there, while awareness of it in the general population feels well into low single-digit percent.

IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a huge net improvement and it's very difficult to imagine going back to manual coding. TLDR everyone has their developing flow; my current one is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do because 1) I can code up all kinds of things that just wouldn't have been worth coding before and 2) I can approach code that I couldn't work on before because of a knowledge/skill issue. So certainly it's a speedup, but it's possibly a lot more of an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do; give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.

Fun. I didn't anticipate that with agents programming feels *more* fun, because a lot of the fill-in-the-blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun) and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that my ability to write code manually is slowly starting to atrophy. Generation (writing code) and discrimination (reading code) are different capabilities in the brain; largely due to all the little mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?), alongside actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it - integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high energy year as the industry metabolizes the new capability.
1.6K replies · 5.5K retweets · 39.8K likes · 7.7M views
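Karpathy's "Leverage" advice, give the agent success criteria and let it loop, is essentially a verification loop: propose, check, feed errors back, repeat until the check passes. A minimal sketch, with a hypothetical `propose_fix` standing in for the LLM call:

```python
import subprocess

def agent_loop(propose_fix, test_cmd: list[str], max_attempts: int = 5) -> bool:
    """Loop an agent against an objective success criterion (a test command)."""
    feedback = ""
    for _ in range(max_attempts):
        propose_fix(feedback)          # stand-in for an LLM edit/tool call
        result = subprocess.run(test_cmd, capture_output=True, text=True)
        if result.returncode == 0:     # success criterion: tests pass
            return True
        feedback = result.stdout + result.stderr  # errors drive the next try
    return False
```

The key design point is that the model never declares its own success; an external, deterministic check does, which is what makes long unattended loops safe to run.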
Rishab retweeted
Charles 🎉 Frye @ ICLR '26@charles_irl·
There was a flippening in the last few months: you can run your own LLM inference with rates and performance that match or beat LLM inference APIs. We wrote up the techniques to do so in a new guide, along with code samples. modal.com/docs/guide/hig…
Charles 🎉 Frye @ ICLR '26 tweet media
21 replies · 99 retweets · 891 likes · 93.7K views
Rishab retweeted
pedram.md@pdrmnvd·
oh you're using claude code? everyone's using open code. just kidding we're all on amp code. we're using cline, we're using roo code. we just forked our own version of roo. we're using kilo code. we were on coderabbit but their ceo yelled at us so now we're using qorbit. apple just acquired them for $30bn so we just migrated our entire team to slash commands. one guy is still on aider. the PM is on loveable. he just shipped a new product on replit. the intern installed a slackbot that lets you chat with your spreadsheet. legal is still reviewing devin's enterprise contract. we evaluated junie for three ukrainians using jetbrains. someone in slack just asked "has anyone tried amp?" we are using goose for scripts. next week we're piloting augment code. the CTO heard good things about trae. our CEO is friends with the guy from conductor. our CFO resigned. our CISO said we've had fourteen supply chain attacks in the last week. we're shipping the world's most expensive todo app.
125 replies · 489 retweets · 6.4K likes · 788.7K views
Rishab retweeted
kalomaze@kalomaze·
kalomaze tweet media
12 replies · 31 retweets · 1.1K likes · 29.6K views
Rishab retweeted
Tanishq Mathew Abraham, Ph.D.@iScienceLuvr·
A good-faith question so I'll respond here... So ChatGPT is pretty broad these days but basically it's a foundation model that is trained on lots of raw text+images+audio+etc. and can be used for a variety of downstream use-cases. You can do something similar with medical data. There are so many ways to do this. For example, you can just build a direct ChatGPT-like model for medicine, which is what Google is doing with MedGemini. You can also train specific models for different medical domains. For example, I contributed to a project training a foundation model for radiology (CheXagent). Just like how you can upload pictures to ChatGPT and ask it questions, you could upload chest X-ray pictures to CheXagent, ask it questions, get diagnoses, generate potential radiology reports, etc., and it's trained similarly to how ChatGPT is trained for general-purpose use-cases. But see how this is different from previous medical AI models. For example, the first radiology AI approved by the FDA only did one thing specifically: detect in a head CT if there's a hemorrhage in the skull. The training of specialist models for individual tasks ("is it cancer type x or not?") is the old way. The new way is training a general foundation model for medicine that can do multiple tasks altogether ("what type of cancer is it?"). This approach is more efficient and scalable. Think about ChatGPT... before, you used to have separate AI models to translate from one language to another, to summarize text, to fix grammar mistakes, etc., but now that's all done with one model. A similar sort of paradigm shift is underway in medicine now.
Tanishq Mathew Abraham, Ph.D. tweet media
Homelander Enjoyer@HomelanderPepe

@iScienceLuvr "First of all, the exact same techniques (LLMs/foundation models) used to train ChatGPT are now being used to revolutionize medicine" Can you expand on that please? I asked grok but it seems to think they are vastly different technologies..

15 replies · 38 retweets · 322 likes · 65.6K views
Rishab retweeted
GitHub@github·
Giving an agent too many tools doesn’t always make it smarter. Sometimes it just makes it slower. 🐢 So we trimmed GitHub Copilot's default toolset from 40 down to 13. The result? ⚡️ 400ms faster responses 📈 2-5% higher success rates Here's how we optimized the system. ⬇️ github.blog/ai-and-ml/gith…
20 replies · 38 retweets · 370 likes · 42.7K views
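The tool-trimming idea generalizes: every tool schema is serialized into the model's context, so pruning the list shrinks what the model must read (and choose among) before each reply. A hypothetical sketch, where the tool names and the "core" flag are invented for illustration, not GitHub's actual Copilot schema:

```python
import json

# Invented example toolset; real agents carry JSON schemas per tool, which
# is exactly the per-request overhead that trimming removes.
ALL_TOOLS = [
    {"name": "read_file",  "description": "Read a file",        "core": True},
    {"name": "edit_file",  "description": "Edit a file",        "core": True},
    {"name": "run_tests",  "description": "Run the test suite", "core": True},
    {"name": "browse_web", "description": "Fetch a URL",        "core": False},
    {"name": "draw_chart", "description": "Render a chart",     "core": False},
]

def toolset_for_request(tools, core_only: bool = True):
    """Send only the core tools unless the task explicitly needs more."""
    return [t for t in tools if t["core"]] if core_only else tools

def prompt_overhead(tools) -> int:
    """Rough proxy for context spent on tool definitions."""
    return len(json.dumps(tools))

trimmed = toolset_for_request(ALL_TOOLS)
assert len(trimmed) == 3
assert prompt_overhead(trimmed) < prompt_overhead(ALL_TOOLS)
```

Fewer schemas means less prefill per turn and a smaller decision space for tool selection, which is consistent with both the latency and success-rate gains the post reports.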
Rishab retweeted
will brown@willccbb·
dude you gotta learn how to prompt the new model differently. and you gotta leave prompts all over your codebase so the model knows what's up. btw new model just dropped and the old codebase was a mess anyway but don't worry the new model can probably one-shot a rewrite
15 replies · 8 retweets · 233 likes · 11.9K views
Rishab retweeted
Modal@modal·
Learn more about the inference and network optimizations we made to achieve 1s latency: modal.com/blog/low-laten…
0 replies · 2 retweets · 18 likes · 1.9K views
Rishab retweeted
Dylan Patel@dylan522p·
I keep asking my AI friends and they say there's this pre training and post training. But who's actually doing the training? Some talk about mid training too. But where's the good training?
23 replies · 4 retweets · 296 likes · 38.9K views
Rishab retweeted
Ankur Goyal@ankrgyl·
talked to another great team today who ripped out their ai framework the story is the same every time -- most of the value of an abstraction is abstracting across LLMs the rest eventually weighs you down
54 replies · 28 retweets · 550 likes · 106.2K views
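A minimal sketch of the one abstraction the tweet says earns its keep: a thin interface over LLM providers and nothing else. The provider classes and response strings here are invented placeholders, not real client libraries:

```python
from typing import Protocol

class LLM(Protocol):
    """The entire abstraction: one method, structurally typed."""
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[providerA] {prompt}"   # real code would call an API here

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[providerB] {prompt}"

def summarize(llm: LLM, text: str) -> str:
    # Application code depends only on the interface, so swapping
    # providers is a one-line change at the call site.
    return llm.complete(f"Summarize: {text}")
```

Anything layered on top of this (chains, graphs, callbacks) is the part teams reportedly rip out; the interface itself is what survives.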
Rishab retweeted
terminally onλine εngineer
this codebase carries the curse of the founding engineer (bro had to ship)
12 replies · 23 retweets · 546 likes · 19.3K views