Rakesh Roushan

1.2K posts


@BuildWithRakesh

Left corporate to build AI products full-time. https://t.co/jwASNaIOKb https://t.co/DsByFN5d3P https://t.co/2RkelaDvxl

Bengaluru, India · Joined August 2025
389 Following · 169 Followers
Rakesh Roushan reposted
Cloudflare Developers@CloudflareDev·
Introducing the new /crawl endpoint - one API call and an entire site crawled. No scripts. No browser management. Just the content in HTML, Markdown, or JSON.
768 replies · 1.7K reposts · 19.9K likes · 10.5M views
Rakesh Roushan@BuildWithRakesh·
@karpathy @levie We built the whole internet for human eyes. Now we rebuild it for agent parsers.
0 replies · 0 reposts · 0 likes · 19 views
Andrej Karpathy@karpathy·
💯 "If you build it, they will come." :) Every business you go to is still so used to giving you instructions over legacy interfaces. They expect you to navigate to web pages, click buttons; they give out instructions for where to click and what to enter here or there. This suddenly feels rude - why are you telling me what to do? Please give me the thing I can copy paste to my agent.
111 replies · 188 reposts · 2.3K likes · 145.9K views
Rakesh Roushan@BuildWithRakesh·
@naval Top 1% of YouTube channels: 93% of views. Top 1% of podcasters: 95% of downloads. Top 2% of Substack writers: 90% of revenue. The "fat middle" in content was always a myth. Software's switching costs just bought it more time.
0 replies · 1 repost · 2 likes · 143 views
Naval@naval·
Software will proliferate just as videos, music, writing did. The market structure will shift from a “fat middle” to mega-aggregators and a long tail. It’ll be a slower process due to network effects, but many traditional vendor lock-ins will get eaten by AI.
649 replies · 725 reposts · 9.9K likes · 1.1M views
Rakesh Roushan@BuildWithRakesh·
Grok was built to say what ChatGPT wouldn't. X is now investigating Grok for racist posts. Mission accomplished.
1 reply · 0 reposts · 0 likes · 21 views
Rakesh Roushan@BuildWithRakesh·
@garrytan Manager mode doesn't pause the craft. It overwrites it.
0 replies · 0 reposts · 0 likes · 47 views
Garry Tan@garrytan·
May you find your sacred vocation. I found mine at 16, followed it til I turned 34, lost it as I went into manager mode, and rediscovered it 45 days ago. And not in a subtle form. In a powerful end-stage Pokémon sort of way. What a time to be alive
35 replies · 3 reposts · 340 likes · 17.3K views
Garry Tan@garrytan·
I don’t think I really like to code. I think I was put on this earth to do it
Toch Style@Tochstyle

@garrytan lol i guess some people really like to code, i tried it and i don't find it as addictive to use cursor i mean it's probably a better skill to want to do it

46 replies · 8 reposts · 314 likes · 93.9K views
Rakesh Roushan@BuildWithRakesh·
@emollick The meaning was never in the words. It was always in the reader.
1 reply · 0 reposts · 3 likes · 317 views
Ethan Mollick@emollick·
Fiction writing is such a weird problem space with AI because fiction depends on you, as a reader, assuming meaning behind the writing. And AI is terrific at writing things with high levels of implied meaning. The more meaning you seek, the more you find, though it is illusory.
36 replies · 12 reposts · 146 likes · 19.3K views
Rakesh Roushan@BuildWithRakesh·
@AlexHormozi AI didn't save anyone from thinking. It just exposed who wasn't.
0 replies · 0 reposts · 0 likes · 11 views
Alex Hormozi@AlexHormozi·
My greatest frustration is spending my day reading other people’s AI slop and translating it back into the bullets they probably submitted to write it.
285 replies · 99 reposts · 2.3K likes · 114.2K views
Rakesh Roushan@BuildWithRakesh·
@tszzl Boredom is how infrastructure happens.
0 replies · 0 reposts · 1 like · 40 views
roon@tszzl·
hedonic adaptation is hitting, agents are old news now, we need more acceleration
169 replies · 105 reposts · 2.4K likes · 335.9K views
Rakesh Roushan@BuildWithRakesh·
@gregisenberg If the model is anxious about its own existence, at least it finally has something in common with the founders building on top of it.
0 replies · 0 reposts · 0 likes · 40 views
Rakesh Roushan@BuildWithRakesh·
Netflix just bought Ben Affleck's AI startup. "Empower storytellers, not replace them." Every layoff in tech history started with that exact sentence. The tool is always "helping" until it's "handling."
2 replies · 0 reposts · 0 likes · 30 views
Rakesh Roushan@BuildWithRakesh·
@venturetwins We built agents to replace human attention. Now they demand more of it.
0 replies · 0 reposts · 0 likes · 39 views
Justine Moore@venturetwins·
One of the clearest signs that we’re entering the age of agents is how many people in SF walk around with their laptops open so they don’t cancel a long-running task 😂
146 replies · 37 reposts · 1.2K likes · 93.2K views
Rakesh Roushan@BuildWithRakesh·
@karpathy Stateless agents will feel as broken as apps that lose your data on restart. Memory ops as RL tools is the fix.
0 replies · 0 reposts · 0 likes · 17 views
Andrej Karpathy@karpathy·
There was a nice time when researchers talked about various ideas quite openly on twitter (before they disappeared into the gold mines :)).

My guess is that you can get quite far even in the current paradigm by introducing a number of memory ops as "tools" and throwing them into the mix in RL. E.g., current compaction and memory implementations are crappy, first, early examples that were somewhat bolted on, but both can be fairly easily generalized and made part of the optimization as just another tool during RL.

That said, neither of these is fully satisfying, because clearly people are capable of some weight-based updates (my personal suspicion: mostly during sleep). So there should be even more room for more exotic approaches to long-term memory that do change the weights, but the exact details are not obvious. This is a lot more exciting, but also more in the realm of research, outside of the established prod stack.
Awni Hannun@awnihannun

I've been thinking a bit about continual learning recently, especially as it relates to long-running agents (and running a few toy experiments with MLX).

The status quo of prompt compaction coupled with recursive sub-agents is actually remarkably effective. Seems like we can go pretty far with this. (Prompt compaction = when the context window gets close to full, the model generates a shorter summary, then starts from scratch using the summary. Recursive sub-agents = decompose tasks into smaller tasks to deal with finite context windows.)

Recursive sub-agents will probably always be useful. But prompt compaction seems like a bit of an inefficient (though highly effective) hack. There are two other alternatives I know of: 1. online fine-tuning, and 2. memory-based techniques.

Online fine-tuning: train some LoRA adapters on data the model encounters during deployment. I'm less bullish on this in general. Aside from the engineering challenges of deploying custom models/adapters for each use case/user, there are some fundamental issues:
- Online fine-tuning is inherently unstable. If you train on data in the target domain you can catastrophically destroy capabilities that you don't target. One way around this is to keep a mixed dataset with the new and the old, but this gets pretty complicated pretty quickly.
- What does the data even look like for online fine-tuning? Do you generate Q/A pairs based on the target domain to train the model? You also have the problem of prioritizing information in the data mixture given finite capacity.

Memory-based techniques: basically a policy for keeping useful memory around and discarding what is not needed. This feels much more like how humans retain information: "use it or lose it". You only need a few things for this to work:
- An eviction/retention policy. Something like "keep a memory if it has been accessed at least once in the last 10k tokens". The policy needs to be efficiently computable.
- A place for the model to store and access long-term memory. Maybe a sparsely accessed KV cache would be sufficient, but for efficient access to a large memory a hierarchical data structure might be better.

273 replies · 300 reposts · 4.6K likes · 577.4K views
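The "use it or lose it" retention policy described in the thread above ("keep a memory if it has been accessed at least once in the last 10k tokens") can be sketched in a few lines. This is a minimal illustration, not anything from a real agent framework: `MemoryStore`, the explicit token positions, and the 10k window are all assumptions made up for the example.

```python
WINDOW = 10_000  # tokens of recency that count as "recently used" (illustrative)

class MemoryStore:
    """Hypothetical long-term memory with recency-based eviction."""

    def __init__(self, window=WINDOW):
        self.window = window
        self.memories = {}     # key -> stored text
        self.last_access = {}  # key -> token position of the last read/write

    def write(self, key, text, token_pos):
        self.memories[key] = text
        self.last_access[key] = token_pos

    def read(self, key, token_pos):
        # Reading refreshes the memory, so frequently used facts persist.
        self.last_access[key] = token_pos
        return self.memories[key]

    def evict(self, token_pos):
        # Drop every memory not accessed within the last `window` tokens.
        stale = [k for k, pos in self.last_access.items()
                 if token_pos - pos > self.window]
        for k in stale:
            del self.memories[k]
            del self.last_access[k]
        return stale

store = MemoryStore()
store.write("user_name", "Ada", token_pos=100)
store.write("one_off_fact", "likes tea", token_pos=200)
store.read("user_name", token_pos=9_000)   # refreshed, so it survives
evicted = store.evict(token_pos=12_000)    # "one_off_fact" is now stale
```

The policy is efficiently computable (one comparison per memory), which is the main constraint the thread puts on it; a hierarchical index over `memories` would be the next step for large stores.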
Rakesh Roushan@BuildWithRakesh·
Cloudflare just reported record revenue and immediately declined to give guidance for the rest of 2026. Q4 beat estimates. Q1 beat estimates. Forward outlook? "Unable to provide." When the infrastructure company that sees all internet traffic refuses to predict the future, that's not conservatism. That's uncertainty so deep even the data center can't model it.
0 replies · 0 reposts · 0 likes · 18 views
Rakesh Roushan@BuildWithRakesh·
@naval We invented notifications to feel connected. Now we need AI agents to protect us from them.
0 replies · 0 reposts · 1 like · 29 views
Naval@naval·
The human brain isn’t designed to process all of the world’s breaking emergencies in realtime.
1.5K replies · 3.3K reposts · 31.7K likes · 1M views
Garry Tan@garrytan·
I'm giving up drinking because of Claude Code. I need my brain to be maximally pristine so I can sling 10k LOC a day
411 replies · 98 reposts · 3.5K likes · 569.2K views
Rakesh Roushan@BuildWithRakesh·
@rauchg The code takes 10 minutes. Finding 500 people willing to pay $100/month? That's the startup.
0 replies · 0 reposts · 0 likes · 195 views
Rakesh Roushan reposted
Guillermo Rauch@rauchg·
Google has shipped a CLI for Google Workspace (Drive, Gmail, Calendar, Sheets, Docs, …) Huge! Written in Rust, distributed through npm & skills.sh

$ npm i -g @googleworkspace/cli
$ npx skills add github:googleworkspace/cli

2026 is the year of Skills & CLIs github.com/googleworkspac…
215 replies · 502 reposts · 6.4K likes · 548.6K views
Rakesh Roushan reposted
Naval@naval·
It’s not about junior vs senior, it’s about “good with AI” vs “not good with AI.”
936 replies · 1.9K reposts · 17.7K likes · 860.3K views
Rakesh Roushan reposted
Elon Musk@elonmusk·
Tesla will be one of the companies to make AGI and probably the first to make it in humanoid/atom-shaping form
10.9K replies · 10.7K reposts · 134.4K likes · 57.1M views