sameel arif

3K posts

@endpoint

agents @browserbase

sf · Joined October 2020
293 Following · 3K Followers
Pinned Tweet
sameel arif@endpoint·
i’m moving to san francisco and joining @browserbase full-time. a little over a year ago, i had no "real" experience on my resume. after selling my last company and 400+ cold applications: nothing. one warm intro later, i hopped on a call with @pk_iv. shortly after, i joined as an intern. today, i’m graduating from that role - and coming back full-time. i’m deeply grateful for the opportunity, and especially for the risks people took on me - then and now. there are only a handful of category-defining companies: twilio, stripe, vercel… i believe browserbase is next. 2026 is our year. إن شاء الله (God willing)
sameel arif@endpoint·
3 trillion dollar business btw
sameel arif retweeted
hamza mostafa@hamostaf04·
for those of you that don't have a GPU handy to play around with, i built a small fork of the repo that lets your coding agent tinker and experiment on a cloud GPU via @modal sandboxes, with updated instructions in the README and program.md. link in comments. enjoy :)
Andrej Karpathy@karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly well-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually: you come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of what I do daily for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approximately 700 changes autonomously is wild. It really looked at the sequence of experiment results and used them to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I hadn't found them manually, and they stack up and actually improved nanochat. Among the bigger findings:

- It noticed an oversight that my parameterless QKnorm didn't have a scaler multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All the LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale, of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has a more efficient proxy metric, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
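The propose/evaluate/keep loop described above (try a change, measure validation loss, keep it only if it helps) can be sketched as a greedy search. Everything here is illustrative, not the actual autoresearch harness: `evaluate` is a toy stand-in for a real training run, and `propose` is a toy stand-in for the agent's idea generation.

```python
import random

def evaluate(config):
    # Toy stand-in for a training run's validation loss; a real setup
    # would launch the training script with `config` and parse the metric.
    return (config["lr"] - 0.02) ** 2 + (config["wd"] - 0.1) ** 2

def propose(config, rng):
    # Toy stand-in for the agent proposing a change: perturb one knob.
    key = rng.choice(list(config))
    candidate = dict(config)
    candidate[key] *= rng.uniform(0.5, 1.5)
    return candidate

def autoresearch(config, rounds=700, seed=0):
    # Greedy accept/reject loop: keep only changes that lower val loss.
    rng = random.Random(seed)
    best_loss = evaluate(config)
    accepted = []
    for _ in range(rounds):
        candidate = propose(config, rng)
        loss = evaluate(candidate)
        if loss < best_loss:
            config, best_loss = candidate, loss
            accepted.append(candidate)
    return config, best_loss, accepted

best_config, best_loss, log = autoresearch({"lr": 0.05, "wd": 0.3})
```

The real workflow replaces `propose` with an LLM that reads the history of accepted and rejected experiments before choosing the next one, which is what distinguishes it from blind random search.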

Johannes Koch@Lockhead·
@endpoint Nice picture, but if that's the only market, aren't you a bit limited?
sameel arif@endpoint·
they call it mount tam because you can see your total addressable market from there
Nebula@NebulaAI·
INTRODUCING: OUR BIGGEST UPDATE YET 🚨 You can now use 300+ models to create AI agents on Nebula. This is the best way to test the latest models in real-world use cases. Plus, Agents are more autonomous by default. Give Nebula a task, and it will spawn sub-agents and triggers to achieve your goal. Try today for free:
Blake@BlakeL58·
@endpoint You post a lot on this platform.
Linda Chen@linderps·
POV: feeding the birds at @browserbase. We’re hiring, I’ll feed u cookies
Muhib@muhibwqr·
muslims. we're 1/2 through ramadan. lock in. making du'as from the qur'an and sunnah should be simple, so i'm launching duaos.com. "And to Allah belong the best names, so invoke Him by them" (Qur'an 7:180).

du'aOS uses:
> Semantic search: your intention is matched to the 99 Names of Allah and verified hadith & quran
> Hybrid ranking: vector + keyword, so the right du'as surface
> AI refinement: turns that into a personalized du'a in a Prophetic style (bound by strict parameters that prevent random generation)

You get:
> Name of Allah
> Hadiths
> Quran (related)
> Your refined du'a
> Your du'a stored locally in cache so it's available when you need it

No guessing. No hallucinated sources. Just intent → Name → hadith → Quran → your du'a.

check it out: duaos.com

this is an open-source repo, feel free to contribute and hit me up w/ suggestions etc.
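The "hybrid ranking, vector + keyword" step mentioned above can be sketched as a weighted blend of embedding similarity and lexical overlap. This is a guess at the general technique, not the actual duaos.com implementation; the `alpha` weight and both scoring functions are simplified assumptions.

```python
import math

def keyword_score(query, doc):
    # Lexical side: fraction of query terms that appear in the document.
    terms = query.lower().split()
    words = set(doc.lower().split())
    return sum(t in words for t in terms) / max(len(terms), 1)

def cosine(a, b):
    # Semantic side: cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query, query_vec, docs, alpha=0.6):
    # Blend both signals; alpha weights the embedding side.
    # `docs` is a list of (text, embedding_vector) pairs.
    scored = [
        (alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text), text)
        for text, vec in docs
    ]
    return [text for _, text in sorted(scored, reverse=True)]
```

In practice the embeddings would come from a sentence-embedding model and the keyword side from something like BM25, but the blending idea is the same: a purely semantic match can miss exact terms, and a purely lexical match can miss paraphrases.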
sameel arif@endpoint·
in case you’ve wondered how package versioning works