Adam Gabriel

36.8K posts

@THEAdamGabriel

#MedSynth™🧬 #Polyminventor™💡 #StructureBasedDrugDesign💊 #SBDD💊 #AgenticAI🚀 #SpatialComputing🥽 #AR #VR #MR🕶️ #Polymath🧠 #Polyglot🇺🇸🇪🇬🇫🇷

Financial District, Manhattan · Joined November 2011
4.3K Following · 18.7K Followers
Pinned Tweet
Adam Gabriel @THEAdamGabriel
@garrytan I'm the #Polyminventor💡™ of MedSynth™: US Patent Pending Agentic, Spatial, Structure Based Drug Design (#SBDD💊). 🧬 I taught my Molty (@openclaw ) Drug Discovery and Design – #StepByStep🚶🏻. To me this is personal. Mom caught drug-resistant Pneumonia — twice. Almost #died☠️
Adam Gabriel tweet media
0 replies · 1 repost · 2 likes · 303 views
Matt Shumer @mattshumer_
If you want to try a new personal agent that is just... insanely good, comment + DM me.
273 replies · 7 reposts · 176 likes · 80.7K views
Adam Gabriel retweeted
Andrej Karpathy @karpathy
Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly manually well-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of what I do daily for two decades. Seeing the agent do this entire workflow end-to-end and all by itself as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger findings:
- It noticed an oversight that my parameterless QKnorm didn't have a scale multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has a more efficient proxy metric, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
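The QK-norm finding above is compact enough to illustrate. This is a hedged numpy sketch, not nanochat's actual code (the function and names are hypothetical): with a parameterless QK-norm, attention logits are dot products of unit vectors and so capped in [-1, 1], which keeps the softmax diffuse; a scalar multiplier sharpens it.

```python
import numpy as np

def qk_norm_attention(Q, K, scale=1.0):
    """Attention weights with QK-norm: rows of Q and K are unit-normalized,
    then the logits are multiplied by a scalar `scale`.
    With scale=1.0 (no multiplier) every logit lies in [-1, 1], so the
    softmax over keys stays diffuse; a larger scale concentrates it."""
    Qn = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
    Kn = K / np.linalg.norm(K, axis=-1, keepdims=True)
    logits = scale * (Qn @ Kn.T)
    # numerically stable softmax over the key axis
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 16))   # 4 queries, head dim 16
K = rng.normal(size=(8, 16))   # 8 keys

diffuse = qk_norm_attention(Q, K, scale=1.0)  # parameterless QK-norm
sharp = qk_norm_attention(Q, K, scale=8.0)    # with a scale multiplier

# the multiplier raises the peak attention weight for every query
print(diffuse.max(axis=-1), sharp.max(axis=-1))
```

Raising the scale monotonically increases the probability mass on each query's best key, which is exactly the "too diffuse" symptom being corrected.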
Andrej Karpathy tweet media
974 replies · 2.1K reposts · 19.4K likes · 3.6M views
Adam Gabriel retweeted
Pascal Bornet @pascal_bornet
Forget about model audits, AI governance frameworks, or alignment committees. If your “AI strategy” is locked inside a slide deck that only a few executives have access to, you are basically untouchable in 2026. Back in the 90s, we trusted a plastic box with a key. Today, we trust a dashboard with a password. Same confidence. Better graphics. Are we building real intelligence systems… or just shinier fortresses? #AI #Technology #Security #Data #ArtificialIntelligence
Pascal Bornet tweet media
4 replies · 4 reposts · 10 likes · 1.6K views
Adam Gabriel retweeted
Anthropic @AnthropicAI
New Anthropic research: Measuring AI agent autonomy in practice. We analyzed millions of interactions across Claude Code and our API to understand how much autonomy people grant to agents, where they’re deployed, and what risks they may pose. Read more: anthropic.com/research/measu…
283 replies · 477 reposts · 3.6K likes · 1M views
Adam Gabriel retweeted
Deedy @deedydas
Some backstory on why this prompt: in college I was obsessed with graphics. My computer graphics professor, whom I TA’d for (CS4620/5620 at Cornell; if you’re curious you can google the curriculum), was an Academy Award winner for animation. In the advanced graphics class we had free rein on our final project, and the one our team did was a 3D “infinite map” game engine with a focus on water and underwater gameplay. Realtime water rendering adds a ton of complexity: splashing, caustics, light absorption, god rays, bubbles and many more. We spent the greater part of many weeks getting this right; debugging shaders was an absolute nightmare (print statements don’t help much) and documentation was sparse. Ever since AI came along many years later, a lot of my personal eval set has therefore been computer graphics tasks. Gemini just nailed a relatively hard one, rendering an ocean surface simulation, but there are tons more!
10 replies · 4 reposts · 95 likes · 108.7K views
Adam Gabriel retweeted
Deedy @deedydas
Wow, Gemini 3.1 Pro just crushed one of my test problems no other model could solve. "Write a photorealistic 3D ocean simulation" 3D graphics is extremely hard. It has to encode 9-10 techniques based in physics. Being off by just 1 number can lead to bonkers results
755 replies · 151 reposts · 2.5K likes · 5M views
Adam Gabriel retweeted
Deedy @deedydas
The actual prompt was a bit more complex. It needed Gerstner waves, Schlick’s Fresnel approximation, Blinn-Phong specular highlights, ACES filmic tone mapping, subsurface scattering, fractional Brownian motion, wave grouping, geometric splashing, and volumetric foam shading! The one thing that didn't work was the Voronoi bubble trails! Link: ocean.oneapp.dev Code: pastebin.com/vryt001f
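Two of the techniques named in that prompt are small enough to sketch standalone. This is a hedged Python illustration of the underlying formulas only, not the code behind the linked demo: Schlick's Fresnel approximation (why water reflects strongly at grazing angles) and a single 1-D Gerstner wave (points move in circles, so crests sharpen).

```python
import math

def schlick_fresnel(cos_theta, f0=0.02):
    """Schlick's approximation: F = F0 + (1 - F0) * (1 - cos θ)^5.
    f0 ≈ 0.02 is roughly the normal-incidence reflectance of water."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def gerstner_wave(x, t, amplitude=1.0, wavelength=10.0, speed=2.0):
    """Single 1-D Gerstner wave: unlike a plain sine, each surface point is
    displaced horizontally as well as vertically, which sharpens crests."""
    k = 2.0 * math.pi / wavelength      # wavenumber
    phase = k * (x - speed * t)
    dx = amplitude * math.cos(phase)    # horizontal displacement
    dy = amplitude * math.sin(phase)    # vertical displacement
    return x + dx, dy

# Grazing angles reflect far more than a straight-down view:
print(schlick_fresnel(1.0))   # looking straight down: 0.02
print(schlick_fresnel(0.05))  # near-grazing: ~0.78
```

In a real renderer both run per-fragment/per-vertex in shader code; the point here is just that each is a couple of lines of physics, so being off by one constant visibly breaks the image.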
24 replies · 5 reposts · 170 likes · 153.1K views
Adam Gabriel retweeted
Tina Koskima 🕊️ @LoveSongs4Peace
Scenic Cauldron Falls in Yorkshire Dales National Park, North Yorkshire, England, UK
Tina Koskima 🕊️ tweet media
9 replies · 97 reposts · 387 likes · 3K views
Adam Gabriel retweeted
Tina Koskima 🕊️ @LoveSongs4Peace
Scenic view of Lake Vorderer Gosausee in the Gosau Valley in Austria 🇦🇹
Tina Koskima 🕊️ tweet media
24 replies · 231 reposts · 768 likes · 7.1K views
Adam Gabriel retweeted
Tina Koskima 🕊️ @LoveSongs4Peace
Beautiful blue day at the scenic Bow Lake in Banff National Park, Alberta, Canada 🇨🇦
Tina Koskima 🕊️ tweet media
26 replies · 204 reposts · 713 likes · 7.4K views
sabir hussain @sabir_huss50540
I build systems that print results even when you’re busy. Ultimate Claude Mastery Guide 80+ Chapters • 1000+ Tools • 2000+ Prompts I’m dropping it FREE for the next 24 hours (then it’s paid). ✅ Copy-paste workflows ✅ Templates + checklists ✅ Real use-cases (content, business, ops, money) Reply “CLAUDE” and I’ll DM it. (Follow required to receive.)
sabir hussain tweet media
480 replies · 57 reposts · 259 likes · 37.5K views
NOVA @TechWith_Nova
Nano Banana Pro + MassContent + MassUGC = AI vid factory this is the gritty process of: -> setting up 200 accounts -> automating 600+ AI posts per day it's the dark funnel powering organic brands rn like, rt + comment "DARK" for the 2026 mass content factory playbook in dms.
156 replies · 83 reposts · 212 likes · 12.2K views