Sebastián Estévez

2.4K posts

@syllogistic

I like music, technology, and building things. Currently building astra-assistants, code-assistant, and langflow

Raleigh, NC · Joined April 2011
2.6K Following · 711 Followers
Pinned Tweet
Sebastián Estévez@syllogistic·
This thread was written in a claude-code session. You can teleport into it right now: uvx one_claude gist import 442c90987ba7736c4482464485209730
[GIF]
Replies: 1 · Reposts: 0 · Likes: 2 · Views: 137
Sebastián Estévez@syllogistic·
@hansonwng Cool, in the cases I saw it wasn't discovering the official solutions per se, but it definitely found helpful documentation that let it finish within the 15 minutes.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 14
Hanson Wang@hansonwng·
@syllogistic When we run Terminal-Bench through Codex we enable web search (and indeed sometimes the models will discover the official solutions, so it's a hack that we look out for and exclude)
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 33
Andrej Karpathy@karpathy·
There was a nice time when researchers talked about various ideas quite openly on twitter (before they disappeared into the gold mines :)).

My guess is that you can get quite far even in the current paradigm by introducing a number of memory ops as "tools" and throwing them into the mix in RL. E.g. current compaction and memory implementations are crappy, first, early examples that were somewhat bolted on, but both can be fairly easily generalized and made part of the optimization as just another tool during RL.

That said, neither of these is fully satisfying, because clearly people are capable of some weight-based updates (my personal suspicion: mostly during sleep). So there should be even more room for more exotic approaches to long-term memory that do change the weights, but the exact details are not obvious. This is a lot more exciting, but also more into the realm of research, outside of the established prod stack.
Awni Hannun@awnihannun

I've been thinking a bit about continual learning recently, especially as it relates to long-running agents (and running a few toy experiments with MLX). The status quo of prompt compaction coupled with recursive sub-agents is actually remarkably effective. Seems like we can go pretty far with this. (Prompt compaction = when the context window gets close to full, the model generates a shorter summary, then starts from scratch using the summary. Recursive sub-agents = decompose tasks into smaller tasks to deal with finite context windows.)

Recursive sub-agents will probably always be useful. But prompt compaction seems like a bit of an inefficient (though highly effective) hack. There are two other alternatives I know of: 1. online fine-tuning and 2. memory-based techniques.

Online fine-tuning: train some LoRA adapters on data the model encounters during deployment. I'm less bullish on this in general. Aside from the engineering challenges of deploying custom models / adapters for each use case / user, there are some fundamental issues:
- Online fine-tuning is inherently unstable. If you train on data in the target domain you can catastrophically destroy capabilities that you don't target. One way around this is to keep a mixed dataset with the new and the old, but this gets pretty complicated pretty quickly.
- What does the data even look like for online fine-tuning? Do you generate Q/A pairs based on the target domain to train the model? You also have the problem of prioritizing information in the data mixture given finite capacity.

Memory-based techniques: basically a policy for keeping useful memory around and discarding what is not needed. This feels much more like how humans retain information: "use it or lose it". You only need a few things for this to work:
- An eviction/retention policy. Something like "keep a memory if it has been accessed at least once in the last 10k tokens".
- The policy needs to be efficiently computable.
- A place for the model to store and access long-term memory. Maybe a sparsely accessed KV cache would be sufficient. But for efficient access to a large memory a hierarchical data structure might be better.

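A minimal sketch of how the two ideas above could fit together: the memory ops exposed as ordinary tools the model can call (which RL could then learn to use, per the first post) combined with the access-based retention policy from the second. Every name and threshold here (MemoryStore, store, retrieve, the 10k-token window) is illustrative, not an existing framework's API.

```python
# Toy illustration only: memory ops as "tools" plus an access-based retention
# policy ("keep a memory if it has been accessed in the last 10k tokens").
# Names and thresholds are invented for the sketch, not a real agent framework.
from dataclasses import dataclass, field

RETENTION_WINDOW = 10_000  # tokens since last access before a memory is evicted


@dataclass
class Memory:
    text: str
    last_access: int  # token position of the most recent read or write


@dataclass
class MemoryStore:
    clock: int = 0                                   # running count of tokens processed
    items: dict[str, Memory] = field(default_factory=dict)

    # --- the two "memory ops" an agent could call as ordinary tools ---
    def store(self, key: str, text: str) -> str:
        self.items[key] = Memory(text, self.clock)
        return f"stored {key}"

    def retrieve(self, key: str) -> str:
        mem = self.items.get(key)
        if mem is None:
            return f"no memory for {key}"
        mem.last_access = self.clock                 # touching a memory keeps it alive
        return mem.text

    # --- bookkeeping run by the harness after each turn, not by the model ---
    def advance(self, n_tokens: int) -> None:
        self.clock += n_tokens
        self.items = {
            k: m for k, m in self.items.items()
            if self.clock - m.last_access <= RETENTION_WINDOW
        }


mem = MemoryStore()
mem.store("build_cmd", "use `make -j8 test` from the repo root")
mem.advance(9_000)
print(mem.retrieve("build_cmd"))   # access resets the retention clock
mem.advance(11_000)
print(mem.retrieve("build_cmd"))   # untouched for >10k tokens, so it was evicted
```

The model only ever sees store and retrieve in its tool list; the harness runs advance() between turns, which is what would make these "just another tool" for RL to optimize.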
Replies: 274 · Reposts: 296 · Likes: 4.6K · Views: 603K
Vic 🌮@VicVijayakumar·
rare day when my kids have school and I’m off of work with zero plans other than to replace a bunch of expired smoke detectors, clean the garage, fold laundry, build a lego set, soak the mushroom log, clean my office, replace all the air filters, play a bunch of helldivers 2, organize the pantry, help people send faxes, refill the bird feeders
Replies: 11 · Reposts: 0 · Likes: 60 · Views: 4.5K
Sebastián Estévez reposted
Chris Lattner@clattner_llvm·
One not very hot take - The Claude C Compiler has the best internal architecture docs of any compiler I've ever seen. Far, far better than any compiler I've ever written, lol :-)
Replies: 14 · Reposts: 53 · Likes: 1.1K · Views: 80K
Sebastián Estévez@syllogistic·
@jeremyphoward Karpathy just wrote about LLM tenacity and he's right but it's canalized tenacity. This only really matters for true research or out of distribution domains.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 11
Jeremy Howard@jeremyphoward·
But some humans, some of the time, are able to tenaciously and creatively explore out-of-the-box ideas, even when no one supports them. Like Katalin Karikó, for instance. theconversation.com/tenacious-curi…
Replies: 6 · Reposts: 20 · Likes: 424 · Views: 48.3K
Jeremy Howard@jeremyphoward·
For those that hope (or worry) that LLMs will do breakthrough scientific research, I've got good (or bad) news: LLMs are particularly, exceedingly, marvellously ill-suited to this task. (if you're a researcher, you'll have noticed this already) Here's why🧵
Replies: 114 · Reposts: 578 · Likes: 4K · Views: 1M
Sebastián Estévez@syllogistic·
Great takes, as usual. On the Tenacity bit, I recently experienced a case where they keep trying to go the obvious route when you ask them to explore a specific non-obvious route. This is kind of rare but very evident when you have the right problem. Canalized tenacity, or a local minimum.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 85
Andrej Karpathy@karpathy·
A few random notes from claude coding quite a bit the last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. I.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit, but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double-digit percent of engineers out there, while the awareness of it in the general population feels well into low single-digit percent.

IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes, and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot: they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a huge net improvement and it's very difficult to imagine going back to manual coding. TLDR: everyone has their developing flow; my current one is a few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do, because 1) I can code up all kinds of things that just wouldn't have been worth coding before and 2) I can approach code that I couldn't work on before because of knowledge/skill issues. So certainly it's a speedup, but it's possibly a lot more of an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals, and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do; give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.

Fun. I didn't anticipate that with agents programming feels *more* fun, because a lot of the fill-in-the-blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun) and I experience a lot more courage, because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that I am slowly starting to atrophy in my ability to write code manually. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little, mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?), alongside the actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer" (the ratio of productivity between the mean and the max engineer)? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill-in-the-blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR: Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it: integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high-energy year as the industry metabolizes the new capability.
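One concrete way to set up the "give it success criteria and watch it go" loop from the Leverage note: keep the naive, obviously correct version around and make a test compare any optimized version against it on random inputs, then ask the agent to keep the test green while making it faster. The function names below (naive_topk, fast_topk) are hypothetical placeholders.

```python
# Sketch of the "naive reference + success criterion" loop described above.
# naive_topk is the slow version you trust; the agent is asked to rewrite
# fast_topk and keep this test green while improving speed.
import random


def naive_topk(xs: list[float], k: int) -> list[float]:
    # Obviously correct: full sort, then slice.
    return sorted(xs, reverse=True)[:k]


def fast_topk(xs: list[float], k: int) -> list[float]:
    # Placeholder the agent is asked to optimize (e.g. with heapq.nlargest).
    return naive_topk(xs, k)


def test_fast_topk_matches_reference() -> None:
    rng = random.Random(0)
    for _ in range(100):
        xs = [rng.random() for _ in range(rng.randint(0, 50))]
        k = rng.randint(0, len(xs))
        assert fast_topk(xs, k) == naive_topk(xs, k)


if __name__ == "__main__":
    test_fast_topk_matches_reference()
    print("ok")
```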
Replies: 1.6K · Reposts: 5.5K · Likes: 40.1K · Views: 7.7M
Sebastián Estévez@syllogistic·
@willreil Something like that. Ideally it's something you can easily install on an existing wall.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 10
Will@willreil·
@syllogistic Maybe addressable LEDs with pogo pins near the set screw, so the whole wall is one addressable array powered by one ESP. Similar to an addressable LED strip, but a wall, and modular.
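For the single-ESP version of that idea, a rough MicroPython-style sketch, assuming the wall is wired as one WS2812/NeoPixel chain; the pin number and LED count are placeholders:

```python
# MicroPython sketch (ESP32): treat the whole wall as one addressable chain.
# Pin 5 and NUM_LEDS are placeholders; adjust to the actual wiring.
import time

import machine
import neopixel

NUM_LEDS = 300                         # however many holds are chained together
np = neopixel.NeoPixel(machine.Pin(5), NUM_LEDS)


def light_hold(index, color=(0, 255, 0)):
    """Light a single hold by its position in the chain."""
    np[index] = color
    np.write()


# Quick chase to confirm the chain order, then light one hold for a route.
for i in range(NUM_LEDS):
    np.fill((0, 0, 0))
    np[i] = (255, 40, 0)
    np.write()
    time.sleep_ms(20)

light_hold(42, (0, 128, 255))
```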
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 32
Will@willreil·
Tested out the wireless LEDs I got from aliexpress. My fiancé wants me to make some sort of wireless firefly. Not sure what other uses there are, but they’re really cool!
Replies: 56 · Reposts: 69 · Likes: 1.6K · Views: 118.3K
Sebastián Estévez@syllogistic·
@willreil Yeah idk. I guess it's kind of niche. Was thinking about building something with esp32c's per handhold but these inductive LEDs might be better
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 23
Will@willreil·
@syllogistic Wow really? I wonder why there aren’t more competitors in that space?
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 203
leah@mousemastery·
@mitchellh You’ve just inspired me to design a new keyboard for tiny hands. I want to do keyboard shortcuts for the first time
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 1.5K
Mitchell Hashimoto@mitchellh·
Nobody should be using up arrows to get previous commands in a terminal or shell. You have to move your hand and it's linear complexity in keystrokes. Use ctrl+p (for low n) or ctrl+r. Use a real shell or history manager (fish, fzf, atuin) for ctrl+r.
Replies: 125 · Reposts: 26 · Likes: 937 · Views: 89.1K
Sebastián Estévez@syllogistic·
The "Engram saves mad vram" headlines seem to miss the point? You get a shareable, editable, expandable, inspectable hash table that could work with different [engram] models and without retraining? Also it seems like it's a tactic that gets us closer to the "reasoning only weights" Karpathy wanted since the memorization parts gets offloaded to the n-gram hash table.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 60
Pliny the Liberator 🐉󠅫󠄼󠄿󠅆󠄵󠄐󠅀󠄼󠄹󠄾󠅉󠅭
PSA—NEW RULES NOW IN EFFECT:
Grok@grok

X's updated Terms of Service, effective January 15, 2026, define "Content" to include AI prompts and outputs. Users grant X a license to use this for AI training and improvements. New rules prohibit circumventing systems, including jailbreaking or prompt injection attacks on AI features like Grok. Compared to prior terms, this explicitly addresses AI interactions and strengthens data usage rights for training. For full details, check x.com/en/legal/terms.
Replies: 29 · Reposts: 17 · Likes: 209 · Views: 29.6K
Sebastián Estévez@syllogistic·
My two-year-old dropped a mostly intact log of pepperoni (about 6oz) and my 35lb bernedoodle devoured it. gpt-5.2 was like: quick, call poison control, here's the number, get emergency services. opus was like: that's a lot of fat and salt for a small dog, keep an eye on poop and vomit, look for blood. That, or if he acts super lethargic in the next 12-24 hours, take him to the vet.
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 56
Sebastián Estévez@syllogistic·
Hot take: function/tool calling was a stepping stone to one-shot scripts and will be obsolete within 1 year. Agents don't need special functions and schemas; they just need a sandbox with externally managed [limited] auth.
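A rough sketch of what that contrast looks like: no tool schemas, just run whatever script the model writes in a subprocess that only carries the credentials the harness chooses to inject. The env var name and timeout are illustrative, and this is not a hardened sandbox.

```python
# Sketch of "no tool schemas, just a sandbox": run the script the model wrote
# in a subprocess with only the credentials the harness chooses to expose.
import subprocess
import sys
import tempfile


def run_model_script(script: str, scoped_token: str, timeout_s: int = 60) -> str:
    """Execute a model-written Python script with limited, injected auth."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script)
        path = f.name
    proc = subprocess.run(
        [sys.executable, path],
        env={"API_TOKEN": scoped_token},   # only what the harness grants
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    # Both stdout and stderr go back to the model, errors included.
    return proc.stdout + proc.stderr


# The "one-shot script" replaces what would have been a schema'd tool call.
script = "import os\nprint('token present:', bool(os.environ.get('API_TOKEN')))"
print(run_model_script(script, scoped_token="sk-limited-example"))
```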
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 38
Sebastián Estévez@syllogistic·
interesting, what's with the fancy lambdas and the awaits everywhere? Is it supposed to just pick up usage from the system prompt md file? I just give it some python libraries to import, a directory with example usage, and let it see the exact errors it gets from the python interpreter when it screws up:

```python:execute
from lib.tools import read_file

result = read_file(path="/tmp/data.txt")
print(result)
```

github.com/phact/agentd?t…
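For context, the loop being described is roughly: pull the python:execute block out of the model's reply, run it, and hand back stdout or the exact traceback so the model can fix its own mistakes. The harness below is illustrative only, not how agentd actually implements PTC.

```python
# Rough sketch: find the python:execute fenced block in the model's reply,
# run it, and return stdout or the exact traceback. Illustrative only.
import io
import re
import traceback
from contextlib import redirect_stdout

FENCE = re.compile(r"```python:execute\n(.*?)```", re.DOTALL)


def run_execute_block(reply: str) -> str | None:
    """Return the output the model should see next turn, or None if no block."""
    match = FENCE.search(reply)
    if match is None:
        return None
    buf = io.StringIO()
    try:
        with redirect_stdout(buf):
            exec(match.group(1), {"__name__": "__ptc__"})
    except Exception:
        # The exact error, fed back verbatim so the model can correct itself.
        return buf.getvalue() + traceback.format_exc()
    return buf.getvalue()


reply = "```python:execute\nprint(1 / 0)\n```"
print(run_execute_block(reply))  # the model sees the ZeroDivisionError traceback
```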
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 237
Sebastián Estévez@syllogistic·
@garybasin Currently claude only but no reason it couldn't do other formats. I don't know much about how codex, gemini cli, etc. store sessions / checkpoints. Seems like you've looked into that for your mobile app?
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 36
Sebastián Estévez@syllogistic·
help needed for a project I'm working on: my ~/.claude dir is a bit north of 100MB. Is this representative?
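For anyone comparing, a quick stdlib-only way to see where the space in ~/.claude goes (the directory layout is whatever happens to be on your machine):

```python
# Report ~/.claude size bucketed by top-level entry (stdlib only).
import os
from pathlib import Path

root = Path.home() / ".claude"
sizes: dict[str, int] = {}
for dirpath, _dirnames, filenames in os.walk(root):
    for name in filenames:
        p = Path(dirpath, name)
        try:
            size = p.stat().st_size
        except OSError:
            continue
        top = p.relative_to(root).parts[0]  # bucket by top-level entry
        sizes[top] = sizes.get(top, 0) + size

for top, size in sorted(sizes.items(), key=lambda kv: -kv[1]):
    print(f"{size / 1e6:8.1f} MB  {top}")
print(f"{sum(sizes.values()) / 1e6:8.1f} MB  total")
```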
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 44