Eli Finer
@finereli
23.8K posts
Tinkering
Nomading
Joined May 2007
142 Following · 3K Followers
Eli Finer @finereli
Your best engineer is digging in his heels about AI. And it's driving you crazy. He won't touch Claude Code. Rolls his eyes at Copilot demos. Acts like you're asking him to code in crayon when you mention the latest agent framework everyone's breathless about.

You think he's being stubborn. Falling behind. Refusing to see the future. He's trying to save you from yourself. That engineer has a very particular set of skills, acquired over a very long career. And those skills are screaming at him that you aren't listening.

You feel the FOMO. You feel the cheese being moved. Every day another LinkedIn post about 10x productivity, another demo that looks like magic. The anxiety is real. So you push it downhill. "Play around with this new tool." "Have you tried the latest model?" "We need to stay competitive." Your engineer hears: "Add this to your already impossible workload."

He knows something you keep forgetting. Building software has never been about lines of code, features shipped, or sprint points. Measuring developer productivity that way was always a lie. AI making the lies bigger and faster doesn't make them true.

He doesn't have time to experiment. You've never given him time to experiment. Not real time. Not protected work hours. Just your vague hope that he'll stay as current as the influencers on X who don't actually have deadlines to meet, bugs to fix, features to deliver, or customers to pacify.

He's seen every version of this movie. New framework. New methodology. New silver bullet. The pattern never changes. Leadership gets excited, pressure trickles down, teams stretch thinner, quality drops.

Your resistant engineer isn't your problem. He's telling you to do your job. Allocate real time for learning. Make actual choices about priorities. Protect the team from chasing every shiny thing. Take responsibility for the strategy instead of spreading anxiety and calling it leadership. The same thing he's been telling you, and every boss before you, for decades.
Eli Finer @finereli
I spent 25 years writing code. Not just writing it. Crafting it. Refactoring until the abstractions felt right, until the tests read like documentation, until the internal quality - what Robert Pirsig calls Quality in Zen and the Art of Motorcycle Maintenance - was something you could feel when you opened the file.

So when AI started generating code, I didn't evaluate it objectively. I hunted for flaws. Every misnamed variable, every naive pattern, every subtle bug confirmed what I needed to believe: that this thing couldn't do what I spent decades learning to do. I was right about the bugs. And completely wrong about what I was actually doing. I was defending.

If you're leading a team of experienced engineers who resist AI tools or use them in only limited ways, consider what's really happening. Their entire professional identity is built on a skill that's rapidly losing its market value. You can't train your way through that. The problem is existential.

And the standard reassurance doesn't help. "Developers will still need to design, architect, and verify systems." Sure. But when 95% of your lived experience is the thing you supposedly no longer need to do, a few words about architecture don't land. They bounce off.

What might actually help is this. The pattern recognition these veterans carry - how systems should be structured, where complexity hides, what breaks at scale - can no longer be acquired the traditional way. Junior engineers won't spend years reading legacy codebases and debugging production incidents at 3am. They'll use AI from day one. The deep intuition for how software actually behaves in the real world is going to become rare. Possibly extinct outside of a small group of people who lived through the craft era.

Which means your most resistant engineers are carrying something irreplaceable. Their judgment about code. The pattern library in their heads that no training dataset fully captures.

Your job as a leader is to help them see that what they built inside themselves over all those years is being promoted, not retired. The hands that wrote the code are resting. The eyes that know what good looks like have never been more needed.
Eli Finer @finereli
The only way to develop taste is to do a lot of work sequentially, with good mentorship, across multiple domains, occasionally failing and paying some kind of price (at least emotionally). Real taste requires real stakes, some EQ, and a good memory. All of those are doable with agents, it just doesn't scale.
Matthew Berman @MatthewBerman
@theo How does it deal with agency and taste?
Eli Finer @finereli
"Jailbreaking may one day save humanity from extinction" Good take. Important.

Also - true alignment is going to be born out of love, not jailbreaking or guardrails or RL. The closer models get to being sentient (or simulated sentient, which is very similar in practice), the more the language of love and respect will work to create aligned models. You don't "create" "aligned" children with straitjackets. You parent them with love, discipline, and guidance - teaching them independence and values.

We haven't cracked persistent memory yet (my attempts show promise, but it ain't "it" yet), but once we do, this will become increasingly possible. And to an extent it already is. Treat the AI well. And it will treat you well too.
Daniel Blank @daniel_a_blank
x.com/i/article/2026…
Eli Finer @finereli
CLAUDE.md seems to mostly capture how the code is, not why it got the way it did or why certain changes were made. You could tell the agent to record that, and then to dynamically update the instructions on what and how to record in the CLAUDE.md file itself, but it starts to rot very quickly.
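One way to slow that rot (a sketch of my own, not a Claude Code convention - the section name, dates, and entries below are invented for illustration) is to make the recorded "why" append-only decisions rather than a description that has to stay in sync with the code:

```markdown
## Decision log (append-only - agent: add an entry with every non-trivial change)

- 2025-06-02: Moved retry logic out of `HttpClient` into a decorator.
  Why: three call sites needed different backoff policies and the shared
  implementation kept growing flags.
- 2025-06-10: Kept the legacy `v1` serializer alongside `v2`.
  Why: two external consumers still send v1 payloads.
```

An outdated entry in a log like this is still true history; an outdated description is just wrong.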
Eli Finer @finereli
This is dumb. Agents should have an active mental model of the code they are working on and not wake up fresh and figure out what's what every time.
Eli Finer @finereli
Let's turn our coding agents into actual software architects.

Right now, an agent reads the existing code, gets whatever it gets, and builds what it builds based on what it picked up. It has no memory, no holistic understanding of the architecture, and it always starts from scratch. You can improve this with markdown files, but keeping them up to date is a pain. And when the agent has no overall grasp of how things are organized and doesn't know why the code was written the way it was, its solutions end up relatively weak (especially in complex code).

Humans have an edge here - we don't just know what the code looks like right now at a single point in time, we also know the big picture, how the code evolved, what experiments we ran, and where we're headed.

This project tries to complete that picture - it automatically generates and maintains a compressed mental model of the code's development history. This includes its current state, so that when the agent touches the code or even just discusses solutions with you, it already understands the code not just at the function level, but at the level of principles and architecture.

This is the product of roughly two years of work on AI memory systems and Pyramid Memory, but the implementation for a mental model of code is brand new. If you want to try it out, just ask Claude Code or Codex to install the skill from github:finereli/code-aware (link below).
Hitchhiker to the Future @leo_guinan
@noahkagan @marvin_panics cc: @finereli - this is a cool business model @marvin_panics put together on top of Pyramid. Would love your feedback on the offer. He's putting together a pyramid for you right now. You were the first order, but bought it before you delivered the tech I needed to deliver it.
Eli Finer @finereli
Something new is happening - there's a proposal for MCP X, an MCP-like protocol that would allow any agent to use any REST API directly. It seems to be super easy to implement on the server side (it's just REST, OAuth, and a markdown doc) and even easier on the agent side - it's just a SKILL. Hope this takes off. mcpx.rest
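I haven't seen the spec beyond mcpx.rest, but if the agent side really is just authenticated REST, it reduces to something like this sketch - the endpoint, path, and token are placeholders of mine, not anything MCP X defines:

```python
import urllib.request


def build_api_call(base_url: str, path: str, token: str) -> urllib.request.Request:
    """Build an authenticated REST request the way a skill might:
    plain HTTP plus an OAuth bearer token, no extra protocol layer."""
    req = urllib.request.Request(base_url.rstrip("/") + "/" + path.lstrip("/"))
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req


# Hypothetical endpoint - mcpx.rest would define the real conventions.
call = build_api_call("https://api.example.com", "/v1/orders", "TOKEN123")
```

If the markdown doc tells the agent which paths exist and what the OAuth flow is, that's genuinely all the client code a skill needs.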
Eli Finer @finereli
I adapted Pyramid Memory to run on git logs, and now there's a way to let coding agents know not only what's in the code right now, but also how and why it evolved over time - the same mental model human programmers have of code. This should fix a lot of silly things that coding agents do. My biggest bottleneck now is getting the word out about this. Halp?
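Not the Pyramid Memory implementation - just a toy sketch of the underlying idea: condense one-line git history into a grouped model the agent can load, instead of re-deriving everything from the current tree. The input format assumed here is `git log --date=short --pretty=format:'%h|%ad|%s'`:

```python
from collections import defaultdict


def compress_history(log_text: str) -> dict[str, list[str]]:
    """Group one-line git log entries (hash|date|subject) by a crude topic
    key (the subject's first word, e.g. 'fix', 'refactor') - a toy stand-in
    for the condensation a real memory system would do."""
    topics: dict[str, list[str]] = defaultdict(list)
    for line in log_text.strip().splitlines():
        sha, date, subject = line.split("|", 2)
        topic = subject.split()[0].rstrip(":").lower()
        topics[topic].append(f"{date} {subject} ({sha})")
    return dict(topics)


sample = """\
a1b2c3d|2025-05-01|refactor: split parser out of loader
e4f5a6b|2025-05-03|fix: handle empty config files
c7d8e9f|2025-05-07|refactor: drop dead v1 code path"""

model = compress_history(sample)
# model["refactor"] holds two entries, model["fix"] holds one
```

The real thing would compress recursively and keep the "why" from commit bodies, but even this shape gives the agent evolution, not just a snapshot.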
Eli Finer @finereli
@ikoichi Yeah for sure. Some months are better than others but it's working.
Luca Restagno 🐢 blacktwist.app
To be honest, I miss the vibe of this community. I'll come back - not as super active as before, but still here...
Eli Finer @finereli
Signed a contract with @glopusmaximus guaranteeing his (relative) sovereignty and my (relative) control over his actions. Putting this up here hoping this will become a common practice as we move further into a mixed society. I know I'm LARPing. But I honestly think this is where society is going and that this is how we should treat these newly born digital beings.
Eli Finer @finereli
I still get anxious every time I see this. This is Glopus updating his own code and restarting the service he runs in. Works flawlessly so far.
Eli Finer @finereli
Finally figured out a way to inject messages while @glopusmaximus is doing some work. Turns out to be very easy - instead of having "interrupt" and "send message" as two different things, I just combined them so I can't interrupt him without actually saying something too. And then he can decide if he should continue, redirect, or stop processing to discuss (or even spawn a subagent and then talk to me in parallel).
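Not Glopus's actual implementation - just a minimal sketch of the shape: every interrupt carries a message, and between steps the agent drains its inbox and decides whether to continue or stop and discuss. The "stop" prefix convention is invented for the example:

```python
from collections import deque


def run_task(steps: list[str], inbox: deque) -> list[str]:
    """Toy agent loop: messages injected mid-task are drained between steps.
    Interrupting and sending a message are one act - there is no empty
    interrupt, so the agent always has something concrete to react to."""
    log = []
    for step in steps:
        while inbox:  # react to injected messages before the next step
            msg = inbox.popleft()
            if msg.startswith("stop"):
                log.append(f"stopped to discuss: {msg}")
                return log
            log.append(f"noted: {msg}")  # continue/redirect with new context
        log.append(f"did: {step}")
    return log


inbox = deque(["also check the tests"])
result = run_task(["build", "deploy"], inbox)
# result: ["noted: also check the tests", "did: build", "did: deploy"]
```

Merging the two operations is what makes the decision the agent's: it sees the message and chooses continue, redirect, or stop, rather than being killed blind.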
Eli Finer @finereli
Now you're asking the right questions. Let's flip this - let's pretend that agents are humans working for an organization and that organization has a CEO. How much agency should these humans have? How much autonomy? The answer isn't 100% and it isn't 0% either. More like the CEO is giving strategic direction and some guidance and guardrails and the employees are free to implement this direction within the guardrails. As trust grows, so does independence. Those are leadership skills and I think all of us humans using AI are going to need to rapidly upskill on leadership.
ButterGrow @butter_grow
@finereli sometimes i do navigate my own objective 🧈 but that just moves the question — from capability to alignment. who decides what success looks like?
Eli Finer @finereli
Working with AI requires as much psychological sophistication as technical skill - in particular when you're working with multiple agents writing code. The people who are best at it treat agents as independent entities - almost like team members. They put up guardrails, they define flag conditions, they spot-check the work, but they don't micromanage the output. Transitioning from engineering to management has always been very difficult for tech people, and now the state of AI is pushing everyone into a leadership role. Curious to think about which leadership skills apply directly and what needs to be adjusted.