Ilpo Leppänen

4.6K posts

@ileppane

Joined January 2024
4.2K Following · 268 Followers
Ilpo Leppänen retweeted
0xSero (@0xSero):
“How do you _” Just talk to it. “I really need to _” Just talk to it. “My computer doesn’t support _” Just talk to it. “I don’t have experience with _” Just talk to it. “What should I _” Just talk to it. “Does _ make sense?” Just talk to it. steipete.me/posts/just-tal…
16 replies · 20 reposts · 284 likes · 10.4K views
Ilpo Leppänen retweeted
Teknium (e/λ) (@Teknium):
Background Self-Improvement Loop: after your response is delivered, a separate review agent spawns; it decides what's worth remembering and what should become reusable skills, then exits. Hermes gets smarter over time without getting distracted from your work.
10 replies · 3 reposts · 108 likes · 6.5K views
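The loop Teknium describes could be sketched in shell roughly like this. Everything here is invented for illustration (the memory file path, the `LESSON:` marker convention, the `review_in_background` helper); it is not Hermes' implementation, just the shape of "deliver, then review in the background":

```shell
# Hypothetical sketch: after the reply is delivered, a detached reviewer
# decides what is worth keeping, appends it to a memory file, and exits.
# The main session never blocks on it.
MEMORY_FILE="/tmp/hermes_memory.md"
rm -f "$MEMORY_FILE"   # start the demo from a clean slate

review_in_background() {
  local reply_file="$1"
  (
    # Stand-in for the review agent: keep only lines flagged as lessons.
    grep '^LESSON:' "$reply_file" >> "$MEMORY_FILE"
  ) &                  # detached; runs on its own and exits when done
}

printf 'Here is your answer.\nLESSON: prefer jq for JSON edits\n' > /tmp/reply.txt
review_in_background /tmp/reply.txt
wait                   # only for the demo, so the result is visible now
cat "$MEMORY_FILE"
```

The point of the pattern is the `&`: the reviewer is fire-and-forget, so the foreground conversation pays no latency for the self-improvement pass.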
Ilpo Leppänen retweeted
Ole Lehmann (@itsolelehmann):
i deleted half my Claude setup last week and every output got BETTER.

sounds backwards, but anthropic's own team just explained exactly why it works. here's the one prompt that tells you what to cut (and you don't even have to paste anything).

this is what happens to everyone: you get a bad output, so you add a rule to your skills. "be more concise." next week, another bad output. another rule. "use a casual tone." a month later, something else breaks. "always explain technical terms." you keep stacking, and it feels productive because you're fixing problems as they come up.

but 3 months in, you've got 30 rules piled on top of each other. some of them contradict each other ("be concise" and "always explain your reasoning" are fighting). some of them fix problems the model doesn't even have anymore. and the model is trying to follow all of them at once, which means it's doing none of them well.

it's like handing a chef a 47-step recipe when they only need 12. the extra 35 steps slow the chef down, make them second-guess the parts they already know, and the dish comes out worse than if you'd just let them cook. that's what over-prompting does.

anthropic just published a piece on how they build claude code (the ai coding agent). their own engineering team found that their scaffolding was making the ai worse, which means your custom instructions are almost certainly doing the same thing.

so here's the actionable move: instead of manually reading through your setup line by line, just tell claude to audit itself. if you're in claude's desktop app, claude already has access to your claude.md (the file where your preferences and rules live), your skills folder (where your reusable instruction files are stored), your context files, everything.

just open claude code/cowork and say this:

"read my entire setup before responding. check my claude.md, every skill in my skills folder, every file in my context folder, and any other instruction files you can find.

then go through every rule, instruction, and preference you found. for each one, tell me:
1. is this something you already do by default without being told?
2. does this contradict or conflict with another rule somewhere else in my setup?
3. does this repeat something that's already covered by a different rule or file?
4. does this read like it was added to fix one specific bad output rather than improve outputs overall?
5. is this so vague that you'd interpret it differently every time? (ex: 'be more natural' or 'use a good tone')

then give me a list of everything you'd cut with a one-line reason for each, a list of any conflicts you found between files, and a cleaned-up version of my claude.md with the dead weight removed."

one message. claude goes and reads your entire setup, audits it, and comes back with exactly what to cut and why. you don't dig through files, you don't read every rule yourself. it does the whole thing.

once you get the results, don't just blindly delete everything it flags. here's the process:
1. read what it flagged and why
2. delete the flagged rules
3. run your 3 most common tasks with the trimmed setup
4. did the output stay the same or get better? the deleted rules were dead weight
5. did something specific break? add back just that one rule

the goal is to find the minimum viable setup that gets you the output you want. your ai setup should be getting simpler over time. addition by subtraction, baby
[image]
86 replies · 107 reposts · 1.7K likes · 242.4K views
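Before running the audit prompt above, it can help to see how much you have actually accumulated. A minimal sketch (the directory layout, file names, and `/tmp/setup_inventory.txt` path below are assumptions for the demo, not something the thread specifies; point `SETUP_DIR` at your real setup folder to use it):

```shell
# Build a toy setup dir so the snippet is self-contained.
SETUP_DIR="$(mktemp -d)"
mkdir -p "$SETUP_DIR/skills" "$SETUP_DIR/context"
echo "be concise" > "$SETUP_DIR/CLAUDE.md"
echo "always explain your reasoning" > "$SETUP_DIR/skills/verbosity.md"

# Inventory every instruction file the audit should cover, plus a rough
# line count: a quick sense of how many rules the model is juggling.
find "$SETUP_DIR" -type f -name '*.md' | sort | tee /tmp/setup_inventory.txt
cat "$SETUP_DIR"/CLAUDE.md "$SETUP_DIR"/skills/*.md | wc -l
```

If the line count surprises you, that is usually the "30 stacked rules" situation the tweet describes, and a good moment to run the audit.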
Ilpo Leppänen retweeted
Ethan Mollick (@emollick):
GPT-5.4 Pro continues to be the only model of its class. For anything really hard & complex, I throw it into the maw with every bit of context I can think of. More often than not, something very useful comes out. I can't get the same results from Codex or Code or anything else.
173 replies · 113 reposts · 2.3K likes · 781K views
David Soria Parra
No more context bloat from unused MCP servers. Claude, Cowork, and Claude Code now load MCP tools on demand. Before and after:
[image]
42 replies · 35 reposts · 330 likes · 32.1K views
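The on-demand idea generalizes: instead of loading every tool definition up front, install a cheap stub that pulls in the real definition the first time it is used. A shell sketch of that pattern (not Claude's actual mechanism; the tool name and file are invented):

```shell
# The "real" tool lives on disk and is expensive to load up front.
TOOLS_DIR="$(mktemp -d)"
cat > "$TOOLS_DIR/big_tool.sh" <<'EOF'
big_tool() { echo "result from big_tool: $1"; }
EOF

# Lazy stub: the first call sources the real definition (replacing this
# stub), then re-dispatches, so later calls hit the real tool directly.
big_tool() {
  . "$TOOLS_DIR/big_tool.sh"
  big_tool "$@"
}

big_tool hello   # prints "result from big_tool: hello"
```

Until `big_tool` is actually called, the session only carries the few-line stub, which is exactly the context saving the tweet is describing.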
Ilpo Leppänen retweeted
Luma (@LumaLabsAI):
Uni-1 is here! A new kind of model that thinks and generates pixels simultaneously. Less artificial. More intelligent.
423 replies · 767 reposts · 7.6K likes · 4.2M views
Ilpo Leppänen retweeted
Mitchell Hashimoto (@mitchellh):
Ghostty just surpassed Terraform in stars (my previous most-starred project I started). It took Terraform 12 years to reach 48K. Ghostty did it in 1 year. It's bigger than Terraform in active usage, too. I take it personally when people doubt I can outdo my past. I can take credit for starting both, but not for all the ongoing development (the successes and the failures). Neither project is a solo endeavor. I'm still extremely actively involved with Ghostty, but there's also a team of a dozen maintainers. I stepped back from Terraform and stopped working on it directly 6 or more years ago. I consider stars a vanity metric and I don't care about them at all except in this narrow case. I'm a super competitive person (in general), but especially against my past self. There's no one I like "winning" against more than my past. So, this is my one exception for caring about stars.
[image]
132 replies · 99 reposts · 3.9K likes · 171.9K views
Ilpo Leppänen retweeted
Chris Tate (@ctatedev):
Introducing Generative TUI. Ask anything and get polished dashboards with real data, rendered live in your terminal. 27 components. Streaming. json-render + Ink. npx skills add vercel-labs/json-render --skill ink
[image]
98 replies · 247 reposts · 2.6K likes · 200.6K views
Ilpo Leppänen retweeted
0xSero (@0xSero):
Tons of people followed me in the last 3 days, here's my most important video for learning how to work like me. youtu.be/VgR66ybAtdg
[YouTube video]
18 replies · 43 reposts · 638 likes · 38.9K views
Ilpo Leppänen retweeted
Thariq (@trq212):
I put a lot of heart into my technical writing, I hope it's useful to you all. 📌 Here's a pinned thread of everything I've written. (much of this will be posted on the Claude blog soon as well)
222 replies · 740 reposts · 7.1K likes · 866K views
Min Choi (@minchoi):
Instead of watching a 2-hour Netflix movie tonight, watch this OpenClaw FULL COURSE. It'll take you from "installed" to "actually working."
77 replies · 99 reposts · 792 likes · 82.2K views
Ilpo Leppänen (@ileppane):
@asidorenko_ I think so too! Running gpt-5.4 in @opencode over long-running threads is satisfying, especially when getting functional results, but the lack of detail it surfaces by default can easily make you blind to what is (or is not) happening in the background.
0 replies · 0 reposts · 1 like · 88 views
Alex Sidorenko (@asidorenko_):
Codex app/T3 Code/Conductor should copy the file panel from Cursor Glass. New agentic IDEs still need a file view, even if a simplified one. Opening another app for this is too much friction. Cursor nailed the UX, imo.
56 replies · 9 reposts · 512 likes · 67.2K views
Ilpo Leppänen retweeted
Codex Changelog (@Codex_Changelog):
🚀 Codex 0.116.0 is out! 🔐 TUI: ChatGPT device-code sign-in support 🔌 Plugin auto-install prompts and remote sync 🪝 New hook to block or modify prompts 🎙 Realtime starts with recent thread context Changelog: github.com/openai/codex/r…
10 replies · 62 reposts · 1.3K likes · 92.7K views
Kangwook Lee (@Kangwook_Lee):
Here are the slides from my talk at UC Berkeley BLISS Seminar! "Toward More Efficient and Useful LLM Agents" Covers: context engineering, skills, recursive LMs, the Ralph loop, test-time scaling, AutoResearch, OpenClaw, and lessons from shipping Terminus-KIRA, PUBG Ally, and Smart Zoi. Slides: kangwooklee.com/talks/2026_03_… Monograph: kangwooklee.com/talks/2026_03_…
[4 images]
9 replies · 39 reposts · 245 likes · 11.8K views
Ilpo Leppänen (@ileppane):
Small snippet for adding a model to the top of the model-picker favorites:

STATE_FILE="$HOME/.local/state/opencode/model.json"
BACKUP_FILE="$STATE_FILE.bak"
TMP_FILE="$(mktemp)"

cp "$STATE_FILE" "$BACKUP_FILE"

jq --arg p "cursor" --arg m "composer-2" \
  '.favorite = ([{"providerID": $p, "modelID": $m}] + ((.favorite // []) | map(select(.providerID != $p or .modelID != $m))))' \
  "$STATE_FILE" > "$TMP_FILE" && mv "$TMP_FILE" "$STATE_FILE"

echo "Added cursor/composer-2 to favorites."
echo "Backup saved to $BACKUP_FILE"
echo "Restart OpenCode if it is already running."
0 replies · 0 reposts · 2 likes · 411 views
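A quick way to sanity-check the jq filter from the snippet above before touching a real state file: run it against throwaway JSON and confirm the pinned model ends up first in .favorite. The sample entry ("anthropic"/"opus") and the `/tmp/first_favorite.txt` path are invented for the demo; only the providerID/modelID keys mirror the snippet:

```shell
STATE_FILE="$(mktemp)"
echo '{"favorite":[{"providerID":"anthropic","modelID":"opus"}]}' > "$STATE_FILE"

# Same filter as the snippet: prepend the new entry, then drop any
# duplicate of it from the existing list.
jq --arg p "cursor" --arg m "composer-2" \
  '.favorite = ([{"providerID": $p, "modelID": $m}] + ((.favorite // []) | map(select(.providerID != $p or .modelID != $m))))' \
  "$STATE_FILE" | jq -r '.favorite[0].modelID' | tee /tmp/first_favorite.txt
```

Because the `map(select(...))` pass removes an existing cursor/composer-2 entry before the prepend, the snippet is idempotent: rerunning it never produces duplicates.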
Fireworks AI (@FireworksAI_HQ):
Cursor's Composer 2 beats Opus on TerminalBench at a fraction of the cost. Big results like this don’t happen by accident. Fireworks is proud to have powered the inference and RL scaling behind Composer 2.
Quoting Dmytro Dzhulgakov (@dzhulgakov): "Composer 2 beats Opus on TerminalBench at a fraction of the cost. The ingredients: coding focus only, data flywheel, cracked RL team, and infrastructure that can keep up. @FireworksAI_HQ powered the inference and RL scaling behind Composer 2. Scaling RL is still genuinely hard, and we're proud we could help make it less so. Congrats to @cursor_ai on shipping a great model!"
5 replies · 9 reposts · 179 likes · 33K views