kustrun
@kustrun
173 posts
prev 👨‍💻 @kriptal_io
Joined February 2023
834 Following · 122 Followers
kustrun@kustrun·
@svpino Agree! This works well for unit tests where everything is in the local repo and available to the agent. For E2E tests across multiple repos and services (potentially external!), it gets more challenging - you need to prepare clear inputs and expected results upfront.
Santiago@svpino·
Agentic coding is the ideal mechanism for enforcing TDD and being strict with it. Here is a summary of the workflow I coded as part of my implementation skill in Claude Code:

1. Before writing any code, write a test that fails
2. Run the test and ensure it fails
3. If you get an error, fix the test until it runs but fails
4. Once the test fails, write the code
5. Run the test and ensure it succeeds
6. If the test fails, go back to step 4
7. Once the test succeeds, verify that the task is complete
8. If the task is not complete, go back to step 1

This loop forces the agent to write tests before writing any code, and tries to keep each test as simple as possible. This is token hungry upfront, but it has many advantages:

1. Simpler code
2. More modular code
3. Fewer bugs and regressions later
4. Easier to troubleshoot
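The red-green loop above can be sketched in miniature. The function and test below are illustrative assumptions, not the actual skill code; the point is only the ordering (test first, watch it fail, then implement):

```python
# Step 1: this test is written FIRST. Against an empty stub it fails,
# which is what steps 2-3 verify before any real code is written.
def test_add():
    assert add(2, 3) == 5

# Step 4: only after seeing the test fail does the implementation appear.
def add(a, b):
    return a + b

# Step 5: run the test again and confirm it now passes.
test_add()
print("test passed")
```

Each iteration of the loop adds one such failing test, making it pass, and then checks whether the overall task is done.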
kustrun@kustrun·
@johncodes The more we let AI handle the coding, the less we truly understand what's being built - until one day we realize we've only been supplying ideas, and we don't know the code details anymore. It's a quiet trade-off that can slowly sneak up on you. 🐸
John McBride@johncodes·
I've experienced something that makes me want to stop all my AI agent coding use:

1) I see elevated "Overloaded" and 529 errors from Anthropic.
2) I check and, yes, Anthropic is down.

I then legitimately feel like I can't do anything. Crippled.

And before you reply "har har skill issue!", I'm ex-FAANG, have 10+ years of experience building cloud and infra technologies, successfully exited a startup last year, and I maintain open source software you rely on every single day. And yet, I feel crippled when I can't rely on the big ol' "agent do a thing" button.

Over the last 2 months, building a new company with my co-founder, I've leaned in: like, REALLY leaned in. Probably 90% of what we've been shipping has been AI generated. It's not all perfect, but it's been a really good way to go from 0 to 1 and get early validation in the market.

But if I can't rely on tokens, if I feel crippled by a lack of inference providers, where does that leave me? Have I outsourced my thinking and abilities to AI and, at worst, to companies who want to sap my intelligence for a chance to keep me dependent?

This is no different with open weight models on local hardware: what happens if a GPU falls off the rack? (And trust me, this hardware dies all the time. Back when we operated a whole T4 GPU inference RAG pipeline, these would die and get dropped from the cluster all the time.) Open weight models also aren't open source: I can't study the model or its training data, I can't reproduce it, I can't make modifications, and the licensing is anything but free (as in freedom).

This is like having an IDE that has a very, very steep price to pay AND is dependent on cloud services to be "on". I don't think this is a good paradigm for the industry at large.
kustrun@kustrun·
@adocomplete @_catwu Agree! It would be very helpful to have an option to restore a stashed prompt, because sometimes it gets lost: it's not in the history (can't be found with the ⬆️/⬇️ keys) because it was never submitted, and once the stash is cleared you need to retype it. 🙁
Ado@adocomplete·
@_catwu ctrl+s ftw!
cat@_catwu·
my three favorite claude code shortcuts:

1. `!` prefix runs bash inline. the command + output land in context
2. `ctrl+s` stashes your draft. type something else, submit, and it pops back
3. `ctrl+g` opens the prompt (or plan) in $EDITOR for bigger edits
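Shortcut 3 picks up whatever editor `$EDITOR` points at, so it is worth setting explicitly. A minimal shell config sketch (the editor choice here is an assumption, not part of the thread):

```shell
# In ~/.bashrc or ~/.zshrc: the editor ctrl+g should open.
# Any terminal editor works; vim is just one common choice.
export EDITOR="vim"
```

GUI editors need a flag that blocks until the file is closed, or the CLI gets control back before you finish editing.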
kustrun@kustrun·
@alexoakdev Something I noticed - the biggest token burn happens when I run out of tokens in the current 5h session and prompt "continue" in a new session. Even though nothing has happened yet, 10-20% of the tokens are usually already gone. I doubt context reloading uses that much.
Alex Oak@alexoakdev·
Does anybody else feel like Claude Code keeps nerfing their usage limits??? I have the $100/month plan and 30 minutes of coding now uses up 60% of my session limit... A few months ago this would have only used like 5% I think. Anybody else feeling the same way?
kustrun@kustrun·
@eplurubusnullus @theo Smart hack! 💰 Nothing wrong with getting the most out of what you're paying for if the option is right there. And if it's a problem, they could easily limit just the power users doing this. No reason it should affect everyone else.
Eplurubusnullus@eplurubusnullus·
@kustrun @theo They’re trying to stop people from “buffering” a side project prompt while they actively work another project or two, which they run near the end of the 5h mark in order to get their money’s worth.
Theo - t3.gg@theo·
Claude Code has regressed an absurd amount in the last few days. Timestamps no longer update unless you un-focus/re-focus the tab. "thinking" doesn't show at all. I had a query run for 6 minutes with 0 output. This is genuinely unpleasant to use.
Georgios Konstantopoulos@gakonst·
other ppl got ahead of me since last lunch break and i need to resist the nerdsnipe, but respect: @AmpCode is by far, for me and the rest of the Tempo eng team, the best agent we've used. also epic to see @__tinygrad__ on the leaderboard, big fan and think they're on to something.
Shual@0xShual

> anthropic releases a new notoriously difficult exam
> 3 hours in, only 6 people are on the leaderboard
> basically the entire @paradigm team

lol interestingly they are yet to beat opus 4.5

oh and only @gakonst so far is under the recruiting threshold jfc

kustrun@kustrun·
@gregpr07 Gastown (github.com/steveyegge/gas…) is very promising and rising in popularity. Still in alpha so you might hit some minor hiccups, but overall an awesome tool! And the author @Steve_Yegge has a real talent for writing blog posts - reading them is very enjoyable.
Gregor Zunic@gregpr07·
Last few days all I’ve been thinking about is how I can run agents 24/7. What’s the best background agent orchestration layer?
kustrun@kustrun·
No more excuses. ‼️ Dive into Claude Code. Learn to prompt. Become an active citizen. Learn to work with AI, or get left behind. 🚀
kustrun@kustrun·
We all report bugs 🐞 and feature requests 📜. This is just the first step! Agentic tools like Claude Code 🤖 now let us go one step further - jump into unknown codebases and actually help make them better. 🧵
kustrun@kustrun·
Was looking for a task manager tool for CC when I discovered beads (github.com/steveyegge/bea…). The next day, gastown (github.com/steveyegge/gas…) launched - and it solves the exact problems I have been struggling with. Impressive work ‼️ THANK YOU for building and sharing this. 🙏
Steve Yegge@Steve_Yegge·
Day 3 of Gas Town. Community is taking off like fire, and people haven't even come home from the holidays yet. steve-yegge.medium.com/the-future-of-… is my latest post on Gas Town and how it's going to shape agentic coding in 2026.
Brooke Lacey@brookejlacey·
And if you find yourself (like I have) balls deep in AI/ClaudeCode Twitter (AICCT 😆), just start replying with comments and questions on anyone's post and bam! You're in the community. It's so fun here!
Numman Ali@nummanali

There is no private "community" for Claude Code.

Ignore all these private channels that you need to request to join - it's bullshit.

X is the only place you need to be - like others have said, we're all in a big group chat.

Plus, you have the CREATOR of Claude Code @bcherny here!

kustrun@kustrun·
@fede_intern @ethereum Impressive progress! 🔥 While checking the metrics on ethproofs.org, I noticed a sharp drop in proving latency (from ~5m to ~30s) and cost (from ~0.08 to ~0.015) around October 27th. Would you happen to know if there was a major breakthrough or change at that time?
Fede’s intern 🥊@fede_intern·
Proving an @ethereum block costs about one cent today. At the beginning of 2025 it cost around 1.7 dollars. That is a 170x reduction in cost. Proving time has also dropped from roughly 16 minutes to about 1 minute, using a single GPU. That is a 16x reduction in latency, achieved in just one year.

The public and private coordination around Ethereum, driven by the @ethereumfndn together with the @eth_proofs effort, is likely one of the most successful initiatives in the entire industry. Thanks to the sustained investment Ethereum has made in cryptography, zero knowledge is about to become massive well beyond blockchains too. This should happen in the next 3 to 5 years.

Alternative L1s that focused primarily on increasing hardware requirements were clearly mistaken and very short-sighted in criticizing Ethereum for its correct strategy.

Lean Ethereum @leanEthereum, which has a clear ZK strategy, is going to surprise a lot of people. After its presentation in Bangkok, it was criticized for taking years of research, often by people who did not understand that most of this work had to be specified and deeply researched first. Most competitors of Ethereum will end up reusing it.

I have been consistently critical of certain engineering choices and of the political and social layer of our lovely Ethereum. However, I'm extremely bullish on Ethereum's research direction and roadmap, to the point that those issues become secondary. Lambda, together with its partners, will deliver a strong zkVM and a serious execution client (@ethrex_client) and lean client (ethlambda).

Viva Ethereum!
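The reductions quoted above follow directly from the post's own figures, and can be sanity-checked in a few lines:

```python
# Sanity check of the cost and latency reductions claimed in the post.
cost_2025_start = 1.70    # dollars per proven block, early 2025
cost_today = 0.01         # "about one cent" today

latency_start_min = 16.0  # roughly 16 minutes per proof, early 2025
latency_today_min = 1.0   # about 1 minute today, on a single GPU

cost_reduction = cost_2025_start / cost_today              # ~170x
latency_reduction = latency_start_min / latency_today_min  # ~16x

print(f"cost: {cost_reduction:.0f}x, latency: {latency_reduction:.0f}x")
```

So the headline "170x cheaper, 16x faster" is internally consistent with the raw numbers given.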
kustrun@kustrun·
@adocomplete Adding an option to "debug" individual parts to see what is consuming tokens would be incredibly valuable. Or even suggesting improvements/optimizations! At the moment, I believe you have to inspect files manually to figure that out, right?
Ado@adocomplete·
Advent of Claude Day 8 - The /context Command

Want to see what's eating your context window? Type /context to see exactly what's consuming your token window, including:

* The system prompt
* MCP servers
* Memory files
* and more

It's like having X-ray vision for your tokens.