Sergii Guslystyi

1K posts

@JuiceSharp

Software Architect • AI Systems that Work • Productivity tools | Chess • Father on the journey🙏✨

Florida, USA · Joined January 2010
245 Following · 279 Followers
Sergii Guslystyi@JuiceSharp·
I am migrating our workflows from commands/subagents to the skill-based variation. Say I have a skill A that should use two other specialized skills, B and C. Those two are not available for direct execution by humans and must have an isolated context, so both are marked `user-invocable: false` and `context: fork`. Skill A has explicit instructions to use skills B and C via the Skill tool. But when skill A runs and delegates to skill B via the Skill tool, skill B is unfortunately loaded into skill A's main context. Maybe you can clarify, @trq212: am I doing anything wrong? Am I still forced to use the Agent tool (explorer, general-purpose) instead of the Skill tool inside skill A to achieve the desired result? I was under the impression that if the Skill tool calls a "forked" skill, it MUST be forked.
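For reference, the skill frontmatter shape I'm describing looks roughly like this (a sketch; treat the exact key names and file layout as my assumption, not confirmed docs):

```yaml
# skill-b/SKILL.md frontmatter (illustrative sketch; key names are an assumption)
---
name: skill-b
description: Specialized helper, meant to be called only from skill A
user-invocable: false   # humans cannot invoke it directly
context: fork           # expected to run in an isolated (forked) context
---
```

The expectation was that `context: fork` alone would force isolation whenever the Skill tool loads this skill.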
Sergii Guslystyi@JuiceSharp·
rentry.co/kxe54p88 The /annotate-project command analyzes the existing codebase and generates all files in one pass (3 stages with verification, and it can definitely be improved or tuned for a purpose). The same command regenerates them if the architecture changes. No manual writing required.

On updates: those are rarely needed (or are maintained by the team lead or architect). The content is architectural guardrails, not implementation details. If you're not listing filenames or utility deps, what goes stale? The base class hierarchy? The dependency flow between layers? A checklist? These change maybe once a year in a brownfield project, and when they do change, it's an architectural decision worth documenting anyway.

A/B testing: I haven't run a formal A/B, but the value is observable in several ways:

Context window efficiency. CC auto-loads CLAUDE.md files as it navigates into directories. Without them, the agent (and its subagents) spends tokens reading 10-20 source files to infer the architecture before it can start working. With them, it gets the architectural understanding in 70 lines and spends the remaining context budget on the actual job. On a complex project, that's the difference between the agent having room to complete a task and running out of context mid-way.

Cross-layer knowledge. Here is an example: an agent reading DataRepository.cs has no way to know it also needs to register a BSON class map in DataLayerConfiguration and add a migration in Initializer. The files encode connections between layers that don't exist in any single source file.

Session-to-session consistency. Every CC conversation starts fresh (even research_codebase, to some extent). Without these files, each session rediscovers the architecture by reading whatever files it happens to find, and might infer it differently each time. With them, every session starts with the same canonical understanding.

Scaling autonomous operation. The more you want CC to work autonomously, the more it needs correct architectural understanding upfront. The files aren't documentation in the traditional sense... they're steering instructions for the agent. That's why they contain pattern shapes with code examples and boundary constraints rather than explanations. They exist to constrain decision-making, not to teach. Architectural glue that prevents the agent from going its own way.
dex@dexhorthy·
cool, if it's working for you and you're able to keep all the sub-mds up to date, that's great. how often do you update them? I know you say it doesn't go out of date etc., but have you A/B tested if they add value? like, try the same workflow without them? how do you generate them? is it just pattern following? do you have claude md guidance to update the file during regular operation, or is it a separate workflow step?
dex@dexhorthy·
we've been trying a bunch of stuff. this one kinda works.
Sergii Guslystyi@JuiceSharp·
Thanks for sharing... I respectfully disagree with both points from the presentation (work harder to find better reasons :))

1) "Gets out of date": only if the files contain things that change. Ours don't. We document architectural guardrails: layer boundaries, dependency flow, pattern shapes, "Adding a New X" checklists. We exclude filenames, utility dependencies, linter rules, and anything discoverable from reading code. In brownfield projects with established architecture, these guardrails have a half-life measured in years, not weeks.

2) "Too long or under-specified": this is a real problem for a single monolithic file, but distributed files eliminate the tradeoff. Each subfolder file is capped at 100 lines and scoped to one layer. The root's sections are the project map, dependency flow, commands, and cross-layer conditionals that reference subfolder files for details. Short enough to be fully consumed, specific enough to be actionable.

A subfolder's CLAUDE.md looks like:
0) Responsibility: 1-2 sentences on what this layer (subfolder) does
1) Dependencies: only layers that shape how you write code
2) Consumers: who uses this layer and how
3) Module structure: a 4-7 entry tree with architectural annotations, no file lists
4) Pattern (with code): an idiomatic example showing the shape, not a copy-paste
5) Boundaries: "NO direct DB access", "NO Result in repos"; an "Adding a New X" checklist, only shown when relevant

Nothing in that outline goes stale unless the architecture itself changes! Works like glue for the agent.

P.S. I've run this on multiple complex enterprise brownfield codebases; the pattern holds.
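A concrete subfolder file following that outline might look like this (a hypothetical sketch; the layer and module names are made up for illustration):

```markdown
# DataLayer — CLAUDE.md (illustrative sketch; names are hypothetical)

## 0) Responsibility
Persistence layer: repositories and MongoDB mappings. No business logic.

## 1) Dependencies
Domain (entities only). Nothing else shapes code written here.

## 2) Consumers
ServiceLayer, via repository interfaces only.

## 3) Module structure
- Repositories/      # one repository per aggregate root
- Configuration/     # BSON class maps, index definitions
- Migrations/        # schema and seed migrations

## 4) Pattern (with code)
Repositories return domain entities, never wrappers; mirror the shape of
the existing repositories rather than copy-pasting one.

## 5) Boundaries
- NO direct DB access outside this layer
- NO Result in repos
- Adding a new repository: create the repo, register its BSON class map
  in DataLayerConfiguration, add a migration in Initializer
```

Note how the "Adding a new repository" checklist is exactly the cross-layer knowledge that no single source file encodes.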
Sergii Guslystyi@JuiceSharp·
@dexhorthy I found this approach (multiple CLAUDE.md files) works surprisingly well in the projects I've worked on since July of last year, when our organization adopted spec-driven development. In practice, Claude Code reads the corresponding CLAUDE.md files and only touches files in a specific folder, so this approach to layered memory is lazy (read on demand), highly concise (progressive disclosure based on the hierarchy, as all files are short, under 100 lines), and quite easy to create (see the link above) and maintain project-wide, as the files follow the natural structure, app layers, and established project architecture and patterns. Did you try this approach on your projects? I want to know why you rejected it or did not adopt it.
Sergii Guslystyi@JuiceSharp·
rentry.co/kxe54p88 Here is another approach (an alternative to the skill) which is fully compatible with RPI, as it utilizes the existing subagents (locator and analyze). Templates are extended with your conditional tags (where appropriate). Please note the subfolder files focus on guardrails and architecture, which drastically improves the general experience, including maintenance, while the primary CLAUDE.md focuses on general guidance. Interested in your opinion.
Sergii Guslystyi@JuiceSharp·
Agree with most of this, especially "the word spec is broken." I've been running RPIV (Research, Plan, Implement, Validate) with AI agents handling each phase, and I'm seeing the same tension from the inside.

The plan phase in RPIV is not trying to be a spec-that-replaces-code. It's closer to your "something that lets you resteer the model before it slops out N-thousand LOC." The agent researches the codebase, proposes an approach with specific files and patterns, and the human approves or redirects before a single line is written. So it's not a spec. It's a checkpoint.

Where I'd push back: "go back to owning the execution" doesn't scale either. The real problem I keep hitting is comprehension debt. Every cycle where you skip deep engagement, your ability to catch problems in the next cycle erodes. It compounds. The decel way preserves understanding but kills throughput. The accel way ships fast but quietly hollows out the people doing the reviews.

So the actual challenge isn't spec vs no-spec or accel vs decel. It's: how do you keep humans learning inside a process that's designed to remove the friction that learning depends on?
dex@dexhorthy·
damn this is so good and encapsulates everything I've been seeing/saying in the last few months
- a spec that is sufficiently detailed to generate code with a reliable degree of quality is roughly the same length and detail as the code itself - so don't review those things, just review the code at that point, if you care enough about that level of abstraction
- unless you're vibing side projects or prototypes (yes, even zero-to-one software), you ABSOLUTELY SHOULD care about the code at that level of abstraction
- you need to find SOME way to get more leverage over coding agents though, because just reading all that code is a pain, esp when a lot of it is slop
- the default/dare-i-say-decel way is to go back to "i own the execution, and give little things to the agent, check it along the way"
- the accel-but-safe way is to find something - NOT A SPEC (the word "spec" is broken anyway) - NOT 3 INVOCATIONS OF AskUserQuestion - that lets you resteer the model *before* it slops out N-thousand LOC
gabby@GabriellaG439

New blog post: "A sufficiently detailed spec is code" I wrote this because I was tired of people claiming that the future of agentic coding is thoughtful specification work. As I show in the post, the reality devolves into slop pseudocode haskellforall.com/2026/03/a-suff…

Sergii Guslystyi@JuiceSharp·
I am a huge AI proponent and have spent the last few years on governance of AI projects and AI initiatives inside the company. Some of them could be considered successful. I agree with this assessment, but there is an even bigger issue: developers are starting to lose the expertise to control and review what AI produces.

Under normal circumstances, a developer writing code spends most of the time building knowledge: domain-specific knowledge, project-specific knowledge, understanding of edge cases and system behavior. That friction is not a waste; it is how expertise forms. AI speeds up production, but reviewing the output to the same quality level is almost impossible without that comprehension... a huge waste of time, or worse, a false sense of confidence.

So there is a well-known trade-off baked into the process. You cannot have cheap, high-quality, and fast simultaneously. Winning on speed, we lose quality. Fighting for quality, we have to spend more resources or sacrifice the speed. And with AI this dynamic is getting worse over time: the people responsible for control stop evolving and developing along with the tasks they are supposed to govern.

Every cycle where a developer skips deep engagement, their capacity to catch problems in the next cycle erodes a little. It is not a static trade-off. It is a compounding one. You accumulate comprehension debt, not in the codebase, but in people's heads. And unlike technical debt, almost nobody is tracking it.
Ujjwal Chadha@ujjwalscript

Unpopular Opinion: We aren't building the future 10x faster with AI. We are just generating legacy code 10x faster.

Everyone is currently bragging about developer velocity. "I built this entire backend in a weekend!" "AI wrote 80% of my codebase!" But here is the reality check we are ignoring: Code is a liability, not an asset.

If an AI tool spits out 1,000 lines of functional boilerplate in five seconds, that is still 1,000 lines that a human being has to read, review, secure, and maintain when the dependencies inevitably break next year. We are treating code generation like a pure productivity win, but we are optimizing for the wrong metric.

The bottleneck in software engineering was never how fast we could type. The bottleneck has always been comprehension, architecture, and maintenance. If we don't shift our focus from "generation speed" to "architectural sanity," the tech debt of the next five years is going to be an absolute, unmaintainable nightmare.

Gary Marcus@GaryMarcus·
Important comments from Jack Shanahan, a retired US Air Force General who was the first Director of the Department of Defense's Joint Artificial Intelligence Center. You may need to click to see the whole thing.
Excelsior@RavS82·
@TheCineprism And there are only 2 more episodes left. HBO better not leave fans waiting 2 years with these short seasons and short episodes
The Cinéprism@TheCineprism·
Only five episodes in and it’s already the best storytelling this year.
Melissa S. Kearney@kearney_melissa·
Our federal government spends 5x more per capita on the elderly than on kids. Seniors have more wealth than any other age group. Young couples can’t afford to buy homes. US fertility is plummeting. Our federal debt is out of control. And yet -
Senate Republicans@SenateGOP

America’s seniors will see a new $6,000 bonus exemption as a part of the Working Families Tax Cut. That’s $93 billion in tax cuts for seniors all over the country.

Sergii Guslystyi@JuiceSharp·
Name any other country where you don't have to prove your citizenship to vote. If this is a normal practice across the globe, let's adopt it, as this will make the Republican party happy (I believe fairness of the election has to be the priority for both parties, especially when results can come so close). Now, the claim that there is "not a single case" is simply false. The Bipartisan Policy Center found 77 confirmed cases of noncitizen voting between 1999 and 2023. Michigan found over a dozen cases in 2024. The DOJ just prosecuted three more cases this month. So yes, it happens.
Sergii Guslystyi@JuiceSharp·
@johnkonrad We did not have an issue with Canada or any of our other allies for decades until recently. Think about that.
John Ʌ Konrad V@johnkonrad·
After Mark Carney’s latest China deal, I’m starting to wonder if Greenland isn’t just about stopping Chinese ICBMs… …but about making sure our ICBMs don’t get shot down by Canada.
Yuchen Jin@Yuchenj_UW·
I love that Google is helping save Tailwind. LLMs are trained on open-source projects. OSS gave the world so much. Hope more big tech companies do this.
Sergii Guslystyi@JuiceSharp·
As a Republican, I am happy that at least one competent man exists inside the current administration. Prediction level 10 out of 10
Marco Rubio@marcorubio

#Putin still wants to capture #Kyiv & install a puppet govt But when he realizes that’s not feasible he will: 1. Focus on destroying as much of @DefenceU as possible 2. Then offer cease fire that imposes neutrality on #Ukraine & recognizes #Crimea & #Donbas as part of #Russia

dax@thdxr·
what happened to the TOON/GOON/whatever format? we have a bunch of places where we return json to the LLM that i meant to come back to. i know it feels like "json is in the training data", but the point of LLMs is they can figure out stuff they haven't explicitly seen
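The space-saving idea behind those formats can be shown in a few lines. This is a hedged sketch of the general header-plus-rows idea, not the actual TOON spec: for a list of same-shaped objects, emit the keys once instead of repeating them per object.

```python
import json

def to_tabular(rows):
    """Serialize a list of flat, same-keyed dicts as a compact
    header-plus-rows block instead of repeating keys per object.
    Illustrative sketch of the idea behind TOON-style formats,
    not the real TOON specification."""
    keys = list(rows[0].keys())
    lines = [",".join(keys)]                              # keys appear once, as a header
    for row in rows:
        lines.append(",".join(str(row[k]) for k in keys))  # one compact line per object
    return "\n".join(lines)

rows = [
    {"id": 1, "name": "alice", "score": 91},
    {"id": 2, "name": "bob", "score": 84},
]

as_json = json.dumps(rows)
as_tab = to_tabular(rows)
print(as_tab)
print(len(as_json), ">", len(as_tab))  # the tabular form is shorter: keys aren't repeated
```

The savings grow with the number of rows, which is why people reach for these formats when returning large uniform lists to an LLM.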