Zach Gollwitzer
@zg_dev
5.5K posts

⊟ Building @maybe ○ Running https://t.co/crOIdOlqX0, https://t.co/tKra6hDDCk

Cincinnati, OH · Joined December 2020
486 Following · 3.2K Followers
Zach Gollwitzer @zg_dev
@aarondfrancis “That’s a really important distinction. You’re not only thinking about the code, but also demonstrating great management skills”
bdougie on the internet
@zg_dev Paragraphs 1 and 2 are tapes. Paragraph 3 is interesting and is the value users will get from tapes. Let me know if you’d be interested in a demo of what we are shipping next.
Zach Gollwitzer @zg_dev
Prob being worked on already, but seems like the next step for agent harnesses is to ship with a low-cost, built-in logger that can be hooked up to any sink.

Every key idea, decision, and prompt, synthesized in an append-only commit log (think Kafka-esque), with standard message structure, in source control for later context retrieval.

The “prompts as spec” idea only makes sense if the prompts can be processed and transformed later. No human or agent can handle a never-ending stream of prompts that may or may not reflect reality, but you could certainly extract some value from a structured log of non-code changes (not trying to reinvent git here) that can be piped through any transformations (think automated status updates to business stakeholders as the code is shipped).
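The built-in logger idea above can be sketched minimally. This is a hypothetical illustration, not an existing harness API: the file name, the event shape, and the `append_event`/`events` helpers are all assumptions, meant only to show the append-only, Kafka-esque structure the tweet describes.

```python
import json
import time
import uuid

# Hypothetical sketch: every key idea, decision, or prompt becomes one
# structured message appended to a plain JSONL file that can live in
# source control. The file is only ever appended to, never rewritten.

LOG_PATH = "agent-log.jsonl"  # assumed filename, one JSON object per line

def append_event(kind, summary, detail=""):
    """Append one structured event to the log and return it."""
    event = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "kind": kind,        # e.g. "idea", "decision", "prompt"
        "summary": summary,
        "detail": detail,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(event) + "\n")
    return event

def events(kind=None):
    """Replay the log, optionally filtered by kind -- the hook where later
    transformations (status updates, context retrieval) would plug in."""
    with open(LOG_PATH) as f:
        parsed = [json.loads(line) for line in f]
    return [e for e in parsed if kind is None or e["kind"] == kind]
```

A downstream transformation (say, an automated stakeholder summary) would then just be a pure function over `events()`, which is what makes the append-only structure worth committing.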
BekahHW @BekahHW
@zg_dev You should check out tapes.dev. Not exactly what you’re talking about, but I think it takes it a step further. Also, check out what @bdougieYO’s been doing with it. He has a couple of x articles and some on papercompute.com/blog
Zach Gollwitzer @zg_dev
This could also just apply to code commits in general: instead of the agent leaving trails of comments everywhere, the code itself stays uncluttered, and non-obvious pieces of functionality are automatically documented and correlated in the log.
Thorsten Ball @thorstenball
Lately, whenever I open this app and see the latest tricks, and hacks, and notes, and workflows, and spec here and skill there, I can't help but think: All of this will be washed away by the models. Every Markdown file that's precious to you right now will be gone.
Zach Gollwitzer @zg_dev
@quangsg @rahulj51 Just generally talking about the test suite as the spec! I don't think historical industry adoption is necessarily indicative of where things are headed here. I never loved TDD myself, but when building with AI it makes a lot more sense. It's a "non-negotiable spec"
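The “non-negotiable spec” framing can be made concrete with a toy example. `slugify` here is hypothetical, not from the thread; the point is only that the test is written first and pins down behavior regardless of who, or what, writes the implementation afterward.

```python
# Test-first sketch: the test below is the "spec" -- it exists before any
# implementation and is non-negotiable, whether a human or an AI agent
# writes the code that satisfies it.

def test_slugify():
    # The spec: slugify lowercases, trims, and joins words with hyphens.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Already-Slugged  ") == "already-slugged"

# A minimal implementation an agent might produce against that spec.
def slugify(text):
    return "-".join(text.strip().lower().split())

test_slugify()  # fails loudly if the generated behavior ever drifts
```

Unlike a prose spec, this one is executable and deterministic, which is exactly what makes it usable as a guardrail for AI-generated code.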
wangsg @quangsg
@zg_dev @rahulj51 What is the adoption rate of TDD in the industry? Don't confuse TDD and tests :)
Rahul Jain @rahulj51
I don't know. Hard for me to agree with the spec-is-code argument. Specs are low fidelity. They aren't a common shared DSL. Have very low signal to noise. Not verifiable or deterministic. They don't encourage iterative work. Almost always written by an agent, thus prone to more slop. Lose their inherent value as soon as they are converted to code.

Specs are throwaway. They are a way to temporarily express behavior intent; an agent translates that to a version of code. That's the thing that lives forever.

If you are convinced that spec is code, then you should be able to confidently delete all code and regenerate from specs.
Zach Gollwitzer @zg_dev
Yeah, I think there's a place for these artifacts, potentially committed to source control in an append-only log format. But they don't feel useful in the "hot path" of day-to-day development. I see these more as something you come back to as needed, letting the AI agent explore them to figure out what happened in the past.
glylesterol @glylesterol
@zg_dev @rahulj51 Personally find specs useful to group together design + implementation choices + failed explorations + future use cases, all in one place. If it were in code, this information would be scattered over multiple files or would be a mess of (peripheral) comments.
Zach Gollwitzer @zg_dev
Yeah, I think I agree with all this; there's just a ton of risk embedded at each step, given the uncertain latency of these tools, that doesn't exist in a conventional interview where you're writing all the code yourself. At least for me, time management with AI tools is wayyy harder in practice than I originally thought. They feel insanely fast and productive, but 45 min still feels extremely compressed even for AI agent work.
Jeremy Kreutzbender @J_Kreutzbender
@zg_dev @robzolkos Both paths, refactoring or debugging, should give a good signal to an interviewer that you have a grasp on how to interact with generated code
Zach Gollwitzer @zg_dev
I’m torn on AI interviews. The candidate is forced to decide “how hard do I push this AI?”, which lives on a broad spectrum. Don’t use it at all? You risk being labeled “behind”. Use it too much? You risk being labeled “sloppy”.

But as many who use AI know, the middle barely exists. It’s extremely hard to scope a unit of work that is 50% AI-driven (especially in a fixed interview time period). You either commit to 10x the scope, heavily lean on AI, and accept the “slop cost” in service of demonstrating your ability to think at a higher architectural level, or you take the 1x scope and lean in on your ability to demonstrate crisp, clean implementations. In the former case, an engineer who truly cares about great code is made out to be “sloppy” and careless, while in the latter, the engineer is made out to be “too slow for the times”.

So ironically, in a world where we are increasingly using AI to write our software, I’m not certain that AI interviews are the best signal of talent. At least in my experience, I feel as though it misrepresents the candidate. I’m sure companies will learn how to incorporate it better as we move forward, but we’re in a very confusing moment right now.
Zach Gollwitzer @zg_dev
@rahulj51 Yeah, I think we’re all blending 3-4 different conversations into one: net-new features vs. greenfield codebases vs. very simple CRUD features vs. long-term maintenance of apps. AI has a very, very different role in each of these, but we all argue as if they’re the same.
Rahul Jain @rahulj51
Same. My very, very cynical take is that a lot of tech-adjacent people are now able to code using agents, and this is great. But they are also mostly building peripheral net-new features in an existing system (if not a completely new system), instead of building complex features that cut across multiple subsystems. And perhaps that's creating this bias toward specs: a spec written by an agent, with all the code snippets etc., makes it feel like the agent knows what it's doing.
Zach Gollwitzer @zg_dev
As in, you demonstrate that you see the slop and address it proactively? I’d agree with that, but it’s tremendously hard to do in a fixed interview time period given the variable latency of code generation and non-deterministic outputs. In my case, I ran out of time before I was able to address the slop, so I just had to run with it and call it out.
Rob Zolkos @robzolkos
@zg_dev Reframe it to: using it so much that I have systems and guardrails in place to detect and address slop/low quality.
Zach Gollwitzer @zg_dev
@robzolkos I go through a stack of 10-20 sheets of printer paper every week just sketching out ideas. Totally underrated, I agree.
Zach Gollwitzer @zg_dev
It’s funny how through all the cycles of worktrees, subagents, skills, ralph loops, lobsters, and context engineering, a good ole default claude code instance + nvim/tmux + some hard human thinking still seems to accomplish more in the long run.
Zach Gollwitzer @zg_dev
@boristane You put the humans at the edges. Control the inputs, measure the outputs. A streaming data pipeline, where the messages are code fragments
Zach Gollwitzer @zg_dev
@boristane When you push code to prod that you have not read, the only logical next step is to have an army of 2nd- and 3rd-line-of-defense observability tools to measure the “slop stream” (code as a stream), which helps allocate human devs more effectively
Polymarket @Polymarket
BREAKING: META acquires Moltbook, a social network built for AI agents.
Zach Gollwitzer @zg_dev
@thdxr What inflection points have you guys had while building OpenCode in regards to this (changes in product vision, etc.)? It's a genuinely existential question and I think we're all feeling it so hard rn after this "experiment" we've been doing with our brains the last few years