Zach Brock
187 posts

Zach Brock
@z
Member of Technical Staff @openai.
San Francisco, CA Joined March 2007
533 Following 24.6K Followers
Pinned Tweet


Learning how to use coding agents effectively is the most interesting engineering problem in the world right now.
The solution @alex_frantic came up with for our team is Symphony.
I think Symphony has a few really interesting ideas embedded in it:
1. The approach itself. Giving coding agents access to task tracking and changing their goal to "convince a human to merge this code" is the clear next phase of software engineering.
2. Software as a spec. Instead of code, Symphony is first a spec.md that you can materialize into any programming language you want by passing it to your coding agent of choice. This is an early demonstration of a new way I expect open-source software to be developed and shared in the future.
3. Lowering the cost of code. When reliably kicking off a feature or bug fix is something you can do from your phone in a few seconds, it radically changes your relationship with product prioritization and exploration.
Read the whole blog post below and let me know what you think.
Zach Brock retweeted

@alex_frantic really did an incredible job with github.com/openai/symphony

One easy trick to connect 25 Codex agents to Linear and increase your PR throughput by 500% that THEY don't want you to know about
openai.com/index/open-sou…

anyway enjoy 5.5, it's really good
Zach Brock@z
People who join OpenAI are always surprised to learn that we basically ship stuff as soon as it’s ready

Incredibly impressed with the care and craft the team put into this product. It's been a lot of fun to chat with all the (mostly useful) bots people have built internally
OpenAI@OpenAI
Introducing workspace agents in ChatGPT—shared agents that can handle complex tasks and long-running workflows across tools and teams.
Zach Brock retweeted

@nk Got any example conversations? Happy to pass to the team cc @ericmitchellai

@tszzl @kimmonismus Can you just skip to 5.4 so I can get some sleep?

Seriously, I don't get it.
- Today, GPT-5.3 Instant is being released.
- The blog post states at the very bottom that 5.3 Thinking and Pro will also be released very soon.
- An hour later, the official OpenAI X account tweeted that GPT-5.4 will be released very soon.
???
So in a few days we get: GPT-5.3 Thinking + Pro + GPT-5.4 (???) Instant/Thinking/Pro?

I’ve been trying to imagine what the next year of using Codex will look like, and how my perspective on software engineering will change as I transition from computer programmer to harness engineer. There are many shifts, but here are a couple that have stuck with me:
Software dependencies - Large open source systems like Linux and MySQL seem like they will remain just as important, but I wonder if I will start to have different perspectives on smaller software libraries when the functionality can be relatively easily produced and tested with AI. Given the past decade of supply chain vulnerabilities and maintenance issues in open source libraries, will it become a best practice to reduce dependencies and write our own where possible?
Documentation - When I built a product before, the “specification” was split between docs, Slack, Figma, and Linear — but the vast majority of behavior was specified in code, i.e., the long tail of functionality is an emergent property of the code I write. The conundrum with agent-produced code is that it’s not clear which parts of the code were prompted (i.e., specified) and which parts were “vibed” (i.e., unspecified). That seems problematic when continuously evolving a large system over time, because the harness will “forget” past instructions. I don’t think replaying prompts is correct either, because in a single Codex session a good chunk of interactions are interactive and effectively transient. My intuition is that documentation will be as important an output of my Codex sessions as the code itself, capturing the substantive product decisions made during the session. Those docs clearly need to live directly in the repo, versioned with the code and available as context for future sessions. The docs / context discussion in OpenAI’s recent post on harness engineering resonated with me and maps to my intuition: openai.com/index/harness-…
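The "docs as a session output, versioned with the code" idea above could be sketched like this. This is a minimal illustration, not a real Codex feature: the helper name, fields, and docs/decisions/ layout are all assumptions.

```python
# Hypothetical sketch: write the substantive decisions from an agent session
# as a dated markdown record under docs/decisions/, committed with the code,
# so future sessions can load them as context.
import datetime
import pathlib
import tempfile


def record_decision(repo_root, title, context, decision):
    """Write a decision record that a future session can read back as context."""
    day = datetime.date.today().isoformat()
    slug = title.lower().replace(" ", "-")
    path = pathlib.Path(repo_root) / "docs" / "decisions" / f"{day}-{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(
        f"# {title}\n\n## Context\n{context}\n\n## Decision\n{decision}\n"
    )
    return path


# Usage: commit the returned file together with the code change it explains.
repo = tempfile.mkdtemp()
rec = record_decision(
    repo,
    "Retry policy for uploads",
    "Uploads fail intermittently on flaky networks.",
    "Retry three times with exponential backoff; surface the last error.",
)
```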

@martin_casado We wrote a bit about how we approached this problem here: openai.com/index/harness-…

The gap between what AI coding is good at and what it isn't is getting starker.
It's very good at the basic engine stuff you'd want to build along the way: tooling, testing, basic engine design, frameworks, etc.
But it's really not good at anything where runtime understanding matters. I've seen this working on a splat renderer and a multiplayer backend for a game engine. In both cases, the AI produces a pretty reasonable guess, but lacking an actual understanding of the runtime semantics, the results are basically unusable.
This produces something of a dilemma: the better it gets at what can be derived from syntax alone, the more disconnected we are when we actually need to design around runtime semantics.
To manage this, I've started including schema designs, notes on state consistency, and runtime traces in what I give the LLMs. It's not perfect, and I still need to be in the code a lot, but it helps to start pulling semantic dependencies like these out into context.
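The context-bundling described above could be sketched as follows. Everything here is illustrative — the function name, section headings, and example trace are assumptions, not a description of any particular tool:

```python
# Sketch: bundle schema design, state-consistency invariants, and a captured
# runtime trace into the context handed to a coding agent, so it has some
# grounding in runtime semantics rather than syntax alone.
def build_agent_context(task, schema, consistency_notes, trace_lines):
    trace = "\n".join(trace_lines[-50:])  # keep only the most recent events
    return (
        f"## Task\n{task}\n\n"
        f"## Schema\n{schema}\n\n"
        f"## State-consistency invariants\n{consistency_notes}\n\n"
        f"## Runtime trace (most recent events)\n{trace}\n"
    )


# Usage: a hypothetical multiplayer-desync task with a two-line trace excerpt.
ctx = build_agent_context(
    "Fix desync when two clients edit the same entity.",
    "Entity { id: u64, version: u32, pos: Vec3 }",
    "version must increase monotonically per entity; writes are last-writer-wins per field.",
    [
        "t=101 client A writes id=7 version=3",
        "t=102 client B writes id=7 version=3  (conflict)",
    ],
)
```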

V2.1 of @slashlast30days is out. Now with @OpenClaw, free @YouTube transcripts and a Codex Skill.
1. @openclaw + watchlists - automated research via cron jobs on your competitors, people, and topics
2. YouTube transcripts as a 4th source
3. Works in OpenAI Codex