
Magnus



@d4m1n i'm a bit confused why so many people say api tokens are sold at a loss. this isn't true - per-token prices are incredibly expensive compared to the GPU time cost; there's potential for ~90% margin depending on the model
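A back-of-envelope sketch of the margin claim. Every number here is made up for illustration (GPU rental price, batched decode throughput, API price); the actual margin depends entirely on the model and the provider's real costs.

```python
# Toy gross-margin math for API token serving - all numbers are assumptions.
GPU_COST_PER_HOUR = 2.00          # assumed hourly rental price for one GPU
TOKENS_PER_SECOND = 1000          # assumed aggregate throughput with batching
PRICE_PER_MILLION_TOKENS = 10.00  # assumed API price for output tokens

tokens_per_hour = TOKENS_PER_SECOND * 3600
revenue_per_hour = tokens_per_hour / 1_000_000 * PRICE_PER_MILLION_TOKENS
margin = 1 - GPU_COST_PER_HOUR / revenue_per_hour
print(f"revenue/hr: ${revenue_per_hour:.2f}, gross margin: {margin:.0%}")
```

With these assumed numbers the margin lands around 94%; drop the batched throughput by 10x and it flips well below 50%, which is why both sides of this argument can point at real deployments.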



Registered a new domain a couple days ago and it already picked up 50 scammy referring domains. If building systems to handle this isn't on your radar yet, NGMI.




We've reached an agreement to acquire Astral. After we close, OpenAI plans for @astral_sh to join our Codex team, with a continued focus on building great tools and advancing the shared mission of making developers more productive. openai.com/index/openai-t…



And when I say Vibe Coding... I mean, vibe CODED. Launching a free trial next week - after my SMX Paris session - to a few followers. Hit me up in DMs.


Question for AI engineering community: what is the current best practice for giving a single agent access to a potentially unbounded number of skills? Goals are (in priority order) 1. Maximize skill use accuracy 2. Minimize context use 3. Minimize unnecessary tool calls
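One commonly discussed pattern for this (a sketch of a candidate approach, not a settled best practice): keep the full skill catalog out of the context window, retrieve the top-k relevant skills per task, and expose only those as tools. Production systems typically rank with embeddings; this toy version uses keyword overlap so it runs stand-alone, and every skill name and description below is invented.

```python
# Hypothetical skill catalog - in a real agent this could be thousands of
# entries living outside the context window.
SKILLS = {
    "resize_image": "resize or scale an image to given dimensions",
    "send_email": "send an email message to a recipient",
    "query_sales_db": "run a sql query against the sales database",
}

def top_k_skills(task: str, k: int = 2) -> list[str]:
    """Rank skills by naive keyword overlap with the task description."""
    words = set(task.lower().split())
    scored = sorted(
        SKILLS,
        key=lambda name: -len(words & set(SKILLS[name].split())),
    )
    # Only the top-k skill schemas would be placed in context as tools,
    # trading a little retrieval accuracy for a lot of context savings.
    return scored[:k]

print(top_k_skills("scale this image to 512x512"))
```

This directly targets goal 2 (context use); goals 1 and 3 are where the retrieval quality matters, since a missed skill forces the agent into fallback tool calls.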


Though bash is a perfectly valid REPL, the amount of time coding agents lose during experimentation by iterating on scripts instead of using a Jupyter-like in-memory REPL is absurd. Fixing one local bug shouldn't require restarting the whole job. We need better scaffolds.
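A minimal sketch of the in-memory idea using only the stdlib `code` module: the agent holds one live interpreter, so re-running a fixed statement reuses the state already in memory instead of replaying the expensive setup. The surrounding agent loop is assumed, not shown.

```python
import code

# One persistent interpreter per agent session; state lives across "edits".
repl = code.InteractiveInterpreter()
repl.runsource("data = list(range(1_000_000))")  # expensive setup, runs once
repl.runsource("total = sum(data)")
# A later fix to a downstream step reuses `data` and `total` directly,
# instead of restarting the whole script from the top:
repl.runsource("print(total // len(data))")
```

Contrast with the bash-script workflow, where the equivalent of `data = list(range(1_000_000))` is re-executed on every iteration of the fix.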



We are investigating reports of higher-than-expected usage drain for Codex when WebSockets are enabled. We will provide updates as we go.












