Xbotter
@Xbotter
572 posts

.NET Engineer × AI-Native Builder
Building Agents & Intelligent Workflows
Helping enterprises land AI successfully. Code w/ 🐱

Shanghai, China · Joined March 2012
423 Following · 123 Followers
Pinned tweet
Xbotter @Xbotter ·
A brief formal introduction: With over 10 years in .NET development, I’ve fully shifted my focus to AI-native applications and enterprise AI workflow implementation. I specialize in turning LLMs and agents into practical solutions with real business impact. Lately, I’ve been exploring tools like codex in depth. Outside of work, I’m a dedicated cat dad 🐱 Happy to connect on .NET + AI + codex topics.
0 replies · 0 reposts · 1 like · 99 views
Xbotter @Xbotter ·
@sudoingX It's been exactly 4 years since the 3090 was released. So, what about your 3090?
0 replies · 0 reposts · 0 likes · 118 views
Sudo su @sudoingX ·
fun fact: 4 years of $20/mo to chatgpt = the price of a used 3090. one leaves you with nothing. the other leaves you with a 3090.
33 replies · 6 reposts · 116 likes · 7.9K views
Yahyavision — Logo & Brand Designer
If you had $20 to invest as a designer, what are you picking?
– Codex
– Claude Code
Heard Claude hits limits pretty fast… but Codex isn’t always the most “creative.”
21 replies · 1 repost · 13 likes · 2.4K views
Tibo @thsottiaux ·
You can now keep codex going for days. With GPT-5.5 it will build an entire OS kernel for you if you ask, or find critical bugs in a codebase, or optimize your database schemas, or… the options are endless.
Felipe Coury 🦀@fcoury

/goal also lands in Codex CLI 0.128.0. Our take on the Ralph loop: keep a goal alive across turns. Don't stop until it's achieved. Built by my co-worker and OpenAI mentor Eric Traut, aka the Pyright guy. One of the GOATs I get to work with daily.

181 replies · 93 reposts · 2.1K likes · 126.4K views
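The /goal behavior described in the quoted tweet can be pictured as a loop that keeps one goal alive across turns and only stops once a completion check passes. Below is a minimal sketch of that idea; `run_agent_turn` and `goal_achieved` are hypothetical stand-ins for illustration, not actual Codex CLI internals.

```python
# Minimal sketch of a "Ralph loop": keep re-invoking an agent with the
# same goal until a completion check passes or a turn budget runs out.
# The helpers below are invented placeholders, not Codex code.

def run_agent_turn(goal: str, state: dict) -> dict:
    """Placeholder for one agent turn; a real loop would call an LLM here."""
    state["progress"] = state.get("progress", 0) + 40
    return state

def goal_achieved(state: dict) -> bool:
    return state.get("progress", 0) >= 100

def ralph_loop(goal: str, max_turns: int = 10) -> dict:
    state: dict = {}
    for turn in range(1, max_turns + 1):
        state = run_agent_turn(goal, state)
        if goal_achieved(state):
            state["turns_used"] = turn
            break
    return state
```

The key design point is that the goal, not the conversation turn, is the unit of work: each iteration re-reads accumulated state rather than starting fresh.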
Xbotter @Xbotter ·
@maria_rcks The secret for a frontier AI company to keep strong user relationships? Just be friendly and engage directly with users.
0 replies · 0 reposts · 1 like · 47 views
maria @maria_rcks ·
people will see:
- tibo resetting limits daily
- sam being a real human and interacting with people (and getting some dunks on)
- the codex team always talking to people who use codex
- giving subs to open source projects & open sourcing their harness
and ask "why are you guys so nice to openai"
24 replies · 13 reposts · 334 likes · 11.4K views
Xbotter @Xbotter ·
@KexinHuang5 So cool! We're exploring similar implementations too.
0 replies · 0 reposts · 0 likes · 13 views
Kexin Huang @KexinHuang5 ·
Introducing agent-managed sandboxes: AI agents autonomously orchestrate fleets of sandboxes to handle massive workloads. This unlocks adaptive scaling, from small tasks to terabyte-scale processing, while minimizing unnecessary cost. With parallel sandboxes, throughput multiplies, and agents can explore multiple ideas simultaneously. Check out our new technical report on this sandbox pattern:
Phylo@phylo_bio

x.com/i/article/2049…

6 replies · 9 reposts · 103 likes · 10.5K views
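The fan-out idea in the sandbox tweet above can be sketched with plain thread pools: split a workload into chunks, run each chunk in its own "sandbox" in parallel, then merge results. `run_in_sandbox` and the chunking scheme are assumptions for illustration, not the report's actual API.

```python
# Illustrative fan-out of one workload across parallel "sandboxes".
# Each worker stands in for an isolated sandbox; results merge at the end.
from concurrent.futures import ThreadPoolExecutor

def run_in_sandbox(chunk: list) -> int:
    # Pretend each sandbox processes its slice independently.
    return sum(chunk)

def orchestrate(workload: list, n_sandboxes: int = 4) -> int:
    size = max(1, len(workload) // n_sandboxes)
    chunks = [workload[i:i + size] for i in range(0, len(workload), size)]
    with ThreadPoolExecutor(max_workers=n_sandboxes) as pool:
        # Parallel sandboxes multiply throughput for divisible workloads.
        return sum(pool.map(run_in_sandbox, chunks))
```

The "adaptive scaling" part of the pattern would amount to choosing `n_sandboxes` from the workload size instead of fixing it.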
Nick Dobos @NickADobos ·
Codex updates 👀
- new ralph loop via /goal
- new /side chat, aka /btw mode
6 replies · 2 reposts · 58 likes · 3.9K views
Xbotter reposted
OpenAI @OpenAI ·
It's never been easier to do everyday work with Codex. Choose your role, connect the apps you use every day, and try suggested prompts. Codex helps with everything from research and planning to docs, slides, spreadsheets, and more.
217 replies · 244 reposts · 3.4K likes · 526K views
Xbotter @Xbotter ·
@thsottiaux Tibo, just open up custom UI components. It would give codex endless possibilities.
3 replies · 0 reposts · 28 likes · 3.3K views
Tibo @thsottiaux ·
@Xbotter Fancy, I like it
4 replies · 0 reposts · 92 likes · 17.1K views
Tibo @thsottiaux ·
Send us feature requests for codex in the form of an images 2.0-generated image. It makes it easier for codex to implement if we decide to go for it. Saw some good ones today already that codex is cooking on.
577 replies · 45 reposts · 2.2K likes · 146K views
Xbotter @Xbotter ·
@spidernvdev Indeed, products only need imagination, but engineers have a ton of other stuff to handle.
1 reply · 0 reposts · 1 like · 56 views
Pruthviraj P @spidernvdev ·
most ai advice online is about prompts. but if you want to build real ai products, you also need to understand:
- latency
- memory
- cost
- evaluation
- deployment
prompting helps. systems knowledge compounds.
1 reply · 0 reposts · 3 likes · 175 views
Xbotter reposted
Sherwin Wu @sherwinwu ·
One of my favorite of our recent blog posts: @cerebras made GPT‑5.3‑Codex‑Spark so fast that we had to rethink how Codex uses the Responses API – leading us to build WebSocket support for ultra fast latency. Cerebras speed is just 🤯
OpenAI Developers@OpenAIDevs

⚙️ We made agent loops faster with WebSockets in the Responses API. As Codex got faster, the bottleneck moved from inference to inefficient API calls. WebSockets keep response state warm across tool calls, helping workflows run up to 40% faster end to end. openai.com/index/speeding…

10 replies · 22 reposts · 319 likes · 26.5K views
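The intuition behind the WebSocket speedup above fits in a back-of-envelope model: a per-request HTTP loop pays connection setup on every tool call, while a warm socket pays it once. All the numbers below are invented for illustration; the real figures are in the linked post.

```python
# Toy cost model: end-to-end time of an agent loop with N tool calls,
# comparing per-call connection setup (HTTP) vs one warm connection (WS).
# setup_ms and call_ms are made-up numbers, not measured values.

def loop_time_ms(tool_calls: int, call_ms: float,
                 setup_ms: float, persistent: bool) -> float:
    setups = 1 if persistent else tool_calls
    return setups * setup_ms + tool_calls * call_ms

http_ms = loop_time_ms(tool_calls=20, call_ms=50, setup_ms=80, persistent=False)
ws_ms = loop_time_ms(tool_calls=20, call_ms=50, setup_ms=80, persistent=True)
saving = 1 - ws_ms / http_ms  # fraction of end-to-end time saved
```

With these invented numbers the warm connection roughly halves the loop; the actual gain depends on the handshake-to-inference ratio, which is why the speedup grew as Codex inference got faster.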
Xbotter @Xbotter ·
@thsottiaux Anthropic: hide mythos OpenAI: catch goblin
0 replies · 0 reposts · 0 likes · 57 views
Xbotter @Xbotter ·
It was once thought that the GLM-5 model had inherent flaws, which prevented it from performing well on long-context tasks. Now it seems that infrastructure on the inference side is also a massive challenge. Thank you to Zhipu for their hard work and for sharing.
Z.ai@Zai_org

Scaling laws push model capability forward. But whether that capability becomes reliable in production depends on how we handle Scaling Pain. z.ai/blog/scaling-p… In our latest blog, we share how we debugged GLM-5 serving at scale: reproducing rare garbled outputs, repetition, and rare-character generation; tracing and eliminating KV Cache race conditions; fixing HiCache synchronization issues; and introducing LayerSplit for up to 132% throughput improvement. We hope these lessons help the community avoid similar pitfalls and build more robust inference infrastructure.

0 replies · 0 reposts · 0 likes · 32 views
Xbotter @Xbotter ·
Coding in codex. Writing in codex. Imaging in codex. Linear in codex. All things in codex. Wait, I need a meeting in codex.
0 replies · 0 reposts · 0 likes · 16 views
Xbotter @Xbotter ·
3/3: Future Predictions
Implications ahead:
• Large prompt templates will lose value (model absorbs them)
• Agent systems shift: process control weakens; resource orchestration, permissions, tool ecosystem, and multi-agent collab become key
• “Wrapping an agent loop” moat gets thinner
• Prompt engineering evolves: less complex structures, more clear boundaries + success criteria
Core transition: from teaching step-by-step → defining what the right outcome looks like.
0 replies · 0 reposts · 0 likes · 17 views
Xbotter @Xbotter ·
2/3: Underlying Trends
Behind the changes:
• Model’s planning, trade-off, and stopping ability is now strong — no need to micromanage
• Prompt engineering shifts from “writing process” to “defining specs + acceptance criteria”
• Cost focus moves from model price to “how long it thinks + how many times it retrieves”
• RAG emphasis changes from recall to orchestration (when & how often to search)
OpenAI’s message: stop over-teaching the model how to do things.
1 reply · 0 reposts · 0 likes · 17 views
Xbotter @Xbotter ·
1/3: GPT-5.5 Prompt Changes
GPT-5.5 prompt guidance marks a clear shift from 5.4:
• Keep prompts short — stop breaking tasks into fine-grained steps
• Move from process control to boundary control: define goals, constraints, success criteria, and must-use sources
• Focus on reasoning cost & retrieval budget — avoid over-thinking
• Add a short preamble for complex tasks
Prompts are no longer “how-to scripts”, but “what must be right + what counts as done”.
Xbotter@Xbotter

x.com/i/article/2049…

1 reply · 0 reposts · 0 likes · 50 views
Xbotter @Xbotter ·
@LLMJunky I set up an automated daily task to scan past chats and spot skills to improve. It saves me a lot of time.
2 replies · 0 reposts · 3 likes · 151 views
am.will @LLMJunky ·
small tip: after you run a skill, it's not a terrible idea to ask the agent if there's anything in the skill itself that can be updated to make it more efficient. i'm always updating and refining mine to not only be faster, but use fewer tokens. primarily applies to complexity.
28 replies · 4 reposts · 133 likes · 6.7K views
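The "automated daily task to scan past chats and spot skills to improve" mentioned in the reply above could start as simply as counting skill invocations and flagging the heavily used ones for review. The log format and the threshold here are assumptions for illustration, not the author's actual setup.

```python
# Sketch of a daily skill audit: tally which skills were invoked in
# recent chat logs and flag frequent ones as refinement candidates.
# One skill name per invocation record is an assumed log format.
from collections import Counter

def skills_to_review(invocations: list, min_uses: int = 3) -> list:
    """Return skill names used at least min_uses times, sorted."""
    counts = Counter(invocations)
    return sorted(s for s, n in counts.items() if n >= min_uses)

log = ["summarize", "summarize", "deploy", "summarize", "deploy", "triage"]
candidates = skills_to_review(log)
```

In practice the flagged names would be fed back to the agent with a prompt like the tip above: ask whether the skill itself can be made faster or cheaper in tokens.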
Adam.GPT @TheRealAdamG ·
developers.openai.com/api/docs/guide…
**NEW: GPT-5.5 Prompting Guide**
"GPT-5.5 works best when prompts define the outcome and leave room for the model to choose an efficient solution path. Compared with earlier models, you can often use shorter, more outcome-oriented prompts: describe what good looks like, what constraints matter, what evidence is available, and what the final answer should contain.
Avoid carrying over every instruction from an older prompt stack. Legacy prompts often over-specify the process because earlier models needed more help staying on track. With GPT-5.5, that can add noise, narrow the model’s search space, or lead to overly mechanical answers.
For more detail on GPT-5.5 behavior changes, start with the Using GPT-5.5 guide. This guide focuses on prompt changes that follow from those behavior changes. The patterns here are starting points. Adapt them to your product surface, tools, evals, and user experience goals."
49 replies · 247 reposts · 2.4K likes · 269.2K views