Xbotter

565 posts

@Xbotter

.NET Engineer × AI-Native Builder · Building Agents & Intelligent Workflows · Helping enterprises land AI successfully. Code w/ 🐱

Shanghai, China · Joined March 2012
438 Following · 120 Followers
Pinned Tweet
Xbotter
Xbotter@Xbotter·
A brief formal introduction: With over 10 years in .NET development, I’ve fully shifted my focus to AI-native applications and enterprise AI workflow implementation. I specialize in turning LLMs and Agents into practical solutions with real business impact. Lately, I’ve been exploring tools like codex in depth. Outside of work, I’m a dedicated cat dad 🐱 Happy to connect on .NET + AI + codex topics.
0 replies · 0 reposts · 1 like · 92 views
Xbotter
Xbotter@Xbotter·
@thsottiaux Tibo, just open custom UI components. It would give codex endless possibilities.
2 replies · 0 reposts · 25 likes · 2.8K views
Tibo
Tibo@thsottiaux·
@Xbotter Fancy, I like it
4 replies · 0 reposts · 79 likes · 14.5K views
Tibo
Tibo@thsottiaux·
Send us feature requests for codex in the form of an images 2.0-generated image. It makes it easier for codex to implement if we decide to go for it. Saw some good ones today already that codex is cooking on.
505 replies · 39 reposts · 2K likes · 115.7K views
Xbotter
Xbotter@Xbotter·
@spidernvdev Indeed, products only need imagination, but engineers have a ton of other stuff to handle.
1 reply · 0 reposts · 1 like · 55 views
Pruthviraj P
Pruthviraj P@spidernvdev·
most ai advice online is about prompts. but if you want to build real ai products, you also need to understand: latency, memory, cost, evaluation, deployment. prompting helps. systems knowledge compounds.
1 reply · 0 reposts · 3 likes · 148 views
Xbotter retweeted
Sherwin Wu
Sherwin Wu@sherwinwu·
One of my favorites among our recent blog posts: @cerebras made GPT‑5.3‑Codex‑Spark so fast that we had to rethink how Codex uses the Responses API, leading us to build WebSocket support for ultra-low latency. Cerebras speed is just 🤯
OpenAI Developers@OpenAIDevs

⚙️ We made agent loops faster with WebSockets in the Responses API. As Codex got faster, the bottleneck moved from inference to inefficient API calls. WebSockets keep response state warm across tool calls, helping workflows run up to 40% faster end to end. openai.com/index/speeding…

7 replies · 12 reposts · 154 likes · 12.1K views
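The quoted post above gives the intuition for why a warm connection helps an agent loop. A minimal sketch of that intuition, not the actual Responses API: if each stateless HTTP call must re-send the full conversation, the payload grows quadratically with the number of tool calls, while a connection that keeps state server-side only sends each new delta. All function names and numbers here are hypothetical.

```python
# Illustrative only: compares total bytes/tokens sent over the wire in a
# stateless loop (full history resent per call) vs a stateful connection
# (server keeps response state warm; only deltas are sent).

def stateless_loop(turns, tokens_per_turn=500):
    """Each call re-sends the whole conversation accumulated so far."""
    sent = 0
    history = 0
    for _ in range(turns):
        history += tokens_per_turn  # a new tool result / message is appended
        sent += history             # the entire history crosses the wire
    return sent

def stateful_loop(turns, tokens_per_turn=500):
    """A warm connection keeps state server-side; only the delta is sent."""
    sent = 0
    for _ in range(turns):
        sent += tokens_per_turn     # just the new message
    return sent

if __name__ == "__main__":
    turns = 20
    print(stateless_loop(turns))  # grows O(n^2) with turns
    print(stateful_loop(turns))   # grows O(n) with turns
```

For a 20-turn loop the stateless variant transfers over ten times as much data in this toy model, which is the kind of per-call overhead the WebSocket work removes.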
Xbotter
Xbotter@Xbotter·
It was once thought that the GLM-5 model had inherent flaws, which prevented it from performing well on long-context tasks. Now it seems that infrastructure on the inference side is also a massive challenge. Thank you to Zhipu for their hard work and for sharing.
Z.ai@Zai_org

Scaling laws push model capability forward. But whether that capability becomes reliable in production depends on how we handle Scaling Pain. z.ai/blog/scaling-p… In our latest blog, we share how we debugged GLM-5 serving at scale: reproducing rare garbled outputs, repetition, and rare-character generation; tracing and eliminating KV Cache race conditions; fixing HiCache synchronization issues; and introducing LayerSplit for up to 132% throughput improvement. We hope these lessons help the community avoid similar pitfalls and build more robust inference infrastructure.

0 replies · 0 reposts · 0 likes · 27 views
Xbotter
Xbotter@Xbotter·
Coding in codex. Writing in codex. Imaging in codex. Linear in codex. All things in codex. Wait, I need a meeting in codex.
0 replies · 0 reposts · 0 likes · 14 views
Xbotter
Xbotter@Xbotter·
3/3: Future Predictions
Implications ahead:
• Large prompt templates will lose value (the model absorbs them)
• Agent systems shift: process control weakens; resource orchestration, permissions, tool ecosystems, and multi-agent collaboration become key
• The “wrapping an agent loop” moat gets thinner
• Prompt engineering evolves: fewer complex structures, clearer boundaries + success criteria
Core transition: from teaching step-by-step → defining what the right outcome looks like.
0 replies · 0 reposts · 0 likes · 16 views
Xbotter
Xbotter@Xbotter·
2/3: Underlying Trends
Behind the changes:
• The model’s planning, trade-off, and stopping abilities are now strong; no need to micromanage
• Prompt engineering shifts from “writing process” to “defining specs + acceptance criteria”
• Cost focus moves from model price to “how long it thinks + how many times it retrieves”
• RAG emphasis changes from recall to orchestration (when and how often to search)
OpenAI’s message: stop over-teaching the model how to do things.
1 reply · 0 reposts · 0 likes · 16 views
Xbotter
Xbotter@Xbotter·
1/3: GPT-5.5 Prompt Changes
GPT-5.5 prompt guidance marks a clear shift from 5.4:
• Keep prompts short; stop breaking tasks into fine-grained steps
• Move from process control to boundary control: define goals, constraints, success criteria, and must-use sources
• Focus on reasoning cost & retrieval budget; avoid over-thinking
• Add a short preamble for complex tasks
Prompts are no longer “how-to scripts” but “what must be right + what counts as done”.
Xbotter@Xbotter

x.com/i/article/2049…

1 reply · 0 reposts · 0 likes · 48 views
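The thread above describes prompts moving from step-by-step scripts to "spec + acceptance criteria". A tiny illustrative sketch of what that boundary-control shape could look like; the helper name, fields, and example task are all made up, not official guidance.

```python
# Hypothetical: compose an outcome-oriented prompt from a goal,
# constraints, and success criteria, instead of scripting each step.

def outcome_prompt(goal, constraints, success_criteria):
    """Boundary control: state what must be right and what counts as done."""
    lines = [f"Goal: {goal}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines.append("Done when:")
    lines += [f"- {s}" for s in success_criteria]
    return "\n".join(lines)

prompt = outcome_prompt(
    goal="Summarize the attached incident report for executives",
    constraints=["max 200 words", "no internal hostnames"],
    success_criteria=["root cause named", "customer impact quantified"],
)
print(prompt)
```

Note there is no "first do X, then do Y" in the prompt: the solution path is left to the model, and only the boundaries and the definition of done are fixed.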
Xbotter
Xbotter@Xbotter·
@LLMJunky I set up an automated daily task to scan past chats and spot skills to improve. It saves me a lot of time.
2 replies · 0 reposts · 3 likes · 146 views
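The automated daily scan mentioned above could take many forms. A minimal sketch of one plausible shape, assuming transcripts are available as structured event lists; the event fields, the "failure" heuristic, and the threshold are all assumptions for illustration.

```python
# Hypothetical daily job: scan past chat transcripts for skills that
# keep failing, so they can be flagged for refinement.
from collections import Counter

def skills_to_improve(transcripts, threshold=2):
    """Count failed skill invocations; return names failing >= threshold times."""
    failures = Counter()
    for chat in transcripts:
        for event in chat:
            if event.get("type") == "skill" and not event.get("ok", True):
                failures[event["name"]] += 1
    return [name for name, n in failures.items() if n >= threshold]

# Example run over two fake transcripts from one day:
day = [
    [{"type": "skill", "name": "pdf-extract", "ok": False},
     {"type": "skill", "name": "summarize", "ok": True}],
    [{"type": "skill", "name": "pdf-extract", "ok": False}],
]
print(skills_to_improve(day))  # ['pdf-extract']
```

Scheduling this once a day (cron, CI, or an agent task) turns "ask the agent to refine the skill" from a manual habit into a standing report.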
am.will
am.will@LLMJunky·
small tip: after you run a skill, it's not a terrible idea to ask the agent if there's anything in the skill itself that can be updated to make it more efficient. i'm always updating and refining mine to not only be faster, but use fewer tokens. primarily applies to complexity.
28 replies · 4 reposts · 127 likes · 6.3K views
Adam.GPT
Adam.GPT@TheRealAdamG·
developers.openai.com/api/docs/guide… **NEW: GPT-5.5 Prompting Guide**

"GPT-5.5 works best when prompts define the outcome and leave room for the model to choose an efficient solution path. Compared with earlier models, you can often use shorter, more outcome-oriented prompts: describe what good looks like, what constraints matter, what evidence is available, and what the final answer should contain.

Avoid carrying over every instruction from an older prompt stack. Legacy prompts often over-specify the process because earlier models needed more help staying on track. With GPT-5.5, that can add noise, narrow the model’s search space, or lead to overly mechanical answers.

For more detail on GPT-5.5 behavior changes, start with the Using GPT-5.5 guide. This guide focuses on prompt changes that follow from those behavior changes. The patterns here are starting points. Adapt them to your product surface, tools, evals, and user experience goals."
45 replies · 239 reposts · 2.3K likes · 242.2K views
Xbotter
Xbotter@Xbotter·
@thedankoe Facing failure head-on is incredibly difficult, but if it's a necessary part of the journey, then it's not really failure—it's experience.
0 replies · 0 reposts · 0 likes · 10 views
DAN KOE
DAN KOE@thedankoe·
Most people quit because they forget that you have to be bad at something before you can be good at it. It's so obvious. You suck. Of course you're not going to win in 2 weeks. But if you can learn to enjoy extended periods of failure, you will make it very, very far in life.
506 replies · 1.6K reposts · 10.6K likes · 423.8K views
Xbotter retweeted
Deli Chen
Deli Chen@victor207755822·
Come try out the incredible work from our genius multimodal colleagues! 🐳👀 The little whale can now see (in grayscale testing)~ ✨
Deli Chen tweet media
Xiaokang Chen@PKUCXK

Now, we see you. 👀

58 replies · 57 reposts · 965 likes · 65K views
Tibo
Tibo@thsottiaux·
With some small tweaks, Codex can work for days on hard tasks. We will release some changes to make this easier to use for everyone. What’s the hardest task you’ve seen GPT-5.5 succeed at?
524 replies · 80 reposts · 3.9K likes · 206.3K views
Xbotter
Xbotter@Xbotter·
My colleague asked why I always send "hi".
0 replies · 0 reposts · 0 likes · 16 views
Xbotter
Xbotter@Xbotter·
@tanujDE3180 Building a to-do app is a compulsory course for every developer.
1 reply · 0 reposts · 1 like · 26 views
Tanuj
Tanuj@tanujDE3180·
My friend, who is a vibe coder, spent:
- $100 on Claude
- $68 on Codex
- $40 on Gemini
to build this awesome app. He is asking for feedback. What do you guys think?
Tanuj tweet media
24 replies · 1 repost · 30 likes · 801 views