Oliver🏴‍☠️
@CodeWithOllie

1.6K posts

Indie founder 🇯🇲 | Fitness | Stoicism
Kingston, Jamaica · Joined November 2022
407 Following · 193 Followers

Pinned Tweet
Oliver🏴‍☠️@CodeWithOllie·
My hardship is the adventure I’ve been waiting for. Every struggle, a chapter. Every setback, a scene. This is where the story gets good.
Replies: 1 · Reposts: 0 · Likes: 12 · Views: 1.5K
BURKOV@burkov·
GPT-5.4 > Opus 4.6. And Google still doesn't have anything even remotely competitive.
Replies: 139 · Reposts: 25 · Likes: 869 · Views: 122.3K
Paul Solt@PaulSolt·
Have you tried Gemini for code review?
Replies: 8 · Reposts: 0 · Likes: 5 · Views: 2.1K
Oliver🏴‍☠️@CodeWithOllie·
@PaulSolt GPT... but man... I miss a good 4.6 level of design... for my business use cases... and writing...
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 97
Paul Solt@PaulSolt·
GPT-5.4 vs. Opus 4.6: which is writing most of your code?
Replies: 129 · Reposts: 0 · Likes: 104 · Views: 21.2K
Oliver🏴‍☠️@CodeWithOllie·
@LLMJunky It’s still in beta anyway; I don’t know why benchmarks use it without explicitly saying so.
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 49
am.will@LLMJunky·
While I haven't tried GPT's 1 million context window, I fully agree with this. There are really two things here:

1) OpenAI's compaction endpoint is absolutely incredible. Unless you're reading some very large files that simply cannot reasonably fit into the normal 256k-400k context window, I see no meaningful benefit. I have been letting Codex run to context-window limits repeatedly for months and I never notice any degradation in intelligence. No idea how they do it.

2) As Sero said, subagents. I basically orchestrate with swarms universally now. When you have subagents working through a well-developed spec sheet, you are rarely even compacting anyway. And even when you do, see point 1. Furthermore, I prefer to write my specs to a markdown file for my subagents to work through, add logs, etc., so the orchestration agent knows exactly what is going on at any given time regardless.

I think it's great that there's a 1M context window, and I hope that performance improves while using it, but I *basically* don't care at all about it right now.

I will say though, using Opus 4.6, the 1M context window is absolutely fantastic because, unlike with GPT models, I don't like to let Opus compact at all. I do find that it reduces performance, so having the extra headroom completely changes the UX of using Claude. Also, it works very well.

So all of this is to drive the point home that while large context windows are desirable, and I am excited about getting them, they are not the most important factor to consider.
0xSero@0xSero

Some things I learned this week:
1. GPT-5.4/Codex at more than 256k max tokens doesn't help and is too expensive (that's why I ran out of usage, btw). The models still don't do great past 200k context, and I can get basically infinite context using subagents anyway.
2. To be fair to OpenAI, I did spam the hell out of it over 3 devices working in 4-10 sessions 24/7, especially with Autoresearch. This is very generous, and I love the app a lot.

Replies: 4 · Reposts: 0 · Likes: 25 · Views: 3.2K
Vasile Brindusa Antonia@AntoniaBVasile·
@CodeWithOllie @OpenAIDevs I use ChatGPT for strategy (Google Ads agency), creativity with Gemini, some with Claude now, and some with Grok (as an agency we also serve an adult-store ecommerce, so any AI except Grok is shy).
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 70
OpenAI Developers@OpenAIDevs·
We’re introducing GPT-5.4 mini and nano, our most capable small models yet. GPT-5.4 mini is more than 2x faster than GPT-5 mini. Optimized for coding, computer use, multimodal understanding, and subagents. For lighter-weight tasks, GPT-5.4 nano is our smallest and cheapest version of GPT-5.4. openai.com/index/introduc…
OpenAI Developers tweet media
Replies: 315 · Reposts: 626 · Likes: 6.5K · Views: 752.2K
Robin Ebers | AI Coach for Founders
can anyone help me with this? has anyone built a skill that lets Codex invoke headless Claude Code for copywriting and design changes? otherwise I might build one to fill Codex gaps myself
Replies: 4 · Reposts: 1 · Likes: 6 · Views: 1.1K
Sayan@thesayannayak·
Google uses Python. Netflix uses Python. Instagram was built with Python. Spotify uses Python. NASA uses Python. Amazon uses Python. Reddit uses Python. What’s stopping you from learning Python? 🐍
Replies: 77 · Reposts: 11 · Likes: 162 · Views: 7.7K
Mouad@nadzi_mouad·
Codex model selector in 2026 be like 😂 they promised they’ll fix it… but you still have to scroll through:
GPT-5.4
GPT-5.4-Mini
GPT-5.3-Codex
GPT-5.2-Codex
GPT-5.2
GPT-5.1-Codex-Max
GPT-5.1-Codex-Mini
…and counting
THEN pick your reasoning level: Low / Medium / High / Extra High
The list is only getting longer and longer
Mouad tweet media
Replies: 2 · Reposts: 0 · Likes: 14 · Views: 566
Joseph Noel Walker@JosephNWalker·
Obsidian x Claude Code has been a game changer for me. I’m maybe 20-30% more productive. Is anyone else finding this?
Replies: 72 · Reposts: 11 · Likes: 441 · Views: 71.9K
Oliver🏴‍☠️@CodeWithOllie·
@rohandotnagpal @DanielP1973235 @OpenAIDevs Someone gets it. And this transfers to other areas, like when I need to make presentations. ChatGPT just can’t do it. Even in how it writes, it still feels too “quanty” and not business-like. Maybe I can fix the business writing with a prompt?
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 44
rohan@rohandotnagpal·
@DanielP1973235 @CodeWithOllie @OpenAIDevs ChatGPT for Excel is quite solid. But yeah, OpenAI has some way to go in nailing down design. The wireframes the models generate are ass, and they’re even worse at following wireframes.
Replies: 1 · Reposts: 0 · Likes: 2 · Views: 79
am.will@LLMJunky·
damn openai really fell off. no usage resets in over six days. just when you think you know someone nothing stings like betrayal maybe tomorrow at 11
am.will tweet media
Replies: 88 · Reposts: 7 · Likes: 535 · Views: 40.4K
Oliver🏴‍☠️@CodeWithOllie·
@thsottiaux Is there a way for you to have simple tasks delegated by gpt4 standard to mini? Like exploring code bases and stuff. I know, silly me, I can just write my own agent for that and have my main agent always use it to explore.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 175
Tibo@thsottiaux·
Yesterday we launched subagents in Codex. Today we released GPT-5.4-Mini, which is SoTA in its category. Coincidence or genius move?
Replies: 153 · Reposts: 36 · Likes: 1.6K · Views: 79.5K