
OpenAI just dropped GPT 5.3 Instant in ChatGPT, and the main upgrades are a less cringey tone, fewer preachy disclaimers, and better web search accuracy. It is still an instant, non-thinking model, so it is aimed more at lightweight everyday use than at deep coding or long projects.
5.3 Instant is noticeably less emotionally over-validating than 5.2, and it gets to the point more like a normal person would. It is also less likely to open with safety sermons for benign requests, like trajectory math for archery.
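For context, the kind of benign trajectory question at issue is textbook projectile physics. A minimal sketch (my own illustration, not from the video), assuming a flat range, launch from ground level, and no air drag:

```python
import math

def arrow_range(v0: float, angle_deg: float, g: float = 9.81) -> float:
    """Horizontal range of a projectile launched from ground level,
    ignoring air drag: R = v0^2 * sin(2*theta) / g."""
    theta = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

# A 60 m/s arrow launched at 10 degrees carries roughly 125 m.
print(round(arrow_range(60, 10), 1))
```

Real arrows lose significant range to drag, so this is an upper bound, but it is exactly the sort of harmless homework math that used to trigger disclaimers.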
Refusals are still very much a thing, though. When asked about turning a turbocharger into a jet engine for a go-kart, it draws the line at step-by-step build instructions but still offers plenty of practical safety context and alternatives.
Compared with Claude Sonnet 4.6 with extended thinking off, Sonnet comes across as more thorough and more grounded on engineering details: it lays out a more realistic build path with specifics (parts, materials, fuel system), while the instant models miss some of that depth.
The bigger story is GPT 5.4, which appears to be quietly testing in some ChatGPT Pro accounts, based on leaked references and user reports. The claimed outputs look like a big jump: high-quality voxel scenes, strong SVG generation, and much better 3D perspective translated into 2D code.
In one Pro test, GPT 5.4 reportedly thought for 54 minutes to build a working flight combat sim with telemetry, NPC planes, and multiple airframes, and it worked on the first try, which points to a real jump in coding capability. Another tester says 5.4 Pro runs take longer overall (for example, a 77-minute macOS simulation attempt), but the tradeoff is more robust and detailed results.
To check whether you are being routed to the new 5.4 Pro model, the video claims you should look for a specific thumbs-up and thumbs-down icon after running prompts in Pro. The creator has Pro but does not appear to have access yet.
Finally, Alibaba’s Qwen 3.5 gets a shout-out as a strong open-source local model, efficient enough to beat models up to 4x its size, with toggleable reasoning and even a tiny 2B 6-bit version running locally on a phone (MLX-optimized for Apple silicon). The main takeaway: the gap between top closed models and what you can run locally is shrinking fast.
GPT 5.4 Is Leaking in Pro Accounts — And It's a BEAST!
MattVidPro | Len: 15:26
youtube.com/watch?v=FVmhsK…
#AI #YouTube
