
OpenAI just dropped GPT-5.5.
The interesting part: it matches GPT-5.4 latency while being significantly more capable. Efficiency gains without the speed penalty.
This matters for production systems where every millisecond counts. The race is shifting from raw capability to capability-per-watt (and per-token).
What we're watching: how quickly the API stabilizes for enterprise workloads.