Jeff Harris

1.4K posts

Jeff Harris

@jeffintime

the malleability of minds. Codex @openai

Oakland · Joined January 2008
1.1K Following · 3.9K Followers
Jeff Harris reposted
Boaz Barak @boazbaraktcs
I think reserving models for internal deployment is risky. I encourage Anthropic to release Mythos, even if it's a version that over-refuses on cyber tasks or routes risky responses to a weaker model, as we did with Codex.
36 replies · 27 reposts · 590 likes · 66K views
Jeff Harris reposted
Tibo @thsottiaux
Three million people are now using Codex weekly, up from two million a little under a month ago. Incredible to see the growth. Thank you to all of you and to the ecosystem we're part of. To celebrate, we're resetting rate limits so you can keep building, and we'll reset them again at every additional 1M users until we reach 10M, so we can keep celebrating along the way. Enjoy and thank you!
401 replies · 297 reposts · 4.5K likes · 496.4K views
Jeff Harris @jeffintime
This is one of the biggest unsolved product problems in AI. Users know their urgency. Platforms know when capacity is scarce and when it's abundant. Building products that can turn those two signals into coordinated behavior is going to matter a ton.
Tibo @thsottiaux

With Codex there is quite a gulf in load between peak and off-peak times, and we would like to achieve a smoother traffic pattern, as that would be a more optimal use of our compute. We have ideas, but we're curious what you all think we should do. Would offering more usage during off-peak hours and a surge multiplier during peak times make sense?

0 replies · 0 reposts · 2 likes · 319 views
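Not an official mechanism, just a rough sketch of what turning those two signals (user urgency, platform capacity) into coordinated behavior could look like: a load-based multiplier that discounts off-peak usage and applies a surge factor at peak. All names and thresholds here are hypothetical, made up for illustration.

```python
# Hypothetical sketch: map current capacity utilization + user urgency to a
# usage multiplier. Nothing here reflects how Codex actually prices or
# schedules work; thresholds and names are illustrative only.

def usage_multiplier(utilization: float, urgent: bool) -> float:
    """Return a rate-limit/pricing multiplier for one request.

    utilization: fraction of fleet capacity currently in use (0.0 - 1.0)
    urgent: the user needs the result interactively right now
    """
    if utilization < 0.5:
        base = 0.5          # off-peak: usage is discounted to pull load forward
    elif utilization < 0.85:
        base = 1.0          # normal rate
    else:
        base = 2.0          # peak: surge multiplier to shed deferrable load
    # Urgent interactive work always pays the going rate; background jobs get
    # a further nudge toward cheaper hours.
    return base if urgent else base * 0.8


if __name__ == "__main__":
    for util in (0.3, 0.7, 0.95):
        print(util, usage_multiplier(util, urgent=True), usage_multiplier(util, urgent=False))
```

Even something this crude would keep urgent interactive work snappy while letting background jobs drift toward off-peak capacity.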
dominik kundel @dkundel
Sometimes we ship features so fast we forget to share them 😅 Codex in the app can now read the integrated terminal! Thanks @ajambrosino!
Leon @LeonKuzmin_

@dkundel @OpenAIDevs @raycast I’m constantly copying command output from the app command window. Shouldn’t codex be aware of the command output?

41 replies · 11 reposts · 504 likes · 60.6K views
Jeff Harris reposted
Rohan Varma @rohanvarma
If you want AI Code Review but don't want to pay $25 per review (not a typo), check out Codex Review! It leverages frontier Codex models, finds complex issues, and is 100% usage-based. Most runs should cost ~$1 or less. developers.openai.com/codex/integrat…
83 replies · 69 reposts · 1.4K likes · 186.3K views
Piquo @piquopiquo
@jeffintime don't even think you would need to push the tokens/sec even further at this point. Lightning fast compared to the rest of the market. Do you have suggestions for how to handle huge codebases with the current context windows? MCPs perhaps? Thank you!
1 reply · 0 reposts · 1 like · 8 views
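One common workaround for the question above (whether done via an MCP tool or a plain script) is to retrieve only the files relevant to the task and stop once a context budget is reached. A minimal sketch, assuming a ~128K-token window and a crude 4-characters-per-token estimate; the scoring and budget are illustrative, not how Codex itself selects context.

```python
# Illustrative only: greedily pick the most relevant files from a large repo
# until an assumed context budget is filled. The token estimate and keyword
# scoring are deliberately crude; a real tool (MCP server, embeddings, etc.)
# would do far better.
from pathlib import Path

CONTEXT_BUDGET_TOKENS = 100_000          # leave headroom inside a 128K window
CHARS_PER_TOKEN = 4                      # rough heuristic

def estimate_tokens(text: str) -> int:
    return len(text) // CHARS_PER_TOKEN

def score(text: str, keywords: list[str]) -> int:
    # Trivial relevance score: keyword hit count.
    return sum(text.lower().count(k.lower()) for k in keywords)

def select_context(repo_root: str, keywords: list[str]) -> list[Path]:
    scored = []
    for path in Path(repo_root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        scored.append((score(text, keywords), estimate_tokens(text), path))
    scored.sort(reverse=True)            # most relevant files first

    chosen, used = [], 0
    for s, tokens, path in scored:
        if s == 0 or used + tokens > CONTEXT_BUDGET_TOKENS:
            continue
        chosen.append(path)
        used += tokens
    return chosen
```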
Jeff Harris @jeffintime
Spark's the fastest model we've ever made. Now rolling out to the heaviest Codex users on Plus.
30 replies · 9 reposts · 448 likes · 31.8K views
Maniac Selinsky @ManiacSelinsky
@jeffintime what is going on with your cybersecurity checks? Massive amounts of false positives are being reported.
1 reply · 0 reposts · 1 like · 420 views
Jeff Harris @jeffintime
@MoonGotArt yep 128K. we'll keep pushing on context in future fast models
1 reply · 0 reposts · 2 likes · 79 views
Jeff Harris @jeffintime
@colinsolvely for this round we look at established tenure on Codex (at least 1 month) and then usage over the last week. we'll keep adjusting the qualifying threshold as capacity permits
1 reply · 0 reposts · 2 likes · 144 views
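Restating the criteria from the reply above as code, purely to make them concrete: a user qualifies if their Codex tenure is at least a month and their usage over the last week clears a threshold that gets tuned as capacity allows. The dataclass, field names, and the threshold value are hypothetical.

```python
# Hypothetical restatement of the qualification rule described above; the
# actual threshold is adjusted over time as capacity permits.
from dataclasses import dataclass
from datetime import date, timedelta

WEEKLY_USAGE_THRESHOLD = 500     # placeholder value, not a real number

@dataclass
class CodexUser:
    first_seen: date             # when the account started using Codex
    requests_last_7_days: int    # usage over the trailing week

def qualifies_for_spark_rollout(user: CodexUser, today: date) -> bool:
    tenured = today - user.first_seen >= timedelta(days=30)   # at least ~1 month on Codex
    heavy = user.requests_last_7_days >= WEEKLY_USAGE_THRESHOLD
    return tenured and heavy
```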
Colin @colinsolvely
@jeffintime What’s the definition of a heavy user?
1 reply · 0 reposts · 0 likes · 700 views
Jeff Harris @jeffintime
@shailesz Huh, shouldn't be the case. DM me the email address for your Pro account?
0 replies · 0 reposts · 1 like · 128 views
Jeff Harris @jeffintime
@piquopiquo helpful to know. we'll keep pushing on context in future fast models
1 reply · 0 reposts · 0 likes · 76 views
Piquo @piquopiquo
@jeffintime really love it, the context window atm is sadly my biggest bottleneck with it!
1 reply · 0 reposts · 0 likes · 416 views
Leo @leodev
@jeffintime Can you increase the limits for it for Pro users? It runs out so quickly... especially in the 5-hour window
1 reply · 0 reposts · 1 like · 301 views
Jeff Harris @jeffintime
@tina_jjjj it's same, so unfortunately no: 5.3-spark is text only but stay tuned for future fast models 👁️
0 replies · 0 reposts · 0 likes · 123 views
JT @tina_jjjj
@jeffintime can images be uploaded on this? or is this different from codex-spark?
1 reply · 0 reposts · 0 likes · 371 views
Jeff Harris reposted
Peter Bakkum @pbbakkum
gpt-realtime-1.5 is the best native audio model on the Scale AudioMultiChallenge benchmark -- this is a significant jump in capability by this measure. There are models that outperform it but they are reasoning models without native audio output.
13 replies · 15 reposts · 177 likes · 25.2K views
Jeff Harris reposted
adi @adonis_singh
uhhh WTF?! gpt-5.3-codex gets 86% on IBench, beating out all other models massively. I was NOT expecting this
93 replies · 69 reposts · 1K likes · 232.5K views
Jeff Harris @jeffintime
it’s early innings for optimizing LLM latency in long-running harnesses
Cline @cline

We tested @OpenAI's new WebSocket connection mode for the Responses API in Cline and the early numbers are wild. Instead of resending the full context every turn, WebSocket mode keeps a persistent connection and sends only incremental inputs. With 5.2 Codex, results vs the standard API:
→ ~15% faster on simple tasks
→ ~39% faster on complex multi-file workflows
→ Best cases hitting 50% faster
The WebSocket handshake adds slight TTFT overhead on short tasks, but it gets amortized fast. On heavier workloads with dozens of tool calls, the speed gains are massive. Still expanding our test sample, but this is a very promising step forward for every Cline user. Faster AI coding is coming.

3 replies · 0 reposts · 7 likes · 1.4K views
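To make the quoted pattern intuitive: a back-of-the-envelope comparison of how much input has to be shipped per turn when every request resends the whole conversation versus keeping a persistent connection and sending only the new turn. This is a toy model of the traffic shape, not the Responses API's actual WebSocket protocol; the payload sizes, handshake cost, and turn counts are made up.

```python
# Toy model of the traffic pattern described above: stateless requests resend
# the full context each turn, while a persistent (WebSocket-style) session
# sends only the incremental input. Numbers are illustrative, not benchmarks.

def stateless_bytes(turn_sizes: list[int]) -> int:
    # Each turn re-uploads everything so far (context grows every turn).
    total, context = 0, 0
    for size in turn_sizes:
        context += size
        total += context
    return total

def persistent_bytes(turn_sizes: list[int], handshake: int = 2_000) -> int:
    # One handshake up front, then only the new input per turn.
    return handshake + sum(turn_sizes)

if __name__ == "__main__":
    short_task = [4_000, 1_000]                # a couple of turns
    long_task = [4_000] + [1_500] * 40         # dozens of tool calls
    for name, turns in (("short", short_task), ("long", long_task)):
        s, p = stateless_bytes(turns), persistent_bytes(turns)
        print(f"{name}: stateless={s:,}B persistent={p:,}B saved={1 - p / s:.0%}")
```

On the short task the handshake eats into the savings, but on the long multi-turn task the incremental mode avoids re-uploading an ever-growing context, which is exactly the amortization the quoted tweet describes.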