Wudlig
@wudlig
1.7K posts
engineer
Joined January 2024
1.4K Following · 135 Followers
Wudlig @wudlig
@MilksandMatcha found you through your cerebras/exa collab on building ai research agents. i build many myself! would love this.
0 · 0 · 0 · 85
Wudlig @wudlig
@Yuchenj_UW Maybe I'm just an imbecile and a 0.1x dev but I truly don't understand why anyone could possibly need more than a ChatGPT Pro and Claude Max subscription unless their job is infinite, unchecked slop generation.
0 · 0 · 0 · 38
Yuchen Jin @Yuchenj_UW
If you had two software engineering offers:
> One pays you $500k/year salary, but covers zero LLM tokens.
> One pays you $400k/year salary, but gives you $500/day free LLM tokens.
Which one are you taking?
394 · 18 · 2.2K · 539.7K
Wudlig @wudlig
@XProfessah @wgussml Wtf are you building that you're okay with a model running for 11.4hrs without you verifying its work
2 · 0 · 0 · 39
william @wgussml
i actually prefer 5.3 codex to 5.4, it just doesn't stop until the job is done
24 · 2 · 245 · 21.7K
Wudlig @wudlig
@Ominousind They're routing your rich man traffic to us plus subs. The proletariat will rise
0 · 0 · 0 · 48
BijanBowen @Ominousind
Has anyone else seen ChatGPT doing this? I have a pro sub and am getting this limit after sending ~5 messages to 5.4 thinking.
[media]
31 · 0 · 91 · 14.7K
Wudlig @wudlig
@sandersted @dylhunn @diegocabezas01 i'm pretty sure the website already said 196k for ChatGPT Plus at some point. Why does it say 32k again now? (I say "again" but I might be misremembering.)
0 · 0 · 0 · 61
Diego | AI 🚀 - e/acc @diegocabezas01
Did you know GPT-5.4 Thinking has a 1M token context window in the API, but only 32K in ChatGPT Plus ($20/month) and 128K in Pro ($200/month)?
[media]
80 · 28 · 780 · 133.9K
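The tier split quoted above can be sanity-checked with a rough token-budget calculation. A minimal sketch, assuming the figures from the tweet (1M API, 32K Plus, 128K Pro — quoted claims, not verified limits) and the common rough heuristic of ~4 characters per token rather than a real tokenizer:

```python
# Context limits as quoted in the thread (assumptions, not documented values).
CONTEXT_LIMITS = {
    "API": 1_000_000,
    "ChatGPT Plus": 32_000,
    "ChatGPT Pro": 128_000,
}

def rough_token_count(text: str) -> int:
    """Estimate token count at ~4 characters per token (rough heuristic)."""
    return max(1, len(text) // 4)

def fits(text: str) -> dict[str, bool]:
    """Report which tiers could hold this prompt in context."""
    n = rough_token_count(text)
    return {tier: n <= limit for tier, limit in CONTEXT_LIMITS.items()}

prompt = "x" * 200_000  # roughly 50k tokens of input
print(fits(prompt))  # fits the API and Pro windows, but not Plus
```

A ~50k-token prompt illustrates why the discrepancy matters in practice: the same conversation that works in the API or Pro would be silently truncated or rejected on Plus under these numbers.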
Wudlig @wudlig
Very strange. It's known that they bumped it up to 196k, and I'm pretty sure the website was also updated to say 196k long ago. But now it says 128k again. Either their employees are confused or they're trying to confuse us. Reminds me of when Logan Kilpatrick (love him, but it's just an interesting story) replied to me, "correcting" me that Plus only gets 8k context for GPT-4T, even though it had long since been upgraded to 32k. He deleted his comment after I corrected him.
0 · 0 · 0 · 54
Angel 🌼 @Angaisb_
@diegocabezas01 They're all using 192k (at least they should, if they haven't changed it since GPT-5)
[media]
10 · 3 · 94 · 11.4K
Wudlig @wudlig
I don't know if it's just me, but so far gpt 5.4 has solved 0 coding problems for me and I've ended up reverting and getting Opus 4.6 to do it every time
1 · 0 · 1 · 37
Wudlig @wudlig
@Andr3jH i declare mujtaba khamenei supreme leader of your girlfriend
0 · 1 · 45 · 7.4K
Wudlig @wudlig
@sec0ndstate @SIGKITTEN @thsottiaux I'm not sure the exact amount of tokens I used, but I used it a decent bit, often with xhigh, for a session and barely used up 4% of my weekly limit. I wasn't using /fast though.
0 · 0 · 0 · 12
secondstate @sec0ndstate
@SIGKITTEN @thsottiaux how, lmao? I ran 160k tokens on xhigh 5.4 fast and it blew 12% of my weekly, then took me less than an hour to get down to 20% remaining, even after turning off /fast. it's cooked.
1 · 0 · 1 · 54
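The figures in this reply imply a weekly budget you can back out with simple arithmetic. A minimal sketch, assuming the tweet's numbers (160k tokens consuming 12% of the weekly limit) are accurate — these are a user's observations, not documented quota values:

```python
# Back out the implied weekly token budget from "160k tokens used 12%".
tokens_used = 160_000
fraction_used = 0.12

implied_weekly_budget = tokens_used / fraction_used
print(round(implied_weekly_budget))  # ~1.33M tokens/week implied

# Tokens consumed per percentage point of the weekly limit:
tokens_per_percent = implied_weekly_budget / 100
print(round(tokens_per_percent))  # ~13.3k tokens per 1%
```

Under those assumptions, a single long /fast xhigh session of a few hundred thousand tokens would plausibly burn double-digit percentages of the week, which matches the complaints in this thread.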
Tibo @thsottiaux
We are investigating reports of higher-than-expected usage drain for Codex when WebSockets are enabled; we will provide updates as we go.
149 · 16 · 917 · 118.1K
Wudlig @wudlig
@kitlangton Aren't you supposed to be fighting the army of the dead or sth rn
0 · 0 · 0 · 89
Kit Langton @kitlangton
I've been using 5.4 for a few weeks now and here are some loose thoughts: I had 5.4 early and you didn't ha ha ha ha h ah ah ah ah haaaaaaa ha h aha ha haaaaaaa
66 · 46 · 2.4K · 154.9K
Wudlig @wudlig
@FriesIlover49 yees so true they should reset our limits once i run out aha
0 · 0 · 1 · 17
FriesLover @FriesIlover49
Codex usage must be bugged, there's no way prompts that before barely used my limits are now taking fricking 15% of the weekly usage — which would be 30% if we didn't have 2x rate limits. that's worse than claude pro using sonnet. What's going on lol
2 · 0 · 4 · 135
Wudlig @wudlig
@_aidan_clark_ i think you meant 4o-mini because o4 was never released
0 · 0 · 0 · 266
Aidan Clark @_aidan_clark_
Wait that doesn't exist anymore, I mean o4.
3 · 0 · 28 · 4.7K
Aidan Clark @_aidan_clark_
What are people smoking, I've had GPT-4.5 for months!
11 · 0 · 109 · 16.9K
SIGKITTEN @SIGKITTEN
im just gonna smash /fast gpt-5.4 xhigh and hope they reset limits because of some edge case issue in a couple of days
48 · 17 · 916 · 36.2K
Wudlig @wudlig
@OpenAI okay so what's gonna happen to 5.3 instant? @aidan_mclau did you work on this the same way?
0 · 0 · 0 · 32
OpenAI @OpenAI
GPT-5.4 Thinking and GPT-5.4 Pro are rolling out now in ChatGPT. GPT-5.4 is also now available in the API and Codex. GPT-5.4 brings our advances in reasoning, coding, and agentic workflows into one frontier model.
[media]
2.1K · 3.3K · 23.7K · 7M
Wudlig @wudlig
@FriesIlover49 I'm a big arc-agi hater anyway so I'm not indexing on it much
0 · 0 · 1 · 5
FriesLover @FriesIlover49
@wudlig tbf thats jhon benchmaxxing himself, thats difficult to beat
1 · 0 · 1 · 23