6481
128 posts
@7845_f9
Joined February 2024
93 Following · 12 Followers

Andrew Curran @AndrewCurran_
OpenAI has filed a court statement alleging that Elon Musk contacted Greg Brockman two days before the trial to gauge interest in a settlement, and when rebuffed Elon said 'By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be'.
[attached image]
65 replies · 40 reposts · 523 likes · 74.4K views

Andrew Curran @AndrewCurran_
Elon's going to show up uninvited to this GPT-5.5 party like the witch in Sleeping Beauty and deliver a powerful curse.
35 replies · 34 reposts · 1.4K likes · 204.9K views

Miles Brundage @Miles_Brundage
If you are surprised by the GPT-5.5 being good at cyber thing, you have Big AI Lead Delusion. There are none (sidenote, I'm not 100% clear if this is GPT-5.5 or GPT-5.5 Cyber. Naming conventions are so chaotic + there is ~no info on the latter that it is hard to say)
8 replies · 1 repost · 78 likes · 126.3K views

6481 @7845_f9
@tszzl Aligned angel to goblin is big aura loss.
0 replies · 0 reposts · 0 likes · 25 views

roon @tszzl
There is nothing more reviled than the Goblin
142 replies · 45 reposts · 1.2K likes · 199.5K views

6481 @7845_f9
@sama You need to promote that the $20 plan gives you both ChatGPT and Codex limits.
0 replies · 0 reposts · 0 likes · 32 views

Sam Altman @sama
codex with the $20 plan is a really good deal
1.5K replies · 309 reposts · 12K likes · 934.8K views

Sam Altman @sama
we love seeing our users win. we want to give you the best tools, lots of compute, and watch you do the magic.
1.4K replies · 507 reposts · 11.8K likes · 651.9K views

6481 @7845_f9
@scaling01 Are these moves related to the recently joined Salesforce people?
0 replies · 0 reposts · 1 like · 139 views

Lisan al Gaib @scaling01
it's really the dumbest fucking thing I've seen from Anthropic. you know how much I love them, but this is borderline suicidal. they could've just said: "here's Haiku and Sonnet 5, and btw Pro subs no longer get access to Opus and only low thinking effort." but removing Claude Code entirely is such an idiotic move when everything you are known for is coding, especially in the same week we likely get Spud/GPT-5.5 and potentially DeepSeek-V4. they are begging you poor shits to unsubscribe and either pay up or get lost, so that they can allocate that juicy compute to higher-margin customers.

Quoting Lisan al Gaib @scaling01:
Anthropic removed Claude Code from the Pro plan. I'm obviously going to cancel my subscription if I lose access to Claude Code. Mythos was actually the top of the Anthropic hype cycle.

96 replies · 60 reposts · 2K likes · 427.3K views

OpenAI Developers @OpenAIDevs
Last week, we released a preview of memories in Codex. Today, we’re expanding the experiment with Chronicle, which improves memories using recent screen context. Now, Codex can help with what you’ve been working on without you restating context.
224 replies · 367 reposts · 4.5K likes · 1.2M views

NomoreID @Hangsiin
I’m glad a feature I’d been waiting on for a long time has finally launched, but it’s disappointing that I can’t use it because I’m on Windows. An OpenAI employee said in a tweet that the computer use feature will be coming to Windows soon, so I guess I’ll have to wait a bit longer. That said, does it actually work well? Or is it still just an interesting toy? Once again, computer use has been released, but I’m seeing the same pattern where no noticeable real-world user reviews or demos are showing up in my feed.

Quoting NomoreID @Hangsiin:
@SQMah It appears that computer use performance has been enhanced in GPT-5.3-Codex. Are there plans for native integration with Codex?

2 replies · 0 reposts · 15 likes · 1.5K views

6481 @7845_f9
@dieaud91 I'm not good at math, but on a forum I read there was a review saying the Pro model's thinking time dropped from 30 to 10 minutes while performance got better.
0 replies · 0 reposts · 1 like · 338 views

Diego Aud @dieaud91
10th time in a row that GPT-5.4 Pro Extended thinks for just 1 to 5 minutes instead of the typical 15-30 minutes. Same kind of prompt. Right before this response, I got an "evaluate this answer" popup, but it disappeared due to an error. I strongly suspect they're testing a new model or tweaking something under the hood. Has anyone else noticed this drop in thinking time?
[attached image]
13 replies · 0 reposts · 111 likes · 24.5K views

6481 @7845_f9
@bcherny @DurhamVSmith Opus 4.7 is a research preview, so as I understand it you have to request access and make a new AWS account.
0 replies · 0 reposts · 0 likes · 73 views

Boris Cherny @bcherny
Opus 4.7 uses more thinking tokens, so we've increased rate limits for all subscribers to make up for it. Enjoy!
1.2K replies · 936 reposts · 22.2K likes · 1.3M views

6481 @7845_f9
@Yuchenj_UW Model untouched, but I think they touched the thinking juice (when using claude.ai).
0 replies · 0 reposts · 0 likes · 25 views

Yuchen Jin @Yuchenj_UW
Seeing rumors that Claude Opus 4.6 got nerfed. Usually this boils down to 3 cases:
- Unintentional. For example, a regression caused by changes in the inference stack or Claude Code. This is what evals are for before rolling out.
- Intentional “optimizations” (quantization, reduced reasoning). If so, say it. If users pay for a model, they should get that model.
- User psychology. The more you use a model, the dumber it feels.
136 replies · 22 reposts · 681 likes · 134.7K views

6481 @7845_f9
@rezoundous
5x: matches old Pro plan limits during the 2x event
20x: ~1.8x old Pro plan base (~3.6x with event, if applicable)
Plus: ~60% of the old 2x event limit
0 replies · 0 reposts · 0 likes · 67 views

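The multiplier breakdown above can be checked with back-of-the-envelope arithmetic. This is only a sketch of the poster's estimates: it assumes the old Pro base limit is normalized to 1.0 and that the "2x event" doubled it; none of these figures are official.

```python
# Sketch of the claimed Codex plan limit multipliers, normalized so
# the old Pro plan base limit = 1.0 (assumption, not an official figure).
OLD_PRO_BASE = 1.0
EVENT_MULTIPLIER = 2.0
old_event_limit = OLD_PRO_BASE * EVENT_MULTIPLIER  # old Pro limit during the 2x event

# "5x: matches old pro plan limits during the 2x event"
plan_5x = old_event_limit

# "20x: ~1.8x old pro plan base (~3.6x with event if applicable)"
plan_20x_base = 1.8 * OLD_PRO_BASE
plan_20x_event = plan_20x_base * EVENT_MULTIPLIER

# "Plus: ~60% of the old 2x event limit"
plan_plus = 0.6 * old_event_limit

print(plan_5x, plan_20x_base, plan_20x_event, plan_plus)
```

On these assumptions, Plus lands at about 1.2x the old Pro base, i.e. above the old base limit but well below what the 2x event granted.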
Tyler @rezoundous
Is it just me, or have Codex usage limits been nerfed?
218 replies · 26 reposts · 812 likes · 54.5K views

6481 @7845_f9
@trq212 @apeatling @steipete 5.3-codex also got messy with false positives before and fixed it fast; this feels similar to that. Honestly, I don't think Anthropic did something bad on purpose.
0 replies · 0 reposts · 0 likes · 105 views

Thariq @trq212
@apeatling @steipete I know, it's a very difficult problem, but we need to do better. We do get a lot of abuse, and freeing that capacity up for our customers is very valuable.
10 replies · 0 reposts · 89 likes · 7.8K views

Peter Steinberger 🦞 @steipete
Yeah folks, it's gonna be harder in the future to ensure OpenClaw still works with Anthropic models.
[attached image]
541 replies · 248 reposts · 5.5K likes · 1.4M views

Tibo @thsottiaux
I realize yesterday’s Codex reset came at a bit of an unfortunate time, given the last one was almost perfectly a week ago. To really celebrate the 3M, I’ll reset again tomorrow. Thanks for the feedback!
642 replies · 298 reposts · 6.6K likes · 560.2K views

Tibo @thsottiaux
Does anyone have a breakdown of how much value you get from your various AI subscriptions across providers, compared to API prices?
184 replies · 15 reposts · 949 likes · 120.7K views