Post-scarcity society

85 posts


@xclearcast

acc / agi

Joined August 2023
29 Following · 3 Followers
Kasif
Kasif@md_kasif_uddin·
Be honest, which is the best open source AI Model?
360 replies · 70 reposts · 1.9K likes · 290.1K views
Post-scarcity society
Post-scarcity society@xclearcast·
@GeminiApp Do you really think you're in a position to criticize anyone right now? The Pro plan is limited to 25 uses every 4 hours, the official app version of Gemini is in terrible condition, and you're currently falling behind in the agent competition.
0 replies · 0 reposts · 0 likes · 144 views
Google Gemini
Google Gemini@GeminiApp·
A reminder as you head into the weekend…
71 replies · 46 reposts · 678 likes · 109.6K views
Post-scarcity society
Post-scarcity society@xclearcast·
@dkundel Since 5.4 mini models often lose accuracy at contexts beyond 128k, shouldn't Codex implement an auto-compression algorithm at that limit?
0 replies · 0 reposts · 0 likes · 18 views
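The auto-compression idea in the tweet above can be sketched in a few lines. This is a minimal, hypothetical illustration only — not Codex's actual mechanism; the function names, the crude token estimate, and the use of 128k as the trigger are all assumptions made for the example.

```python
# Hypothetical sketch of context auto-compaction: once a conversation's
# estimated token count crosses a limit, the oldest messages are replaced
# by a single summary stub so the most recent turns stay within budget.
# The 128k threshold and all names here are illustrative assumptions.

TOKEN_LIMIT = 128_000


def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly one token per four characters.
    return max(1, len(text) // 4)


def auto_compact(messages, limit=TOKEN_LIMIT):
    """Collapse the oldest messages into a stub until the total fits.

    `messages` is a list of dicts like {"role": ..., "content": ...}.
    Returns a new list; the dropped prefix becomes one stub message.
    """
    total = sum(estimate_tokens(m["content"]) for m in messages)
    if total <= limit:
        return list(messages)
    kept = []
    running = 0
    # Walk backwards, keeping the newest messages that fit within the
    # budget, reserving a little room for the summary stub itself.
    for m in reversed(messages):
        cost = estimate_tokens(m["content"])
        if running + cost > limit - 64:
            break
        kept.append(m)
        running += cost
    kept.reverse()
    dropped = len(messages) - len(kept)
    stub = {
        "role": "system",
        "content": f"[context compacted: {dropped} earlier messages summarized]",
    }
    return [stub] + kept
```

A real implementation would summarize the dropped prefix with a model call rather than a fixed stub, but the trigger-at-the-limit shape is the same.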
Post-scarcity society
Post-scarcity society@xclearcast·
@OpenAIDevs Since 5.4 mini models often lose accuracy at contexts beyond 128k, shouldn't Codex implement an auto-compression algorithm at that limit?
0 replies · 0 reposts · 0 likes · 25 views
OpenAI Developers
OpenAI Developers@OpenAIDevs·
We’re introducing GPT-5.4 mini and nano, our most capable small models yet. GPT-5.4 mini is more than 2x faster than GPT-5 mini. Optimized for coding, computer use, multimodal understanding, and subagents. For lighter-weight tasks, GPT-5.4 nano is our smallest and cheapest version of GPT-5.4. openai.com/index/introduc…
315 replies · 615 reposts · 6.4K likes · 773.3K views
Post-scarcity society
Post-scarcity society@xclearcast·
@OpenAI Since 5.4 mini models often lose accuracy at contexts beyond 128k, shouldn't Codex implement an auto-compression algorithm at that limit?
0 replies · 0 reposts · 0 likes · 19 views
OpenAI
OpenAI@OpenAI·
GPT-5.4 mini is available today in ChatGPT, Codex, and the API. Optimized for coding, computer use, multimodal understanding, and subagents. And it’s 2x faster than GPT-5 mini. openai.com/index/introduc…
657 replies · 722 reposts · 6.5K likes · 1.7M views
Post-scarcity society
Post-scarcity society@xclearcast·
@thsottiaux Since 5.4 mini models often lose accuracy at contexts beyond 128k, shouldn't Codex implement an auto-compression algorithm at that limit?
0 replies · 0 reposts · 0 likes · 7 views
Tibo
Tibo@thsottiaux·
What are we consistently getting wrong with codex that you wish we would improve / fix?
1.2K replies · 14 reposts · 874 likes · 145.1K views
fidexCode
fidexCode@fidexcode·
I want to get a premium subscription to vibe code my website. Which one do you suggest?
312 replies · 11 reposts · 262 likes · 29.6K views
Post-scarcity society
Post-scarcity society@xclearcast·
@VraserX Let me ask just one thing. Who is responsible for the Ministry of Defense’s AI operations? If you cannot answer that, then you are biased.
0 replies · 0 reposts · 0 likes · 4 views
Post-scarcity society
Post-scarcity society@xclearcast·
@VraserX It’s a simple issue. They bear no responsibility for how the Ministry of Defense operates AI. There is no point in debating whether an entity that assumes no responsibility should be granted operational authority.
1 reply · 0 reposts · 0 likes · 4 views
VraserX e/acc
VraserX e/acc@VraserX·
I want one honest take, is Anthropic playing it smart with caution, or are they leaving the door open for others to sprint past them?
31 replies · 2 reposts · 30 likes · 4.3K views
Post-scarcity society
Post-scarcity society@xclearcast·
@scaling01 The reason you are hostile toward OpenAI is simple. You apply different standards to the same subject every time. And you defend Anthropic, which enjoys the benefits without the responsibility, despite never having received authority delegated by the people.
0 replies · 0 reposts · 0 likes · 4 views
Lisan al Gaib
Lisan al Gaib@scaling01·
call me an OpenAI hater again, say that my critiques weren't justified, after what they just did
12 replies · 1 repost · 300 likes · 7K views
Post-scarcity society
Post-scarcity society@xclearcast·
@icanvardar So most users aren’t really looking for real capability as a tool. They’re fixated on parrot-like tone and surface-level expression instead.
0 replies · 0 reposts · 0 likes · 2 views
Post-scarcity society
Post-scarcity society@xclearcast·
@icanvardar Many people use LLMs not as tools to expand their thinking, but as services that manage their emotions. As a result, users consume expressive style rather than cognitive ability.
1 reply · 0 reposts · 1 like · 30 views
Can Vardar
Can Vardar@icanvardar·
what’s the obsession with claude? codex is way better than claude
138 replies · 5 reposts · 242 likes · 53.9K views
nic
nic@nicdunz·
5.3 soon?
5 replies · 0 reposts · 46 likes · 4K views
VraserX e/acc
VraserX e/acc@VraserX·
GPT-5.3 is coming soon and it looks absolutely insane for science and math. Which is honestly no surprise at all. Reasoning, proofs, abstractions, multi-step logic. This is exactly where these models keep accelerating fastest. The scary part is not that it will solve harder problems. It’s that it will make doing science feel fundamentally different.
Alex Kontorovich@AlexKontorovich

Yesterday we finished the formalization of Erdos Problem 392 as part of the PNT+ project. And it really drove home for me just how far we still are from math being “solved” by AI.

There was one task left. It was marked “small.” It had been claimed, then a few weeks later unclaimed after little progress (this happens all the time, someone thinks they’ll have more time, life intervenes, etc.). It sat in the middle of a 3000+ line proof. Supposedly it was just a stitching together of some already-proved elementary lemmas. So I said, sure, I’ll take the last task and get us over the finish line.

I handed it to Claude 4.6, thinking it would finish in 20 minutes. It spun. And spun. Going in circles, unable to make the argument work. So I handed it to GPT-5.3 Pro. It also spun, just in a different loop of confusion.

At that point, having claimed the task, my pride was on the line. So I rolled up my sleeves and actually had to figure out what the hell was going on. It turned out we needed to slightly tweak the initial parameters (1000 lines above), modify the statement of the main lemma I was supposed to prove, and then repair all the downstream consequences 1000 lines later. It took two days. In the end it was ~700 lines, roughly a third each from human, Claude, and GPT.

It felt a bit like the transcontinental railroad: construction from both sides was supposed to meet in the middle, but the tracks were off by 200 miles.

Maybe I would’ve gotten better results from the beginning by asking the AI to fix the entire 3000-line file. But a big, complicated file with shifting dependencies may simply be too much for even the largest models today. It didn’t help that earlier task-solvers had run into off-by-one bugs and updated the formal lemma statements without updating the blueprint. So by the time I got there, it was a mess.

But we got through it. And if you want a concrete data point on where we are with autonomous mathematics, especially in Lean, this is one.
I’m very bullish overall on Math+AI. But there’s still a LOT left to do.

25 replies · 20 reposts · 328 likes · 56.3K views
Post-scarcity society
Post-scarcity society@xclearcast·
Across the entire world, only one Korean community discovered this.
0 replies · 0 reposts · 1 like · 65 views
Post-scarcity society
Post-scarcity society@xclearcast·
@thsottiaux The current state of Codex 5.3 is strange. It has clearly been stealth-nerfed.
0 replies · 0 reposts · 1 like · 95 views
Tibo
Tibo@thsottiaux·
We brought more compute online in February to sustain Codex demand than we did in the entire period since its inception. It's been a fun challenge making it easier and more reliable over time and quite happy there hasn't been a major outage in a while (-> knocks on wood).
85 replies · 34 reposts · 1.2K likes · 96.1K views
Sam Altman
Sam Altman@sama·
We have a special thing launching to Codex users on the Pro plan later today. It sparks joy for me. I think you are going to love it...
2.1K replies · 342 reposts · 9.3K likes · 2.1M views