Jon BeltranDeHeredia
3.6K posts

Jon BeltranDeHeredia
@jonbho
I like algebra, software, music, and other fine arts.
Muga · Joined March 2009
2.1K Following · 1.2K Followers

@petergyang buyersguide.macrumors.com/#mac is a useful reference on when to buy or wait on Apple products.

The @DarioAmodei interview.
0:00:00 - What exactly are we scaling?
0:12:36 - Is diffusion cope?
0:29:42 - Is continual learning necessary?
0:46:20 - If AGI is imminent, why not buy more compute?
0:58:49 - How will AI labs actually make profit?
1:31:19 - Will regulations destroy the boons of AGI?
1:47:41 - Why can’t China and America both have a country of geniuses in a datacenter?
Look up Dwarkesh Podcast on Youtube, Spotify, Apple Podcasts, etc.
Jon BeltranDeHeredia retweeted

AI is cool and all... but a new paper in @ScienceMagazine kind of figured out the origin of life?
The paper reports the discovery of a simple 45-nucleotide RNA molecule that can perfectly copy itself.


@manast Thanks Manuel! I managed to pop up the keymapper which I didn't know existed... still my gametools version wants PrintScr for activation and I'm not succeeding at sending it for some reason, so no pop up for me. I'll try to track down another version of gametools and see!

@jonbho You have to use the mapper editor. Choose the key you want to use instead; for example, I used the key under ESC, which is almost never used for anything, then clicked Add and then, in my case, the *; in your case the PrintPage or whatever.

Jon BeltranDeHeredia retweeted

Claude Code Soviet Edition
Stealing the productivity tip from @delba_oliveira but adapting it to my own childhood obsession: C&C Red Alert 2. I wonder how fast this gets old
github.com/tcz/claude-cod…

@brankopetric00 My experience is that Graviton cost is significantly lower only for instances w/o local SSD storage (bummer), and CPU performance is slower than x86_64 (r7g clearly slower than r7a etc). Ours is a native, highly parallel compute/database engine in C++ using LLVM JIT compilation.

Assumed Graviton instances would save us 20% on everything.
Reality was more nuanced:
- Compute-bound workloads: 25% savings
- Memory-bound workloads: 15% savings
- Our Java app with JIT compilation: 5% savings initially
Had to tune JVM flags for ARM architecture to get real benefits.
Graviton savings depend on workload type. Test with your actual applications, not benchmarks.
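A minimal sketch of the kind of ARM-oriented JVM tuning the thread alludes to. The flag values are illustrative assumptions, not the author's actual settings; each one has to be benchmarked against your own workload:

```shell
# Hypothetical example: flags commonly re-evaluated when moving a JVM
# service from x86_64 to Graviton/ARM64. Values are placeholders.
# Fixed heap size avoids resize churn; GC choice and transparent huge
# pages should be profiled on the actual instance type.
JAVA_OPTS="-Xms8g -Xmx8g -XX:+UseParallelGC -XX:+UseTransparentHugePages"
java $JAVA_OPTS -jar app.jar
```

Whether any given flag helps is workload-dependent, which is the thread's point: measure with your own application, not someone else's benchmark.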

@EMostaque Might be the other way around. Model providers are heavily subsidizing users and losing money hand over fist. If there's a big correction, it will make investors shy away from providing the capital for that. And if demand remains through the roof, they might just raise prices.

@thiagotm Hey, interesting write up. Any chance you'd share more details about the app/extension/web dashboard which helped solve the problem? Asking for a friend...


@YossiKreinin The vi emulation in vs code is pretty good...
Jon BeltranDeHeredia retweeted

🔥 MIT just exposed every top AI model and it’s not pretty.
They built a new test called WorldTest to see if AI actually understands the world… and the results are brutal.
It doesn’t just check how well a model predicts the next frame or maximizes reward; it tests whether it can build an internal model of reality and use it to handle new situations.
They built AutumnBench: 43 interactive worlds and 129 tasks where AIs must:
• Predict hidden parts of the world (masked-frame prediction)
• Plan sequences of actions to reach a goal
• Detect when the environment’s rules suddenly change
Then they tested 517 humans vs. Claude, Gemini 2.5 Pro, and o3.
Humans crushed every model. Even massive compute scaling barely helped.
The takeaway is wild: today’s AIs don’t understand environments; they just pattern-match inside them.
They don’t explore strategically, revise beliefs, or run experiments like humans do.
WorldTest might be the first benchmark that actually measures understanding, not memorization.
The gap it reveals isn’t small; it’s the next grand challenge in AI cognition.
(Comment “Send” and I’ll DM you the paper)


@catalinmpit Drop your ego and start doing smaller projects that fit into your attention window. Rinse and repeat until it's a habit. Then work on expanding the attention window.
Jon BeltranDeHeredia retweeted