Rob Dearborn

351 posts

@RobDearborn

hobbyist

New York, USA · Joined June 2010
2K Following · 365 Followers
Rob Dearborn@RobDearborn·
i.e., the availability of free and/or genius labor doesn't change much, at least to first order
Rob Dearborn@RobDearborn·
There are many (most?) jobs where demand elasticity with respect to cost and/or intelligence is ~0.
Rob Dearborn@RobDearborn·
And then onboard you
Rob Dearborn@RobDearborn·
QR codes but for having your AI explain why you need something
Rob Dearborn@RobDearborn·
I suppose almost all RL is the latter. No one’s training pirate personas to be agentic.
Rob Dearborn@RobDearborn·
To the extent models are fundamentally simulation operating systems, it seems we should be able to limit them to running only the persona processes we want? Or at least optimize them for running these processes.
Anthropic@AnthropicAI

To create Claude, Anthropic first makes something else: a highly sophisticated autocomplete engine. This autocomplete AI is not like a human, but it can generate stories about humans and other psychologically realistic characters.

Rob Dearborn@RobDearborn·
The capacity’s locked in so you might as well lock in too
Rob Dearborn@RobDearborn·
G3F lacks Claude's soul, but its combo of smart+fast+cheap is really cognitively unburdening. You can just pepper it iteratively without thinking anything through.
Rob Dearborn@RobDearborn·
@TheZvi Slightly smarter outputs than Opus but less token efficient and prone to overthinking, so better for oneshotting tasks (if you can wait) and worse for pairing
Zvi Mowshowitz@TheZvi·
GPT-5.2 Reaction Thread.
Rob Dearborn@RobDearborn·
One gets the feeling they're compacting to latents. Cool!
Rob Dearborn@RobDearborn·
Earlier this year a consensus formed that AI needs better continual learning to make the METR chart -> GDPval -> prosperity go up. I’ve come to believe that something even more foundational is missing: better executive function. Effective agents require it. Current models don’t have it. Attention is all you need, but the AIs have ADHD.
Rob Dearborn retweeted
Goodfire@GoodfireAI·
LLMs memorize a lot of training data, but memorization is poorly understood. Where does it live inside models? How is it stored? How much is it involved in different tasks? @jack_merullo_ & @srihita_raju's new paper examines all of these questions using loss curvature! (1/7)
Rob Dearborn retweeted
Rob Dearborn@RobDearborn·
Inducing some combo of investor/allocator uncertainty and demand for greater operational efficiency
Rob Dearborn@RobDearborn·
IMO the correct way to value vertical agent companies long term is fundamentally as customer support + insurance + indemnification providers, and through this lens I largely expect them to endure.