Jonathan Ragan-Kelley @jrk
5.8K posts

The lyf so short, the craft so longe to lerne.

02139 · Joined January 2007
616 Following · 1.1K Followers

Jonathan Ragan-Kelley retweeted
Simon Willison @simonw
It genuinely feels to me like GPT-5.2 and Opus 4.5 in November represent an inflection point - one of those moments where the models get incrementally better in a way that tips across an invisible capability line where suddenly a whole bunch of much harder coding problems open up
Jonathan Ragan-Kelley
This is close to, but not the same as, "LLM in a loop with tools," because (in the context of the piece) it emphasizes the significance of the shift to one universal, general-purpose tool which is "just using a computer" (e.g. Bash, etc.)
Jonathan Ragan-Kelley
@simonw This quote from the recent "Bitter Lesson of LLM extensions" post (sawyerhood.com/blog/llm-exten…) resonated in a way that felt like it belonged in your canon: "An agent isn't just a[n] LLM in a while loop. It's an LLM in a while loop that has a computer strapped to it."
Jonathan Ragan-Kelley
@jon_barron Also, back of the envelope carbon analysis: CVPR has ~10k submissions and ~10k attendees. If the mean attendee flies round trip NY-LA (some less, some much more), that’s 1 MT CO2. Equivalent mean compute / submission is ~3000 H100-hours (4 months) with average US electricity.
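The flight-vs-compute comparison above can be sanity-checked with a few lines of arithmetic. This is a rough sketch, not the author's calculation: the GPU power draw, grid carbon intensity, and per-passenger flight emissions below are all assumed round numbers, not measured figures.

```python
# Back-of-envelope check of the CVPR carbon comparison.
# Assumed constants (not from the tweet):
#   - H100 board power under load: ~0.7 kW
#   - average US grid intensity: ~0.4 kg CO2 per kWh
#   - NY-LA round-trip flight: ~1 metric ton CO2 per passenger

H100_KW = 0.7            # assumed per-GPU power draw, kW
GRID_KG_PER_KWH = 0.4    # assumed average US grid carbon intensity
FLIGHT_T_CO2 = 1.0       # assumed NY-LA round trip, metric tons CO2

# How many H100-hours emit as much CO2 as one round-trip flight?
gpu_hours = FLIGHT_T_CO2 * 1000 / (H100_KW * GRID_KG_PER_KWH)
months = gpu_hours / (24 * 30)  # one GPU running 24/7

print(f"H100-hours per flight-equivalent: {gpu_hours:.0f}")
print(f"≈ {months:.1f} months of one H100 running continuously")
```

With these assumed constants the break-even comes out near 3,600 H100-hours (about five GPU-months), which is in the same ballpark as the tweet's ~3,000 H100-hours / ~4 months; the gap is within the slack of the assumptions.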
Jon Barron @jon_barron
It looks like @CVPR has implemented a new mandatory "Compute Reporting Form" that must be submitted alongside any paper submission. Though I am sympathetic to the motivations for this change, I am opposed to it for a variety of reasons:
#CVPR2026 @CVPR

#AI research has an invisible cost: compute. Starting with #CVPR2026, authors will report their compute usage. Aggregated data will help the community understand who can participate, what is sustainable, and how resources are used, promoting more transparent & equitable research.

Jonathan Ragan-Kelley
@abrakjamson @simonw This is great, thanks! I’m unsurprised that Microsoft is out front on this given their longstanding enterprise productivity tools focus and resulting culture. (And good on you for it all the same!) I’m very surprised that others aren’t taking it more seriously by now.
Jonathan Ragan-Kelley
@simonw I'm curious about your thoughts on policies and issues around providers training on model queries.
Jonathan Ragan-Kelley
@simonw Anyway, could be another useful thing to elevate more prominently with your platform, as you so effectively have for the “lethal trifecta”! (And: big fan/love your work/thanks for everything you do :)
Jonathan Ragan-Kelley
@simonw I understand why lawyers don’t want them to promise specifics, but it seems like a huge problem not to have a clear answer. I would have hoped the product/business owners would see this bigger picture cost/benefit and overrule the lawyers’ narrow conservatism by now.
Jonathan Ragan-Kelley
@simonw @bleuonbase I don’t know either, and it’s *possible* the labs are just empirically confident it won’t memorize PII. But they memorize an awful lot of the training set! See all the stunts to regurgitate copyrighted content. Seems like a huge risk.
Simon Willison @simonw
@bleuonbase @jrk I genuinely don't know! That's why I'd love to see an AI lab provide a clear explanation
Jonathan Ragan-Kelley
I therefore suspect Google *aren't* e.g. directly throwing chat history into pretraining. But it certainly seems like something everyone—both users and providers—should want to be very clear about.
Jonathan Ragan-Kelley
It would be a privacy and PR nightmare any time people figured out how to exfiltrate private information memorized from training on chat logs or browser usage.
Jonathan Ragan-Kelley retweeted
James Surowiecki @JamesSurowiecki
Just figured out where these fake tariff rates come from. They didn't actually calculate tariff rates + non-tariff barriers, as they say they did. Instead, for every country, they just took our trade deficit with that country and divided it by the country's exports to us. So we have a $17.9 billion trade deficit with Indonesia. Its exports to us are $28 billion. $17.9/$28 = 64%, which Trump claims is the tariff rate Indonesia charges us. What extraordinary nonsense this is.
James Surowiecki @JamesSurowiecki

It's also important to understand that the tariff rates that foreign countries are supposedly charging us are just made-up numbers. South Korea, with which we have a trade agreement, is not charging a 50% tariff on U.S. exports. Nor is the EU charging a 39% tariff.

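The formula the thread reverse-engineers can be reproduced directly. The Indonesia figures below are the ones quoted in the tweet itself; the code only restates that arithmetic.

```python
# Reproducing the claimed formula: for each country,
#   "tariff rate" = (US trade deficit with that country)
#                   / (that country's exports to the US)
# Figures for Indonesia, as quoted in the tweet ($ billions):

deficit_usd_bn = 17.9   # US trade deficit with Indonesia
exports_usd_bn = 28.0   # Indonesia's exports to the US

implied_rate = deficit_usd_bn / exports_usd_bn
print(f"Implied 'tariff rate': {implied_rate:.0%}")
```

The ratio comes out to 64%, matching the rate the tweet says was presented as the tariff Indonesia charges, rather than any measured tariff or non-tariff barrier.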
Jonathan Ragan-Kelley
@tmcw @Noahpinion It seems like real things to target are the step-up basis and artificially low rates on financial activity (e.g., "carried interest"). The step-up basis, in particular, is the thing that incentivizes accruing forever and the reason it's possible to escape taxes altogether.