Nathan Smith

702 posts

@nhsmith

Joined December 2008
1.9K Following · 144 Followers
Julian Ciccale
Julian Ciccale@foliotrail·
It cracks me up seeing people say “intelligence is now a commodity.” Sure, maybe it is. But that does not mean everyone knows how to use it. Getting in shape is also “available to everyone.” Yet most people are not in shape. Imagine every family suddenly had an Einstein in the basement. Do you really think they would ask the right questions or use that advantage properly? Of course not. We are heading straight into a K-shaped outcome. Those who know how to leverage it will pull further ahead. The rest will not.
2
0
2
393
Nathan Smith
Nathan Smith@nhsmith·
@wandb Very nice! Much appreciated. :) I used a third-party app for a while but expecting this to be much better.
1
0
1
102
Weights & Biases
We heard you. The wandb mobile app is now LIVE on iOS 🚀 Monitor training runs from anywhere. Crash alerts the second something breaks. Live metrics on your phone. This has been the most requested feature in wandb history and it's finally here!
13
31
261
528K
Nathan Smith
Nathan Smith@nhsmith·
@alexgshaw Just needs to be an OpenAI-compatible API, so set the API key and base URL to point to a different provider like OpenRouter, Cerebras, etc.
0
0
0
91
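The provider swap Nathan describes can be sketched in a few lines. This is a hypothetical stdlib-only illustration (the base URL, key, and model name are placeholders, not from the thread): because providers like OpenRouter and Cerebras expose the same OpenAI-style `/chat/completions` shape, only the base URL and key change.

```python
import json
import urllib.request

def chat_request(base_url, api_key, model, messages):
    """Build (not send) an OpenAI-style chat completion request."""
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # provider key goes here
            "Content-Type": "application/json",
        },
    )

# Same code, different provider: only base_url and api_key change.
req = chat_request(
    "https://openrouter.ai/api/v1",  # any OpenAI-compatible endpoint
    "sk-example",                    # placeholder key
    "example/model",                 # placeholder model id
    [{"role": "user", "content": "hi"}],
)
```

Official SDKs follow the same pattern, typically via a `base_url` constructor argument or environment variable.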
Alex Shaw
Alex Shaw@alexgshaw·
@nhsmith Cool! Still locks in to OpenAI right?
1
0
0
161
Alex Shaw
Alex Shaw@alexgshaw·
Is there a non-Anthropic-specific alternative to Claude Agent SDK? We've started using it quite a bit in Harbor, but some users don't want the Anthropic lock in so I'm exploring open alternatives.
29
2
39
10.6K
Nathan Smith
Nathan Smith@nhsmith·
@trq212 Really appreciate your posts, Thariq! Always really insightful.
0
0
1
62
Thariq
Thariq@trq212·
one of the biggest realizations I've had working on Claude Code is that you fundamentally have to design agents for prompt caching first; almost every feature touches on it somehow. I wrote this in a day, but it's the culmination of months of learnings. Hope you enjoy it
Thariq@trq212

x.com/i/article/2024…

97
262
4.4K
925.2K
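Thariq's "prompt caching first" point can be illustrated with a small sketch (hypothetical code, not from Claude Code): providers generally cache on an exact prompt-prefix match, so an append-only conversation history keeps every prior turn cache-hot, while rewriting earlier messages (e.g. mutating the system prompt each turn) invalidates the cached prefix.

```python
def longest_cached_prefix(prev, curr):
    """Number of leading messages two requests share; a stand-in for
    what a provider's prefix-based prompt cache could reuse."""
    n = 0
    for a, b in zip(prev, curr):
        if a != b:
            break
        n += 1
    return n

history = [{"role": "system", "content": "You are an agent."}]
turn1 = history + [{"role": "user", "content": "list files"}]
# Append-only update: every message from turn1 is still a prefix of turn2.
turn2 = turn1 + [{"role": "assistant", "content": "done"},
                 {"role": "user", "content": "now read README"}]
assert longest_cached_prefix(turn1, turn2) == len(turn1)

# Mutating an earlier message breaks the shared prefix at index 0.
turn2_bad = [{"role": "system", "content": "You are a NEW agent."}] + turn2[1:]
assert longest_cached_prefix(turn1, turn2_bad) == 0
```

This is why agent features that edit history in place (summarization, prompt rewriting) interact with caching design.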
Bill Ackman
Bill Ackman@BillAckman·
Check out @AlphaSchoolATX
Teslaconomics@Teslaconomics

I might get some pushback for this, but I honestly think a lot of parents, especially in places like Silicon Valley and especially many Asian parents, are training their kids for the wrong world. I see kids at the age of 7-8 packed with after-school math, more reading, more test prep, with the goal to make them “smarter.” But from my perspective, living deep in the AI world every single day, I’m pretty sure raw intelligence is about to become a commodity.

Very soon, AI is going to do math better than the best mathematician, it’ll diagnose better than top doctors around the world, it’ll draft contracts better than elite lawyers, and it’ll learn faster than any PhD, instantly, endlessly, and without any fatigue. All of that knowledge will live right in your pocket. So think about it… if we’re raising kids to win by being “the smartest in the room,” we’re really training them for something that’s already being replaced. In my opinion, this is a waste of time, $, and effort.

What I focus on with my kids is very different. I care about willpower. I care about passion. I care about loving something enough to stick with it, especially when it feels hard. And as a Dad, my job is to support that, whatever it is, and teach them to never give up.

I could be totally wrong though… But when I look at where AI is headed, I don’t think the future belongs to the kid who memorized the most formulas or did the most math problems, etc. In the future, I think the winners are going to be kids who 1/ can push through frustration 2/ can stay curious 3/ can keep going deeper into their passions than others 4/ can use AI tools to build cool things 5/ have the willpower to never give up.

In this day and age, school doesn’t really teach this, and I don’t think after-school classes teach it either. I don’t think any of this can really be taught at school tbh; it’s something that is developed inside the home through the environment we as parents cultivate.

In a world where AI will help you build anything, create anything, and learn anything instantly, I don’t think the real edge will be intelligence anymore like in the past. The edge will come down to grit, discipline, emotional strength, and the ability to keep going as others quit.

AI will be so deeply woven into our kids’ lives whether we like it or not. That part is unavoidable. However, what is avoidable is raising kids who only know how to follow instructions, chase grades, and wait for approval. I always tell my kids, I don’t care what grade you get on a test. I care that you know what you got wrong, why you got it wrong, and what you’re doing to avoid that mistake in the future.

Because I firmly believe that in the future, the kids who will thrive the most will be the ones who want something badly enough to go after it, who aren’t afraid to fail, and who know how to leverage AI. Just my two cents. But if we’re serious about the future, I think it’s time parents start training for that world, NOT the one we grew up in.

50
30
556
263.7K
Nathan Smith reposted
Rohan Paul
Rohan Paul@rohanpaul_ai·
Jensen Huang says nothing would give him more joy than if none of his engineers were coding at all. Instead, they’d just be solving undiscovered problems. His framework is 'Purpose vs Task': coding is just a task that should be minimized (ideally to 0).
74
252
3.2K
285.7K
Nathan Smith reposted
METR
METR@METR_Evals·
We estimate that, on our tasks, Claude Opus 4.5 has a 50%-time horizon of around 4 hrs 49 mins (95% confidence interval of 1 hr 49 mins to 20 hrs 25 mins). While we're still working through evaluations for other recent models, this is our highest published time horizon to date.
METR tweet media
67
270
2K
1.3M
Tech with Mak
Tech with Mak@techNmak·
These are literally the kind of LLM interview questions most candidates wish they had seen earlier. A curated list of LLM interview questions - shared by Hao Hoang Want this doc? Follow @techNmak and comment “LLM” - I’ll send it over.
Tech with Mak tweet media
1.4K
501
4.3K
407.8K
Nathan Smith reposted
Devendra Chaplot
Devendra Chaplot@dchaplot·
Tinker is now open to everyone! We are also adding:
- Vision support with Qwen3-VL
- New model: Kimi K2 Thinking (1T params)
- OpenAI API-compatible inference
Start training models within minutes: thinkingmachines.ai/blog/tinker-ge…
Thinking Machines@thinkymachines

Tinker is now generally available. We also added support for advanced vision input models, Kimi K2 Thinking, and a simpler way to sample from models. thinkingmachines.ai/blog/tinker-ge…

14
33
548
134.2K
Nathan Smith reposted
Ron Paul
Ron Paul@RonPaul·
Beware the slippery slope to Gestapo. Freedom of speech is not granted by the government. Non-citizens in America have freedom of speech.
1.3K
2.9K
16.2K
1.1M
Nathan Smith reposted
METR
METR@METR_Evals·
When will AI systems be able to carry out long projects independently? In new research, we find a kind of “Moore’s Law for AI agents”: the length of tasks that AIs can do is doubling about every 7 months.
METR tweet media
166
864
4.7K
8.6M
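METR's "Moore's Law for AI agents" is a simple exponential: if the achievable task length doubles every 7 months, then after t months it grows by a factor of 2^(t/7). A worked sketch with illustrative numbers (the starting horizon below is made up, not a METR figure):

```python
def horizon(h0_minutes, months, doubling_period=7.0):
    """Task-length horizon after `months`, given doubling every
    `doubling_period` months (exponential growth)."""
    return h0_minutes * 2 ** (months / doubling_period)

# Illustrative only: a 1-hour horizon doubles to 2 hours in 7 months,
# and reaches 4 hours in 14 months.
assert horizon(60, 7) == 120.0
assert round(horizon(60, 14)) == 240
```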
Nathan Smith reposted
Rep. Jason Crow
Rep. Jason Crow@RepJasonCrow·
Secretary Hegseth needs to resign.
3.5K
4.2K
20.8K
912.4K
Nathan Smith
Nathan Smith@nhsmith·
@BillAckman Don’t try to turn this on the journalist @BillAckman. The incompetent government officials are the only ones to blame.
0
0
3
38
Bill Ackman
Bill Ackman@BillAckman·
If a journalist finds himself inadvertently included in a chat group with senior military and civilian leadership about a military operation, should he continue to listen or sign off? In the alternative, if a journalist overhears a conversation in the adjoining hotel room which begins to reveal critical U.S. intelligence, should he listen more carefully to learn as much as possible, or should he step away so the intelligence remains private? Should he reveal that the intelligence breach occurred if it harms our country, or should he share it with the world? What do journalistic ethical standards say on the above?
2.2K
1K
11.3K
1.1M
Nathan Smith
Nathan Smith@nhsmith·
@elonmusk Why don’t you personally adhere to the same principles?
0
0
1
92
Elon Musk
Elon Musk@elonmusk·
Extremely important difference
Elon Musk tweet media
17.9K
33.5K
379.7K
69.8M
Nathan Smith reposted
Alex Horton
Alex Horton@AlexHortonTX·
X has purged community notes from this Pentagon account that posts about transparency and accountability. 1: At 12:06am 2: At 9:44am
Alex Horton tweet media (2 images)
266
3.7K
16.6K
677.8K
Nathan Smith reposted
Brit Hume
Brit Hume@brithume·
Oh for God's sake, the administration has already confirmed the authenticity of the message.
DOW Rapid Response@DOWResponse

. @SecDef response to the @TheAtlantic article…. “You’re talking about a deceitful and highly discredited ‘so-called journalist.’”

4.7K
12.8K
106.8K
6.7M
Nathan Smith reposted
Mike Knoop
Mike Knoop@mikeknoop·
The chart below is IMO the most important thing we published today (h/t @bryanlanders). It shows that "scaling up" existing ideas, even the latest AI reasoning systems with log-linear accuracy/compute characteristics, is insufficient for AGI. We still need some architectural or algorithmic changes to reach AGI.
ARC Prize@arcprize

Intelligence isn't just capability; it's efficiency. We can no longer report performance as a single metric. Going forward our leaderboard will track the *cost* of performance as a first class citizen. ARC-AGI-2 is showing material resistance over ARC-AGI-1 towards reasoning models. It is unknown what scale, if any, would reach human performance.

7
14
129
11.1K
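The "log-linear accuracy/compute" claim has a concrete arithmetic consequence worth spelling out: if accuracy = a + b·log10(compute), every fixed accuracy gain costs another 10x in compute, which is why pure scaling stalls. A sketch with made-up coefficients (not ARC Prize numbers):

```python
import math

def accuracy(compute, a=0.1, b=0.2):
    """Log-linear scaling curve: illustrative coefficients only."""
    return a + b * math.log10(compute)

# Each extra +0.2 of accuracy requires 10x more compute...
assert round(accuracy(10) - accuracy(1), 10) == 0.2
assert round(accuracy(100) - accuracy(10), 10) == 0.2
# ...so closing a fixed gap by scaling alone means exponentially rising cost,
# which is the sense in which "scaling up existing ideas is insufficient."
```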