
Thomas Alan
@1thomasalan
Creative technologist • Photographer • Evolving human in Japan.

Replit CEO Amjad Masad on why the most ambitious employees are no longer blocked by engineering:

"How can I upgrade my workforce for them to become generalist business people that can wield AI for the benefit of our customers and our bottom line?"

"The most ambitious people are creating billions of dollars of value of their company."

"There are people that are closest to the customer that feel that they have ideas that could make more revenue for the business, but they're often blocked by engineering."

"Now a lot of them are bringing in Replit to work. They're building that idea, and they're making the money for their company. And then they're getting promoted, and then given more power."

"I'm going to build a team of vibe coders that are going to go around the company and find all the inefficiencies and go solve them."

"So we have a new role of this generalist automator."

@amasad with @jackhneel

New paper out: AI Must Embrace Specialization via Superhuman Adaptable Intelligence. With @JudahGoldfeder, Philippe Wyder, and @ylecun.

There is quite a lot of buzz around our paper, so here is my take. Everyone's talking about AGI, but nobody agrees on what it means, and that confusion is actively hurting the field. We surveyed the most prominent definitions and mapped them along two axes: the kind of capability they refer to (learning vs. doing) and the scope (anything, anything important, anything humans can do). The result is a landscape of definitions that don't just disagree with one another; many are internally inconsistent.

Our starting point is simple: human intelligence is not general. We are specialized creatures, shaped by evolution to excel at a narrow set of tasks critical for survival. We feel general because we can't see our own blind spots. Magnus Carlsen is the greatest human chess player ever, but compared to what's computationally achievable, he's not actually good at chess. That's not a knock on Magnus. It's a statement about the limits of human adaptation, and about why anchoring AI's North Star to human-level performance is the wrong move.

We propose the term Superhuman Adaptable Intelligence (SAI): intelligence that can learn to exceed humans at anything important we can do, and that can also tackle tasks entirely outside the human domain. The metric isn't a growing checklist of benchmarks. It's adaptation speed: how fast can a system acquire a new skill? (A toy sketch of this measure follows below.)

This has concrete implications for how we build. SAI points toward self-supervised learning for acquiring generic knowledge from unlabeled data, and world models for planning and zero-shot transfer. It also pushes back against the current monoculture of autoregressive architectures, because specialization demands architectural diversity, not one paradigm to rule them all. Or as we put it: the AI that folds our proteins should not be the AI that folds our laundry.

This paper grew out of a conversation with Yann on our podcast, The Information Bottleneck, which led to a public exchange with @elonmusk and @demishassabis on X (not every paper can cite a Twitter feud as source material).
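
A minimal sketch of how "adaptation speed" could be operationalized: count how many examples a learner needs on a brand-new task before it clears a target score. The toy task, the perceptron learner, and the 90% threshold below are hypothetical illustrations for the idea, not the paper's actual benchmark or protocol.

```python
# Toy illustration: adaptation speed measured as examples-to-threshold
# on a new task. Everything here (task, learner, threshold) is a stand-in.
import random


def make_task(seed: int):
    """A toy binary task: label is 1 if a hidden weighted sum is positive."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(5)]

    def sample():
        x = [rng.uniform(-1, 1) for _ in range(5)]
        y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
        return x, y

    return sample


class PerceptronLearner:
    """A deliberately simple online learner standing in for 'the system'."""

    def __init__(self, dim: int = 5):
        self.w = [0.0] * dim

    def predict(self, x):
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) > 0 else 0

    def update(self, x, y):
        err = y - self.predict(x)
        self.w = [wi + 0.1 * err * xi for wi, xi in zip(self.w, x)]


def adaptation_speed(learner, sample, target_acc=0.9, window=50, max_steps=10_000):
    """Return how many examples were seen before rolling accuracy over the
    last `window` predictions first reaches `target_acc`."""
    recent = []
    for step in range(1, max_steps + 1):
        x, y = sample()
        recent.append(1 if learner.predict(x) == y else 0)  # evaluate before learning
        learner.update(x, y)
        recent = recent[-window:]
        if len(recent) == window and sum(recent) / window >= target_acc:
            return step
    return max_steps  # never adapted within the budget


if __name__ == "__main__":
    steps = adaptation_speed(PerceptronLearner(), make_task(seed=0))
    print(f"examples needed to adapt to the new task: {steps}")
```

Under this framing, a "faster" system is simply one that reaches the threshold with fewer examples on a task it has never seen, which is a different yardstick than accumulating more benchmark wins.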
