Extended Brain

6.6K posts

@Extended_Brain

A curious mind, always exploring the vast landscapes of knowledge and creativity. Passionate about learning, the future of AI, and philosophy

South East Asia 📍 Joined April 2011
865 Following · 1.3K Followers
Pinned Tweet
Extended Brain@Extended_Brain·
Life is Interpretation: Starting from Newman & Sarkar's "Biology and Physics", which considers BMCs (Biomolecular Condensates) as having 5 distinct simultaneous properties which cannot fit any existing physics 🧵
Extended Brain reposted
Carlos E. Perez@IntuitMachine·
Breaking! Claude Mythos and Capybara Accidentally Leaked

According to reports from Fortune and The Information, Anthropic accidentally exposed a large cache of internal assets due to a CMS misconfiguration. These documents reveal the development of Claude Mythos, a model that marks the debut of a new high-performance tier called "Capybara."

The Leaked Details

Performance: It is positioned as a "step-change" above Claude Opus 4.6 (which was just released in February). It allegedly achieves significantly higher scores in software coding, academic reasoning, and cybersecurity.

The "Capybara" Tier: This is a new, larger model class designed to be more "intelligent" and "connective" than the Opus tier.

Safety & Cyber Risks: Anthropic describes the model as having "unprecedented" cybersecurity capabilities—to the point that it could "presage an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."

Release Strategy: Because of these risks and high compute costs, they are opting for a "gradual" rollout, starting with early access for "cyber defenders" to help secure their codebases before the model (or similar ones) becomes widely available.

Capybara (The Tier): Chosen for the animal's "social bridge" nature, it marks a move toward "connective intelligence"—models that don't just solve tasks but link disparate domains.

Mythos (The Model): Moving beyond a single "Opus" (masterpiece) to "Mythos" (a foundational worldview). It signals a model capable of understanding the deep, systemic "lore" of complex codebases and security infrastructures.
Carlos E. Perez@IntuitMachine·
🧵 Andrej Karpathy (in a recent interview) is not really arguing that AI is a better tool. He is arguing that the structure of knowledge work is changing. His language is much more radical than most summaries suggest.

1/ “Code's not even the right verb anymore.” That line is doing a lot of work. He is not saying coding got faster. He is saying the old description of the activity no longer fits.

2/ What replaces it? “I have to express my will to my agents for 16 hours a day.” That is a different model of work: less direct execution, more specification, delegation, and supervision.

3/ Karpathy is also explicit that this is not hypothetical. “In December is when something really flipped. I went from 80/20 of writing code by myself versus delegating to agents, to like 20/80.” That is not an incremental gain. That is a regime change.

4/ He pushes the point further: “I don't even think I've typed a line of code probably since December, basically.” Whether or not others are there yet, his claim is clear: for at least some frontier users, the workflow has already changed.

5/ He thinks most people have not caught up to this fact. “I don't think a normal person actually realizes that this happened or how dramatic it was.” And then even more sharply: “Their default workflow of building software is completely different as of basically December.”

6/ What is the new bottleneck? Not intelligence, exactly. “It's limited by everything.” And then: “It's not that the capability is not there; it's that you just haven't found a way to string together what's available.”

7/ That is why he keeps returning to the phrase: “It's a skill issue.” Which sounds flippant, but his point is serious: the limiting factor is increasingly the human ability to structure tasks, write instructions, manage memory, and create good loops.

8/ He gives the clearest statement of where mastery is going: “Mastery looks like going ‘up the stack.’ It’s not about a single session; it’s about how multiple agents collaborate in teams.” This is the real shift. Not better autocomplete. Delegated cognition organized at a higher level.

9/ That is also why he asks: “How can I have not just a single session of Claude or Cursor... how can I have more of them?” He is already thinking beyond one assistant. The unit of productivity is becoming a system of agents, not a single chat window.

10/ He even reframes the economics of attention this way: “What token throughput do you command?” That is an extraordinary sentence. It suggests the scarce resource is no longer only human labor, but your ability to direct and absorb machine labor at scale.

11/ His smart-home example is not a gimmick. It supports a deeper thesis: “These smart home apps shouldn’t even exist in a sense; it should just be APIs and agents using them directly.”

12/ Then he says the really important part: “Agents are the glue of the intelligence.” And then: “The industry has to reconfigure because the customer is no longer the human; it’s the agent acting on behalf of the human.”

13/ That is a much larger claim than “AI improves UX.” It implies software may increasingly be designed for machine operators rather than direct human operation. Or in his words: “This refactoring will be substantial.”

14/ He extends the same logic into research. “I don't want to be the researcher looking at results. I want to arrange the objective once and hit ‘go.’” That is not copilot language. That is automation of the experimental loop itself.

15/ He is even blunter here: “We shouldn't be running these hyperparameter searches; we should be removing humans from the process.”

16/ And then this: “Researchers can contribute ideas, but they shouldn't be enacting them.” That may be one of the clearest formulations of his worldview in the whole transcript. Human role: propose objectives and ideas. Machine role: execute search and evaluation.

17/ He makes the organizational implication explicit too: “A research organization is essentially a set of markdown files describing roles.” That is a revealing abstraction. It treats institutions not as fixed human arrangements, but as programmable systems open to optimization.

18/ But he is not naive about where this works. “If you can't evaluate it, you can't auto-research it.” That is probably one of the most useful general principles in the interview. AI scales fastest where feedback is cheap, clear, and reliable.

19/ He also gives a sharp description of current-model limitations: “These models are still ‘jagged.’ I feel like I'm talking to a brilliant PhD student and a 10-year-old at the same time.” That is better than most benchmark discourse. Capability is not smooth. It is uneven, spiky, and domain-dependent.

20/ He does not think the future is one universal oracle either. “I think we should expect more ‘speciation.’” And: “We don't need one oracle that knows everything.” That points to an ecosystem of specialized models, not just one giant general system.

21/ On the economy, he is more expansionary than most doom narratives. “It's the Jevons Paradox. Software was scarce because it was too expensive. If it becomes cheaper, demand goes up.”

22/ And on the digital versus physical split: “Flipping bits is a million times faster than accelerating matter. The physical world will lag.” That is a clean way to understand where change comes first.

23/ His education comments may be the most underrated part. “I realized I shouldn't be explaining this to people; I should be explaining it to agents.” Then: “If the agent gets it, they can explain it to the human in any language with infinite patience.”

24/ And finally, the closing line that ties the whole worldview together: “You should have markdown documents for agents, not HTML for humans.” Then the punchline: “Your job is now the few bits that agents can't do.”

25/ That is the core thesis. Not just that AI will help people work. But that software, research, interfaces, and teaching may all be reorganized around agents as the primary operators. Karpathy’s language is not the language of assistance. It is the language of reconfiguration.
Extended Brain reposted
AI at Meta@AIatMeta·
Today we're introducing TRIBE v2 (Trimodal Brain Encoder), a foundation model trained to predict how the human brain responds to almost any sight or sound. Building on our Algonauts 2025 award-winning architecture, TRIBE v2 draws on 500+ hours of fMRI recordings from 700+ people to create a digital twin of neural activity and enable zero-shot predictions for new subjects, languages, and tasks. Try the demo and learn more here: go.meta.me/tribe2
Extended Brain@Extended_Brain·
We apply Pattee's epistemic cut to BMCs and consider the Write-Read-Rewrite loop in the Cell differentiation process, and then apply Biosemiotics ⬇️
Extended Brain reposted
Ahmad@TheAhmadOsman·
Peter is wrong. He needs to try MiniMax M2.7 and Qwen 3.5 27B in OpenClaw before making these comments.
Peter Steinberger 🦞@steipete

@sbaratelli @nvidia @openclaw most folks will want as much intelligence as possible, and open models aren't there yet.

Extended Brain reposted
Sudo su@sudoingX·
the founder of openclaw joined the company that was founded to make AI open and now charges you per token. and is now telling you open models aren't there yet.

i run qwen 3.5 27b on a single 3090. 50 tok/s. it writes code, handles tool calls, runs agent sessions for hours. the model built a full space shooter, 3,000+ lines, from a single prompt. i published the data.

"open models aren't there yet" is what you say when your harness can't parse tool calls on local models and you blame the model instead of fixing the harness. i have the DMs. people switch from openclaw to hermes agent and their "broken" models suddenly work.

pair a good model with a good harness like hermes agent where parsers are built per model. your data stays on your machine. no API key. 0 subscription. no one training their next model on your thinking.

don't listen to someone with an OpenAI paycheck telling you open source can't do the job. install it. test it yourself. the receipts are on my timeline.

he built a harness that couldn't handle local models and chose the API paycheck over fixing it. that should tell you everything.
Peter Steinberger 🦞@steipete

@sbaratelli @nvidia @openclaw most folks will want as much intelligence as possible, and open models aren't there yet.

Extended Brain@Extended_Brain·
@PessoaBrain Check out my new post! This examines Newman & Sarkar's "Biology and Physics" and continues the themes of my earlier posts "The Flaw is Source Code" and "The Genetic Text". It proposes that Meaning is a Form of Matter, in the general sense. extendedbrain.substack.com/p/life-is-inte…
Luiz Pessoa@PessoaBrain·
Biology and Physics. Looks very interesting. "We counterpose this view to ... reductionist physical theories of biological systems and highlight open questions regarding incompletely characterized and enigmatic forms of living matter." arxiv.org/abs/2603.11234…
Extended Brain reposted
Justine Moore@venturetwins·
Incredible clip on how @karpathy uses OpenClaw to run his house via texts. You can ask agents to find connected hardware at your home (like a Sonos speaker), and they'll search the network + hack in for you 🤯 You can control music, lights, HVAC, security...w/o writing any code.
Extended Brain@Extended_Brain·
@dwarkesh_sp @burny_tech Darwin's idea is simpler than Newton's, but biology is much more complicated (there are many more variables) than physics
Dwarkesh Patel@dwarkesh_sp·
The Origin of Species was published in 1859. Principia Mathematica was published in 1687, two centuries earlier. Conceptually, it seems like natural selection is much simpler than the theory of gravity. So why did it take two centuries longer to discover?

A contemporary of Darwin's, Thomas Huxley, read the Origin of Species and said, “How extremely stupid not to have thought of that!” Nobody ever said the same for not beating Newton to the Principia.

I wonder if the reason this happened is that Darwin's theory cannot be decisively tested. The evidence is circumstantial, retrospective, and cumulative. There's no equivalent of Newton running the numbers on the moon's orbital period and radius, and confirming that it corresponds to his theory.

In fact, nearly two thousand years before Darwin, the Roman poet Lucretius argued in De Rerum Natura that organisms suited to their environment survive while ill-adapted ones perish. But nobody built a science on it. Without a tight verification loop, the idea just floated by.

Terence Tao argues that Darwin succeeded where Lucretius failed because he had the ability to convince people that the gaps in his theory (specifically, what is the mechanism of heredity) would be filled. This was less about ‘hard’ scientific insight, and more a matter of having good research taste and being persuasive. But it was crucial for progress in biology.
David Senra@davidsenra·
Great men of history had little to no introspection. The personality that builds empires is not the same personality that sits around quietly questioning itself. @pmarca and I discuss what we both noticed but no one talks about:

David: You don't have any levels of introspection?

Marc: Yes, zero. As little as possible.

David: Why?

Marc: Move forward. Go! I found people who dwell in the past get stuck in the past. It's a real problem and it's a problem at work and it's a problem at home.

David: So I've read 400 biographies of history's greatest entrepreneurs and someone asked me what the most surprising thing I've learned from this was [and I answered] they have little or zero introspection. Sam Walton didn't wake up thinking about his internal self. He just woke up and was like: I like building Walmart. I'm going to keep building Walmart. I'm going to make more Walmarts. And he just kept doing it over and over again.

Marc: If you go back 400 years ago it never would've occurred to anybody to be introspective. All of the modern conceptions around introspection and therapy, and all the things that result from that, are a kind of manufacture of the 1910s, 1920s. Great men of history didn't sit around doing this stuff. The individual runs and does all these things and builds things and builds empires and builds companies and builds technology. And then this kind of guilt-based whammy showed up from Europe. A lot of it from Vienna in the 1910s, 1920s, Freud and that entire movement. And it turned all that inward and basically said, okay, now we need to second-guess the individual. We need to criticize the individual. The individual needs to self-criticize. The individual needs to feel guilt, needs to look backwards, needs to dwell in the past. It never resonated with me.
David Senra@davidsenra

My conversation with Marc Andreessen (@pmarca), co-founder of @a16z and Netscape.

0:00 Caffeine Heart Scare
0:56 Zero Introspection Mindset
3:24 Psychedelics and Founders
4:54 Motivation Beyond Happiness
7:18 Tech as Progress Engine
10:27 Founders Versus Managers
20:01 HP Intel Founder Legacy
21:32 Why Start the Firm
24:14 Venture Barbell Theory
28:57 JP Morgan Boutique Banking
30:02 Religion Split Wall Street
30:41 Barbell of Banking
31:42 Allen & Company Model
33:16 Planning the VC Firm
33:45 CAA Playbook Lessons
36:49 First Principles vs. Status Quo
39:03 Scaling Venture Capital
40:37 Private Equity and Mad Men
42:52 Valley Shifts to Full Stack
45:59 Meeting Jim Clark
48:53 Founder vs. Manager at SGI
54:20 Recruiting Dinner Story
56:58 Starting the Next Company
57:57 Nintendo Online Gamble
58:33 Building Mosaic Browser
59:45 NSFnet Commercial Ban
1:01:28 Eternal September Shift
1:03:11 Spam and Web Controversy
1:04:49 Mosaic Tech Support Flood
1:07:49 Netscape Business Model
1:09:05 Early Internet Skepticism
1:11:15 Moral Panic Pattern
1:13:08 Bicycle Face Story
1:14:48 Music Panic Examples
1:18:12 Lessons from Jim Clark
1:19:36 Clark Versus Barksdale
1:21:22 Tesla Versus Edison
1:23:00 Edison Digression Setup
1:23:13 AI Forecasting Myths
1:23:43 Edison Phonograph Lesson
1:25:11 Netscape Two Jims
1:29:11 Bottling Innovation
1:31:44 Elon Management Code
1:32:24 IBM Big Gray Cloud
1:37:12 Engineer First Truth
1:38:28 Bottlenecks and Speed
1:42:46 Milli Elon Metric
1:47:20 Starlink Side Project
1:49:10 Closing

Includes paid partnerships.

Extended Brain@Extended_Brain·
this makes things a lot easier. Thanks. "openclaw onboard --auth-choice ollama" runs OpenClaw's official onboarding/setup wizard and selects Ollama as your model provider.
ollama@ollama

Ollama is now an official provider for OpenClaw. openclaw onboard --auth-choice ollama All models from Ollama will work seamlessly with OpenClaw. 🦞 Use it for the tasks you want, all from your chat app. Thank you @steipete for helping and reviewing. 🦞
