Extended Brain

6.6K posts

@Extended_Brain

A curious mind, always exploring the vast landscapes of knowledge and creativity. Passionate about learning, the future of AI, and philosophy

South East Asia 📍 Joined April 2011
865 Following · 1.3K Followers
Pinned tweet
Extended Brain @Extended_Brain
Life is Interpretation: Starting from Newman & Sarkar's "Biology and Physics", which considers BMCs (biomolecular condensates) as having 5 distinct simultaneous properties that cannot fit any existing physics 🧵
1 reply · 0 reposts · 0 likes · 73 views
Extended Brain @Extended_Brain
@SciTechera Thanks for the article. Good news, but don't rush to conclusions; human experiments still need to confirm them
0 replies · 0 reposts · 0 likes · 1 view
SciTech Era @SciTechera
Memory Loss Breakthrough

New study reverses memory loss by reactivating the gut–brain connection and achieving a full cognitive reset. Stanford researchers discovered that age-related decline may start in the gut, not the brain, and can potentially be reversed.

This groundbreaking study revealed that aging gut bacteria can silence the vagus nerve, effectively "switching off" the brain's memory center. Researchers found that specific microbes, particularly Parabacteroides goldsteinii, produce metabolites that trigger intestinal inflammation. This inflammation interferes with vagus-nerve signaling, reducing communication between the gut and brain and weakening activity in the hippocampus, the brain's memory center.

By restoring vagus-nerve activity and correcting the gut microbiome, scientists were able to make the brains of old mice function like those of 2-month-old mice. This "remote control" strategy suggests that memory loss may not be an inevitable brain disease, but a communication failure that can potentially be repaired through the digestive system.
27 replies · 349 reposts · 1.3K likes · 79.1K views
Extended Brain retweeted
Bull Theory @BullTheoryio
BREAKING: Anthropic accidentally leaked its next AI model and it just wiped out $14.5 billion from cybersecurity stocks in a single day.

Claude Mythos was accidentally stored in a publicly accessible data cache and discovered before Anthropic could announce it. The model showed dramatically higher scores on cybersecurity tests, meaning AI can now detect and respond to threats at a level that traditionally required entire teams of security professionals and expensive enterprise software.

Investors immediately started pricing in the question nobody in the industry wants to answer: if an AI model can do this, why does anyone need CrowdStrike? And the market answered immediately:
- CrowdStrike is down 5.85%, wiping out $5.5 billion.
- Palo Alto Networks is down 6.43%, wiping out $7.5 billion.
- Zscaler is down 5.89%, wiping out $1.35 billion.
- Tenable is down 9.70%, wiping out $185 million.
253 replies · 695 reposts · 4.1K likes · 705.2K views
Extended Brain retweeted
Carlos E. Perez @IntuitMachine
Breaking! Claude Mythos and Capybara Accidentally Leaked

According to reports from Fortune and The Information, Anthropic accidentally exposed a large cache of internal assets due to a CMS misconfiguration. These documents reveal the development of Claude Mythos, a model that marks the debut of a new high-performance tier called "Capybara."

The Leaked Details
- Performance: It is positioned as a "step-change" above Claude Opus 4.6 (which was just released in February). It allegedly achieves significantly higher scores in software coding, academic reasoning, and cybersecurity.
- The "Capybara" Tier: This is a new, larger model class designed to be more "intelligent" and "connective" than the Opus tier.
- Safety & Cyber Risks: Anthropic describes the model as having "unprecedented" cybersecurity capabilities, to the point that it could "presage an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."
- Release Strategy: Because of these risks and high compute costs, they are opting for a "gradual" rollout, starting with early access for "cyber defenders" to help secure their codebases before the model (or similar ones) becomes widely available.
- Capybara (The Tier): Chosen for the animal's "social bridge" nature, it marks a move toward "connective intelligence": models that don't just solve tasks but link disparate domains.
- Mythos (The Model): Moving beyond a single "Opus" (masterpiece) to "Mythos" (a foundational worldview). It signals a model capable of understanding the deep, systemic "lore" of complex codebases and security infrastructures.
33 replies · 96 reposts · 1.1K likes · 104.3K views
Carlos E. Perez @IntuitMachine
🧵 Andrej Karpathy (in a recent interview) is not really arguing that AI is a better tool. He is arguing that the structure of knowledge work is changing. His language is much more radical than most summaries suggest.

1/ “Code's not even the right verb anymore.” That line is doing a lot of work. He is not saying coding got faster. He is saying the old description of the activity no longer fits.

2/ What replaces it? “I have to express my will to my agents for 16 hours a day.” That is a different model of work: less direct execution, more specification, delegation, and supervision.

3/ Karpathy is also explicit that this is not hypothetical. “In December is when something really flipped. I went from 80/20 of writing code by myself versus delegating to agents, to like 20/80.” That is not an incremental gain. That is a regime change.

4/ He pushes the point further: “I don't even think I've typed a line of code probably since December, basically.” Whether or not others are there yet, his claim is clear: for at least some frontier users, the workflow has already changed.

5/ He thinks most people have not caught up to this fact. “I don't think a normal person actually realizes that this happened or how dramatic it was.” And then even more sharply: “Their default workflow of building software is completely different as of basically December.”

6/ What is the new bottleneck? Not intelligence, exactly. “It's limited by everything.” And then: “It's not that the capability is not there; it's that you just haven't found a way to string together what's available.”

7/ That is why he keeps returning to the phrase: “It's a skill issue.” Which sounds flippant, but his point is serious: the limiting factor is increasingly the human ability to structure tasks, write instructions, manage memory, and create good loops.

8/ He gives the clearest statement of where mastery is going: “Mastery looks like going ‘up the stack.’ It’s not about a single session; it’s about how multiple agents collaborate in teams.” This is the real shift. Not better autocomplete. Delegated cognition organized at a higher level.

9/ That is also why he asks: “How can I have not just a single session of Claude or Cursor... how can I have more of them?” He is already thinking beyond one assistant. The unit of productivity is becoming a system of agents, not a single chat window.

10/ He even reframes the economics of attention this way: “What token throughput do you command?” That is an extraordinary sentence. It suggests the scarce resource is no longer only human labor, but your ability to direct and absorb machine labor at scale.

11/ His smart-home example is not a gimmick. It supports a deeper thesis: “These smart home apps shouldn’t even exist in a sense; it should just be APIs and agents using them directly.”

12/ Then he says the really important part: “Agents are the glue of the intelligence.” And then: “The industry has to reconfigure because the customer is no longer the human; it’s the agent acting on behalf of the human.”

13/ That is a much larger claim than “AI improves UX.” It implies software may increasingly be designed for machine operators rather than direct human operation. Or in his words: “This refactoring will be substantial.”

14/ He extends the same logic into research. “I don't want to be the researcher looking at results. I want to arrange the objective once and hit ‘go.’” That is not copilot language. That is automation of the experimental loop itself.

15/ He is even blunter here: “We shouldn't be running these hyperparameter searches; we should be removing humans from the process.”

16/ And then this: “Researchers can contribute ideas, but they shouldn't be enacting them.” That may be one of the clearest formulations of his worldview in the whole transcript. Human role: propose objectives and ideas. Machine role: execute search and evaluation.

17/ He makes the organizational implication explicit too: “A research organization is essentially a set of markdown files describing roles.” That is a revealing abstraction. It treats institutions not as fixed human arrangements, but as programmable systems open to optimization.

18/ But he is not naive about where this works. “If you can't evaluate it, you can't auto-research it.” That is probably one of the most useful general principles in the interview. AI scales fastest where feedback is cheap, clear, and reliable.

19/ He also gives a sharp description of current-model limitations: “These models are still ‘jagged.’ I feel like I'm talking to a brilliant PhD student and a 10-year-old at the same time.” That is better than most benchmark discourse. Capability is not smooth. It is uneven, spiky, and domain-dependent.

20/ He does not think the future is one universal oracle either. “I think we should expect more ‘speciation.’” And: “We don't need one oracle that knows everything.” That points to an ecosystem of specialized models, not just one giant general system.

21/ On the economy, he is more expansionary than most doom narratives. “It's the Jevons Paradox. Software was scarce because it was too expensive. If it becomes cheaper, demand goes up.”

22/ And on the digital versus physical split: “Flipping bits is a million times faster than accelerating matter. The physical world will lag.” That is a clean way to understand where change comes first.

23/ His education comments may be the most underrated part. “I realized I shouldn't be explaining this to people; I should be explaining it to agents.” Then: “If the agent gets it, they can explain it to the human in any language with infinite patience.”

24/ And finally, the closing line that ties the whole worldview together: “You should have markdown documents for agents, not HTML for humans.” Then the punchline: “Your job is now the few bits that agents can't do.”

25/ That is the core thesis. Not just that AI will help people work. But that software, research, interfaces, and teaching may all be reorganized around agents as the primary operators. Karpathy’s language is not the language of assistance. It is the language of reconfiguration.
9 replies · 12 reposts · 53 likes · 8.5K views
Extended Brain retweeted
AI at Meta @AIatMeta
Today we're introducing TRIBE v2 (Trimodal Brain Encoder), a foundation model trained to predict how the human brain responds to almost any sight or sound. Building on our Algonauts 2025 award-winning architecture, TRIBE v2 draws on 500+ hours of fMRI recordings from 700+ people to create a digital twin of neural activity and enable zero-shot predictions for new subjects, languages, and tasks. Try the demo and learn more here: go.meta.me/tribe2
547 replies · 1.7K reposts · 10.9K likes · 3.7M views
Extended Brain @Extended_Brain
We apply Pattee's epistemic cut to BMCs and consider the Write-Read-Rewrite loop in the cell differentiation process, and then apply Biosemiotics ⬇️
1 reply · 0 reposts · 0 likes · 35 views
Extended Brain @Extended_Brain
Life is Interpretation: Starting from Newman & Sarkar's "Biology and Physics", which considers BMCs (biomolecular condensates) as having 5 distinct simultaneous properties that cannot fit any existing physics 🧵
1 reply · 0 reposts · 0 likes · 73 views
Extended Brain retweeted
Ahmad @TheAhmadOsman
Peter is wrong. He needs to try MiniMax M2.7 and Qwen 3.5 27B in OpenClaw before making these comments.
Peter Steinberger 🦞@steipete

@sbaratelli @nvidia @openclaw most folks will want as much intelligence as possible, and open models aren't there yet.

177 replies · 49 reposts · 1.8K likes · 229.2K views
Extended Brain retweeted
Sudo su @sudoingX
The founder of OpenClaw joined the company that was founded to make AI open and now charges you per token, and is now telling you open models aren't there yet.

I run Qwen 3.5 27B on a single 3090. 50 tok/s. It writes code, handles tool calls, and runs agent sessions for hours. The model built a full space shooter, 3,000+ lines, from a single prompt. I published the data.

"Open models aren't there yet" is what you say when your harness can't parse tool calls on local models and you blame the model instead of fixing the harness. I have the DMs. People switch from OpenClaw to Hermes Agent and their "broken" models suddenly work.

Pair a good model with a good harness like Hermes Agent, where parsers are built per model. Your data stays on your machine. No API key. No subscription. No one training their next model on your thinking.

Don't listen to someone with an OpenAI paycheck telling you open source can't do the job. Install it. Test it yourself. The receipts are on my timeline. He built a harness that couldn't handle local models and chose the API paycheck over fixing it. That should tell you everything.
Peter Steinberger 🦞@steipete

@sbaratelli @nvidia @openclaw most folks will want as much intelligence as possible, and open models aren't there yet.

259 replies · 405 reposts · 5.3K likes · 407.6K views
Extended Brain @Extended_Brain
@PessoaBrain Check out my new post! It examines Newman & Sarkar's "Biology and Physics" and continues the themes of my earlier posts "The Flaw is Source Code" and "The Genetic Text". It proposes that Meaning is a Form of Matter, in the general sense. extendedbrain.substack.com/p/life-is-inte…
0 replies · 0 reposts · 0 likes · 18 views
Luiz Pessoa @PessoaBrain
Biology and Physics — Looks very interesting. "We counterpose this view to ... reductionist physical theories of biological systems and highlight open questions regarding incompletely characterized and enigmatic forms of living matter." arxiv.org/abs/2603.11234…
1 reply · 20 reposts · 98 likes · 6K views
Extended Brain retweeted
Justine Moore @venturetwins
Incredible clip on how @karpathy uses OpenClaw to run his house via texts. You can ask agents to find connected hardware in your home (like a Sonos speaker), and they'll search the network + hack in for you 🤯 You can control music, lights, HVAC, security... without writing any code.
106 replies · 199 reposts · 2.3K likes · 342.1K views
Extended Brain @Extended_Brain
@dwarkesh_sp @burny_tech Darwin's idea is simpler than Newton's, but biology is much more complicated (there are many more variables) than physics
0 replies · 0 reposts · 2 likes · 28 views
Dwarkesh Patel @dwarkesh_sp
The Origin of Species was published in 1859. Principia Mathematica was published in 1687, two centuries earlier. Conceptually, it seems like natural selection is much simpler than the theory of gravity. So why did it take two centuries longer to discover?

A contemporary of Darwin's, Thomas Huxley, read the Origin of Species and said, “How extremely stupid not to have thought of that!” Nobody ever said the same for not beating Newton to the Principia.

I wonder if the reason this happened is that Darwin's theory cannot be decisively tested. The evidence is circumstantial, retrospective, and cumulative. There's no equivalent of Newton running the numbers on the moon's orbital period and radius, and confirming that they correspond to his theory.

In fact, nearly two thousand years before Darwin, the Roman poet Lucretius argued in De Rerum Natura that organisms suited to their environment survive while ill-adapted ones perish. But nobody built a science on it. Without a tight verification loop, the idea just floated by.

Terence Tao argues that Darwin succeeded where Lucretius failed because he had the ability to convince people that the gaps in his theory (specifically, what is the mechanism of heredity) would be filled. This was less about ‘hard’ scientific insight, and more a matter of having good research taste and being persuasive. But it was crucial for progress in biology.
62 replies · 132 reposts · 1.1K likes · 175.1K views