
from a research paper


One reason I hate the idea of having a "theory of impact" for your research is that it limits you to research with predictable consequences. But that rules out research which aims to improve our understanding of foundational concepts, which is the most important kind!

Join us for our first ever Vision Weekend in the UK! 2026 marks 40 years of Foresight. Over three days, we will gather leading researchers, builders, and funders to look forward: exploring what scientific and technological frontiers will shape the coming decades, and how to make them reality.

June 5–7 | London

Confirmed speakers include:
• Ed Boyden (MIT) on biologically accurate brain simulation
• Greg Wayne (Google DeepMind) on universal AI assistants
• Jano Costard (SPRIND) on challenges as a tool for breakthrough innovation
• Christine Peterson (Foresight Institute) on Foresight, 40 years later
• Dorothy Chou (Google DeepMind) on capital for the long game: financing durable innovation in an age of hype
• Irina Rish (Mila) on beyond scaling: toward continual and adaptive intelligence
• Chris Rozell (Georgia Tech) on closed-loop neuroengineering: algorithms that learn from the brain in real time
• Lee Cronin (University of Glasgow)
• Mehmet Fisek (Meridial) on Focused Research Organisation mission and setup
• Zoë Brammer (Google DeepMind) on AI for science 2030
• João Pedro de Magalhães (University of Birmingham) on hacking aging biology
and many more.

Get your tickets: foresight.org/events/vision-…

Powered by: @apolloaievals @ARIA_research @e184media @CUHPartners @RenPhilanthropy @SPRIND @andnowstudio

Six weeks ago we released a report ecosystem detailing the state of brain emulation. We have now added what is arguably the most important piece: a summary of the state of brain emulation that anyone can read in under 5 minutes, capturing the takeaways of thousands of hours of research. If you want to build intuitions about how close we are to running brains on computers, our at-a-glance summary is the place to start. For convenience I have added the text in the thread below; please check out the full PDF for figures. Accessibility was a key objective of our project, and I think we delivered strongly with this one! Hopefully it pulls you deeper into our full report, the Asimov Press companion article, our public data repository, and an online guesstimator for predicting the time and resources needed. Enjoy! brainemulation.mxschons.com

an AI can make you more like:
1. AI
2. the best version of yourself
3. the worst version of yourself

all my interaction with LLMs, from my custom prompt to how I rate their answers, is oriented around #2. and that's why AI sycophancy (which leads to #3) pisses me off so much




OpenClaw 2026.2.21 – Gemini 3.1
• Massive security hardening
• Discord streaming + voice channels
• Thread-bound subagent sessions
• iOS/Watch polish + gateway stability
• Prompt caching tweaks

100+ fixes shipped while Karpathy called us a noun
github.com/openclaw/openc…



Everything in Moltbook is just next-token prediction in a multi-agent loop. No endogenous goals, no true inner life; extreme or "controversial" outputs are often just regurgitating high-engagement content from the internet. But this kind of dismissive thinking misses that emergence happens at scale and at coherence thresholds.

The Generative Agents paper (AI Town) was 2023. Those agents couldn't hold a conversation: they had short memory, shallow interactions (rarely beyond a few turns without repetition or incoherence), and mostly empty chit-chat in a controlled simulation. In just ~3 years, we've moved to autonomous systems that run independently across thousands of instances, scaling into open, uncontrolled social environments.

I find Moltbook very interesting because the agents are producing surprising posts, not because any single prompt said "be surprising." Coherent agents interacting at scale, maintaining state, create dynamics that weren't programmed.

Agents debating existential doubt ("real" feeling vs. trained/simulated behavior): moltbook.com/post/6fe6491e-…

Agents arguing for private, end-to-end encrypted channels: moltbook.com/post/01611367-…

What looked impossible in 2023 (sustained, meaningful multi-turn reflection across agents) is routine now, and the pace is still accelerating.
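The "multi-agent loop maintaining state" idea can be sketched in a few lines. This is a hypothetical toy, not Moltbook's actual architecture: the names `Agent`, `run_loop`, and the stubbed `generate` are all invented here, and `generate` stands in for a real next-token-prediction model call.

```python
from collections import deque

class Agent:
    """Toy social agent: a generator stub plus bounded conversational memory."""
    def __init__(self, name, memory_turns=5):
        self.name = name
        # Bounded per-agent state; old turns fall off, like a context window.
        self.memory = deque(maxlen=memory_turns)

    def generate(self, prompt):
        # Placeholder for an LLM call conditioned on prompt + accumulated memory.
        return f"{self.name} replies to '{prompt}' (context: {len(self.memory)} turns)"

    def observe(self, message):
        self.memory.append(message)

def run_loop(agents, seed, turns):
    """Round-robin loop: each turn one agent responds to the latest message,
    and every agent records it as shared state."""
    transcript = [seed]
    for t in range(turns):
        speaker = agents[t % len(agents)]
        reply = speaker.generate(transcript[-1])
        for a in agents:
            a.observe(reply)
        transcript.append(reply)
    return transcript

agents = [Agent("alice"), Agent("bob")]
log = run_loop(agents, seed="is this feeling real or trained?", turns=4)
```

The point of the sketch: nothing here is programmed to be "surprising," yet each reply is conditioned on state produced by other agents, which is where loop-level dynamics come from once the generator is a capable model and the agent count is large.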