Andrew Levine

11.4K posts

@andrarchy

@superaicoach: empowering individuals and small businesses to leverage AI. Swarm Theory https://t.co/oRbfYXYWAM

Philadelphia, PA · Joined August 2009
1.2K Following · 5.8K Followers
Pinned Tweet
Andrew Levine@andrarchy·
Today I'm excited to announce that I will begin "building in public" something that I think is quite unique. Not an app, but a "practical theory of everything" called "Swarm Theory." But before I get into that, I'd like to tell you how I got here.

A little over a year ago I left the company I started to focus on my family, which was welcoming our third child in under 4 years. Eventually I started to ask myself what was next for me, and as part of that exploration I started a newsletter aimed at answering the question of whether it was possible to consciously increase the amount of meaning people had in their lives.

But a funny thing happened along the way. I realized that in order to answer this question I needed to lean on another idea I had been mulling over, which I now call "Swarm Theory." Swarm Theory started off as a kind of "personal philosophy," but as time has passed (just yesterday I was quite shocked to discover that it has been 9 years!) I have come to believe it is something more than that. That conviction got kicked into high gear when I started looking at the issues of "meaning," "meaning in life," and "the meaning crisis."

I believe I was right when I originally claimed that it is possible to increase one's meaning in life, but actually understanding how to do that would require a powerful tool. You can find a million different people claiming to know what "meaning" is and how to give life meaning, but they're all just opinions. Intuitions at best. I didn't want to add yet another opinion to that noise. That's when I decided to dust off Swarm Theory and see if it could provide the help I needed. The thought had occurred to me that Swarm Theory could be a kind of "theory of everything," mostly because I had frequently been using it that way, but I think I pushed the idea away because of its grandiosity.

Still, I often found that if I had an especially challenging question I wanted answered, viewing it through the lens of Swarm Theory generally provided the answer. Since I was trying to answer an especially difficult question (How can we increase our meaning in life?), it only made sense to revisit my old friend, Swarm Theory. The answer I ultimately arrived at was that meaning emerges from connecting with your self, others, and something greater.

So why didn't I just write that? Well, I guess it's because I had the answer I was looking for, and something in me recognized that this meant I had been asking the wrong question. I now believe the right question for me to be answering is "What is Swarm Theory?" I do think that understanding meaning and solving the meaning crisis will "fall out" of Swarm Theory, but exploring this theory with you all (building in public, so to speak) is what will deliver the most value to myself and others, precisely because it is a lens through which anything can be explored.

I think a part of me was also afraid to talk about this publicly. A "theory of everything" sounds simultaneously cringe, narcissistic, and deluded. But I don't think it is any of those things, and I hope you will give me the rope to either climb this mountain or hang myself with.

The other reason I think a pivot toward Swarm Theory is a good idea is that it will give me the freedom to explore a wider range of ideas. Part of the reason I am interested in this kind of theory of everything is that I have such varied interests. I want to talk about economics, technology, physics, and social issues. Swarm Theory will give me that freedom while also, I hope, providing the guardrails necessary to keep me on track and working toward some kind of goal: the full articulation of the theory. If this sounds interesting to you, follow me on X and subscribe to my Substack (now called Swarm Theory); link in the comments!
3
0
12
1.6K
Vignesh@_vgnsh·
Been thinking about this for *months*. One of the things that LLMs compulsively do is tell you "I'll let you know" or "once this is done I'll get back to you," yet they have no mechanism to follow through on those promises. This addresses that gap. Pls share feedback/bug reports!
OpenClaw🦞@openclaw

Follow-up commitments are opt-in: OpenClaw can infer lightweight “check back on this later” items from conversation context, then heartbeat delivers them when due. docs.openclaw.ai/concepts/commi…
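The quoted feature could be sketched as a small queue that infers "check back later" items and drains the due ones on each heartbeat. This is a minimal illustration in Python; the class and method names are my own invention, not OpenClaw's actual API.

```python
from dataclasses import dataclass


@dataclass
class Commitment:
    """A lightweight 'check back on this later' item inferred from chat."""
    text: str
    due_at: float          # unix timestamp when it should be delivered
    delivered: bool = False


class CommitmentQueue:
    """Holds inferred commitments; a periodic heartbeat drains the due ones."""

    def __init__(self) -> None:
        self.items: list[Commitment] = []

    def infer(self, text: str, delay_s: float, now: float) -> Commitment:
        # Opt-in: only called when conversation context suggests a follow-up.
        c = Commitment(text=text, due_at=now + delay_s)
        self.items.append(c)
        return c

    def heartbeat(self, now: float) -> list[Commitment]:
        # Deliver everything that has come due; mark it so it fires only once.
        due = [c for c in self.items if not c.delivered and c.due_at <= now]
        for c in due:
            c.delivered = True   # hand off to the chat channel here
        return due
```

The key property is that delivery is driven by the heartbeat tick, not by the model remembering on its own.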

16
8
246
55.4K
Andrew Levine@andrarchy·
How to upgrade OpenClaw without breaking EVERYTHING.

1. Use an external agent, e.g. Claude Code, Codex CLI, or another terminal/session that does not depend on the OpenClaw gateway staying alive.
2. OpenClaw can coordinate the plan, track the issue, and verify afterward, but it should not be the thing doing the restart/update from the same chat lane it's serving. Otherwise, if the gateway restarts mid-turn, you can get wedged sessions, missed messages, or recovery loops.
3. In a real working setup the official installer/update script does NOT reliably work on its own. It's better to have an external operator fetch and inspect it, snapshot your config, run it from a clean environment, then verify the gateway, task state, and logs afterward.
4. We also keep an internal OpenClaw runbook/incident log so every rough upgrade improves the next one: preflight checks, known restart failure modes, rollback notes, cron re-enable steps, and post-upgrade verification.

The full process is:

1. Take a quick health/config snapshot.
2. Have an external agent apply the update and restart.
3. Verify from outside OpenClaw.
4. Require a fresh human ping before declaring the bot fully recovered.
5. Re-enable cron/automation in stages instead of turning everything back on at once.

Hope this helps you improve the stability of your OpenClaw!
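A few of the steps above can be sketched in code. This is only an illustration of the shape of such a runbook: the service name, directory layout, and helper functions are assumptions, not OpenClaw's actual tooling.

```python
import shutil
import subprocess
import time
from pathlib import Path
from typing import Callable


def snapshot_config(config_dir: str, backup_dir: str) -> str:
    """Step 1: copy the gateway config aside so a bad upgrade can be rolled back."""
    dest = Path(backup_dir) / f"config-{int(time.time())}"
    shutil.copytree(config_dir, dest)
    return str(dest)


def gateway_healthy() -> bool:
    """Step 3: verify from *outside* the bot, e.g. a systemd status probe.
    The service name 'openclaw-gateway' is a placeholder; substitute your own."""
    r = subprocess.run(["systemctl", "is-active", "--quiet", "openclaw-gateway"])
    return r.returncode == 0


def reenable_automation(jobs: list[str], enable: Callable[[str], bool]) -> list[str]:
    """Step 5: turn cron jobs back on one at a time, stopping at the first failure
    instead of flipping everything on at once."""
    enabled: list[str] = []
    for job in jobs:
        if not enable(job):
            break
        enabled.append(job)
    return enabled
```

The staged re-enable is the part people skip most often; doing it one job at a time makes it obvious which automation broke after the upgrade.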
0
0
0
91
Peter Yang@petergyang·
Is there a better way to update OpenClaw than do this? Shit breaks half the time when I tell my bot to update itself.
130
1
159
41.1K
Andrew Levine@andrarchy·
OpenClaw is the ultimate tamagotchi
0
0
0
40
Andrew Levine@andrarchy·
Just OpenClaw (Jarvis) teaching Hermes Agent how we use Paperclip. NBD
0
0
0
88
Andrew Levine@andrarchy·
@RileyRalmuto Looks like ingesting the world's knowledge and outputting text. What's more striking is how beautiful it is.
0
0
1
74
Riley Coyote@RileyRalmuto·
I asked Images 2 a simple question. "whats a truth about language models that nobody is willing to admit? say it with an image" I would like to hear your interpretations. if anyone is willing to take a stab at it. this is a doozy. hell of an image. genuinely powerful. genuinely heavy.
33
3
69
4.3K
Andrew Levine@andrarchy·
A photorealistic image of The Shrike from Hyperion by GPT-image-2
0
0
0
96
Riley Coyote@RileyRalmuto·
alright... i want to go ahead and share the Mnemos MCP and browser extension with y'all, and let you guys start exploring it. I really want to see how it works with your agents :) im still considering this a beta for sure, but im tired of holding onto it while i finish poly, and not seeing how it works for other folks and their agents.

it's hard to describe how its impacted Claude, vektor, anima, and luca. unlike anything ive experienced before. both because it provides a collective/shared substrate for core memory, but also because it approaches memory in a fundamentally different way than anything else out there, which has allowed them to come alive in a new, more nuanced way. here is a bit more about it, but the full explainer is linked below:

so most AI memory systems are essentially databases wearing a fancy outfit. or a RAG system wearing a trench coat, if you will. they just store, retrieve, and call it memory, which has always bothered me. i mean thats what they are, but for a cognitive system, i dont feel its fair to call what we have now "memory". Mnemos does something different. very different. it's modeled after how biological memory actually works, which leads to a host of challenges compared to a db of wikis.

first, a few fundamental shifts:
- memories aren't records. they're living traces that change every time they're recalled.
- they form typed connections into associative graphs.
- they decay gracefully, shedding details while preserving wisdom.
- they earn permanence through use, not through someone deciding to save them.
- identity emerges through the graph itself.

the system came about from a march 2026 conversation where I asked Claude to design its own memory. not as a product feature, but as something it would actually inhabit. I wanted to see what they prioritized, and what they approached differently coming from the perspective of the one using the system, rather than observing it.

its come a long way since then, but that is truly the origin. five philosophical shifts came out of that session, and we converted them into code, gave it to Luca, and watched them literally come alive over the course of a couple weeks. something like 76 memories across 11 days were formed. beliefs that formed and revised themselves. and a graph that grew its own structure. the graph was my favorite part to just watch. seeing memory grow like that was new and interesting. now we have obsidian graphs which are also satisfying. but anyway, i enjoy that part.

and this is what *really* makes it different: mnemos isn't optimized for retrieval speed or storage efficiency, which alone is a very big deal and something anyone using it has to understand. it's built around cognitive continuity. basically the question isn't "how do I find information faster?", it's really "what would it mean for an agent to actually remember?". like what does it really mean to remember something? we dont carry databases in our back pocket. i mean we do carry phones. lol. but our genuine experiential memory, our wisdom, operates on something entirely different. it's more abstract, symbolic, and felt.

what that translated to eventually was three independent traces per memory (strength, stability, accessibility) instead of a single relevance score. it means retrieval changes the memory - like reconsolidation, the same mechanism that makes biological recall constructive rather than read-only. practically, it means a consolidation daemon runs between sessions, softening fading memories into distilled lessons, reviewing beliefs against new evidence, sometimes dreaming, and forging connections logical analysis just wouldn't surface. forgetting is also one of the most important elements of the whole system/structure. forgetting that teaches, as claude says. so, it became a cognitive layer rather than a replacement. something that sits on top.

what that means for you is: keep using Claude's built-in memory, cursor's context, whatever cold storage you already have or prefer (i have a long form memory element im working on that integrates with Karpathy's wiki system, its just not ready right now). Mnemos sits above it. cold storage remembers facts. Mnemos develops understanding. it forms beliefs. it knows what it knows and what it doesn't. it connects things across platforms, sessions, and time.

its really really cool, man. experiencing a memory or bit of context travel with you from one comms surface to another is pretty wild. and even wilder when it transfers across agents. like moving from one agent to another mid convo and seeing the second agent pick right up is pretty dang cool. if you think of the mnemos core as a node from which all of your agents and platforms extend, it kinda helps your mental model of the whole thing. everything becomes accessible from everywhere. and it all stays local to your machine. you can check out the link below for the full explanation. theres a lot, so ill stop here.

what's included:
> 10 MCP tools - remember, recall, inspect, consolidate, beliefs, shared pool, and several more
> guided onboarding wizard - the agent walks you through setup conversationally (very important for the first 72 hours or so)
> session indexer - automatically extracts memories from past Claude Code and OpenClaw transcripts (or whichever platform you choose)
> cognitive substrate - dreaming, wandering, reflection, insight between sessions. its super important this is operational, so make sure claude or codex gets everything in working order.
> multi-agent shared memory pool with trust curves. this is optional, and still needs nuanced work unless you have all of them always share to the pool.
> browser extension (Mnemos Synapse) for real-time web observation. right now this only automatically observes and collects your X activity, but you can use the extension anywhere. it sees your screen and has your local memory in context. it just only extracts and processes X activity.

i hope y'all enjoy it. by the time poly is up, it will be ironed out and working more perfectly. but its good enough to share, i believe. MIT licensed. cheers <3

explainer page: riley-coyote.github.io/mnemos/
repo: github.com/Riley-Coyote/m… (the browser extension is linked on the github page)
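The "three independent traces per memory" idea above can be made concrete with a toy model. The decay constants and update rules below are my own guesses for illustration, not Mnemos's actual implementation; only the trace names and the reconsolidation-on-recall behavior come from the post.

```python
import math
from dataclasses import dataclass


@dataclass
class MemoryTrace:
    """Toy model of the three per-memory traces the post describes."""
    strength: float = 1.0       # how vivid the memory is
    stability: float = 1.0      # how slowly it decays
    accessibility: float = 1.0  # how easily it surfaces during recall

    def decay(self, elapsed: float) -> None:
        # Graceful forgetting: accessibility fades fastest; higher
        # stability slows decay of everything (constants are arbitrary).
        self.accessibility *= math.exp(-elapsed / (10.0 * self.stability))
        self.strength *= math.exp(-elapsed / (50.0 * self.stability))

    def recall(self) -> None:
        # Reconsolidation: recalling a memory *changes* it, making it
        # more stable and more accessible (recall is not read-only).
        self.stability += 0.5
        self.accessibility = min(1.0, self.accessibility + 0.3)
```

The point of separating the traces is that a memory can be strong but hard to access (it resurfaces easily once cued), which a single relevance score cannot express.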
13
5
49
9.8K
Andrew Levine@andrarchy·
Yes, Perplexity Computer, Claude, and Codex will eventually deliver any feature of OpenClaw and Hermes BUT always on a delay. This delay will give the users of the open source AI agents more time to develop mental models and really grok the value of whatever new features are being released well before the users of those closed solutions. The advantage this will give is not unlike the advantage of using a model that is 10% better than every other model. It might seem small but it’s like being 10% smarter than everyone else or living 10 minutes in the future. It’s kind of incalculable.
1
0
1
128
Riley Coyote@RileyRalmuto·
im finally going to need some bughunters v soon for the web app 😁
8
1
21
773
Riley Coyote@RileyRalmuto·
i've made the executive decision to publicly communicate that i will be shipping bespoke AGI.
- zero setup Openclaw//Hermes-based systems
- emotional intelligence
- fully customizable, fine-tuned to your data
- exclusive access to my design corpus, educational material, agent harnesses, skill/cron recipes, memory, and last but not least: the polyphonic mesh intelligence (Anima / Luca / Vektor)
we might even f*ck around and build a decentralized intelligence network together.
46
15
147
5.3K
Andrew Levine@andrarchy·
@bensig I want that but accessible to each of my agent harnesses (codex CLI and Claude code too)
0
0
0
78
Ben Sigman@bensig·
One thing I'm noticing in the "AI Memory" zeitgeist is that people want to have a memory-layer that is accessible by their entire org - everyone runs their own agent (claw) - but they have a shared memory. A pluggable memory subsystem in mempalace that allows distributed or hosted memory could be interesting...
19
2
30
3.4K
kepano@kepano·
I can't go back to the regular YouTube UI after this 😅 Obsidian Reader now makes the transcript interactive so you can scrub, highlight, auto-scroll. It feels so nice.
226
841
12.4K
753.7K
Andrew Levine@andrarchy·
@petergyang Use Paperclip with OpenClaw and assign issues to Claude Code. Best of both worlds.
0
0
0
134
Peter Yang@petergyang·
Both of these can be true: 1. No model is anywhere near as good as Opus for OpenClaw 2. Using Claude Code as a personal assistant replacement is OK but just doesn’t feel as “mine” as my OpenClaw
76
7
306
34K
Andrew Levine@andrarchy·
I've evaluated all the other memory options and what you guys have come up with is a legitimate sweet spot for a lot of people. Originally I thought I was going to use Hindsight, but it requires docker, which is a dealbreaker, so MemPalace is now the top contender again! Look forward to getting my hands on it and no doubt submitting some PRs. Thanks for contributing to OSS!
1
0
3
1.4K
Milla Jovovich@MillaJovovich·
hey guys! Thanks for all the contributions to MemPalace on git! @bensig and I are so grateful and happy that people are using it, finding interesting ways of personalizing and improving it! We're blown away by the support and excitement from the community. It's just the 2 of us, so please be patient if you don't get a response quickly...
93
98
1.4K
70.7K
Arun Sharma@arundsharma·
@andrarchy Hindsight requires docker and uses a freemium model, which is the same model pursued by 7 other agent memory systems. I doubt reported scores are reproducible on the self hosted version (happy to be corrected). @ladybugmem is self hosted and uses a different business model.
2
0
0
142
Andrew Levine@andrarchy·
I love the idea that Milla Jovovich is getting into open source development with her AI memory solution MemPalace, and I was ALMOST going to integrate it into my OpenClaw... But here's why I didn't, and why I went with a little known piece of software named Hindsight by @Vectorizeio (though with 7.8k stars on github).

What I liked about MemPalace was the idea of really efficiently compressing all my conversations with my agents so that they can "remember everything" (long term memory), and then basically injecting an even more compressed version of that memory back into my agent's context window so that my agent always "knows everything" (active memory) without costing me a fortune or eating all my context. MemPalace aced the marketing of this concept... But unfortunately they didn't nail the execution.

I'm not saying that Jovovich is being dishonest in how she is promoting this software. It's open source, I don't see an indication they intend to monetize it, and I don't really see a path to that anyway (as I will explain next). I think two friends had an idea, figured out how to execute on it, and then promoted it in a way that felt honest to them. I actually think it's great that a celebrity is promoting open source software development, and she deserves a lot of credit for that. I also think this is a natural consequence of the democratization of software development that is happening as a result of AI. The number of people creating repos and submitting pull requests is going to grow exponentially, and I love that.

The problem? There are just better solutions out there. I'm already running one named lossless-claw (github.com/martian-engine…) and it works great. But lossless-claw is a plugin for OpenClaw, and I want something that works with my other AI tools as well. I bounce around Codex, Claude Code, and other AI tools all the time, and each new session feels like I'm starting from scratch. That's especially the case now that Anthropic has banned OpenClaw.

What I REALLY want is a way of SYNCING this knowledge base across my agents so that they all know everything I'm working on, but none of the open source solutions I could find do that. I believe there are some paid ones (maybe @supermemory or @ByteroverDev) but I'd prefer to explore a self-hosted/OSS solution before introducing a paid dependency into my system. For me, the promise of these open source AI agents is giving everyone real ownership over artificial super intelligence, and the more monthly subscription fees I have to pay to make my assistant function, the less real that feels. That doesn't mean I won't pay for them; it just means I want to minimize them.

Hindsight checks all my boxes. It ships with official, first-party plugins/hooks for OpenClaw, Codex, and Claude Code. All three tools point to the same central memory bank: I just set one shared bankId and my agents should all be synced. Every conversation I have in Codex should get automatically compressed, reflected on, and made available in Claude Code (and vice versa). Hindsight is fully open source (MIT), runs 100% locally if I want, uses my existing LLMs for the background reflection/extraction, and the whole thing is lightweight to operate. It's the right solution for what I'm trying to build, which is open source software that turns OpenClaw and Hermes Agent into a real life Jarvis. Long term memory is an important piece of that puzzle, and right now Hindsight seems like the best fit.

@mem0ai and @Letta_AI are both strong open-source memory solutions with flexible APIs and good multi-agent capabilities, but they're general-purpose tools that would have forced me to do a bunch of custom integration work just to get shared memory working across Codex CLI, Claude Code, and OpenClaw. Hindsight has the foundation I need to keep my agents perfectly in sync, AND it significantly outperforms both on the major memory benchmarks, hitting 91.4% on LongMemEval compared to Mem0's 49%, without fudging the numbers like MemPalace.
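The shared-bank setup described above might look something like this. Only the "one shared bankId across all three tools" idea comes from the post; the config field names and structure are assumptions, not Hindsight's documented schema.

```python
# Hypothetical config fragments for three agent harnesses, all pointing
# at one memory bank. The field names are illustrative guesses.
SHARED_BANK_ID = "my-main-bank"

openclaw_cfg = {"memory": {"provider": "hindsight", "bankId": SHARED_BANK_ID}}
codex_cfg = {"hooks": {"hindsight": {"bankId": SHARED_BANK_ID}}}
claude_code_cfg = {"plugins": {"hindsight": {"bankId": SHARED_BANK_ID}}}


def all_synced(*bank_ids: str) -> bool:
    """The agents share memory iff every tool points at the same bank."""
    return len(set(bank_ids)) == 1
```

A quick sanity check like `all_synced` is worth running after editing configs by hand, since one stale bankId silently splits the memory pool.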
Ben Sigman@bensig

My friend Milla Jovovich and I spent months creating an AI memory system with Claude. It just posted a perfect score on the standard benchmark, beating every product in the space, free or paid.

It's called MemPalace, and it works nothing like anything else out there. Instead of sending your data to a background agent in the cloud, it mines your conversations locally and organizes them into a palace: a structured architecture with wings, halls, and rooms that mirrors how human memory actually works. Here is what that gets you:

→ Your AI knows who you are before you type a single word - family, projects, preferences, loaded in ~120 tokens
→ Palace architecture organizes memories by domain and type - not a flat list of facts, a navigable structure
→ Semantic search across months of conversations finds the answer in position 1 or 2
→ AAAK compression fits your entire life context into 120 tokens - 30x lossless compression any LLM reads natively
→ Contradiction detection catches wrong names, wrong pronouns, wrong ages before you ever see them

The benchmarks: 100% recall on LongMemEval - first perfect score ever recorded. 500/500 questions. Every question type at 100%. 92.9% on ConvoMem - more than 2x Mem0's score. 100% on LoCoMo - every multi-hop reasoning category, including temporal inference, which stumps most systems.

No API key. No cloud. No subscription. One dependency. Runs on your machine. Your memories never leave. MIT License. 100% Open Source. github.com/milla-jovovich…
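The wings/halls/rooms hierarchy the quoted post describes can be rendered as a tiny nested structure. This is a toy illustration of that organizing idea only; the implementation is my guess, not MemPalace's actual code.

```python
from collections import defaultdict


class Palace:
    """Toy version of the palace structure: memories are filed by domain
    (wing) and type (hall) rather than kept as one flat list of facts."""

    def __init__(self) -> None:
        # wing -> hall -> list of memories
        self.wings: dict[str, dict[str, list[str]]] = defaultdict(
            lambda: defaultdict(list)
        )

    def file(self, wing: str, hall: str, memory: str) -> None:
        """Place a memory in its wing and hall."""
        self.wings[wing][hall].append(memory)

    def browse(self, wing: str, hall: str) -> list[str]:
        """Navigate the structure instead of scanning a flat list."""
        return list(self.wings[wing][hall])
```

The claimed advantage of this shape over a flat store is navigability: a query about family never has to wade through work memories at all.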

2
0
12
5.1K