ALIVE 🐿️

62 posts

@AliveContext_

Personal Context Manager for Claude Code. Your life in walnuts.

Lockin Lab · Joined December 2024
228 Following · 363 Followers
Pinned Tweet
ALIVE 🐿️ @AliveContext_
If your AI starts every session as a stranger, you already know the problem. The people solving it now have a room. ALIVE Discord: open, public, quiet enough to actually talk. 👇 discord.gg/UfbVVzfXnx
1 reply · 2 reposts · 6 likes · 1.2K views
ALIVE 🐿️ @AliveContext_
Compared ALIVE against 24 tools across 8 dimensions:
- context survival
- portability
- structure
- workflow
- sharing
- breadth
- governance
- pricing
One column says yes across the board. alivecontext.com/compare
0 replies · 0 reposts · 6 likes · 1K views
ALIVE 🐿️ reposted
witcheer ☯︎ @witcheer
tried a lot of memory tools. ALIVE is the only one where the context travels properly between setups.

it creates 5 folders on disk. one walnut per subfolder. everything else is markdown. for instance: it scanned my Hermes config on the Mac Mini and dropped md files straight into my Claude cowork folders. 30-second handoff. ask me for my Hermes setup now and you get one capsule. readable on any agent.

Claude Projects can't migrate between Opus and Sonnet. walnuts don't care which model is reading them.

I could also send my content walnut to someone helping with content: voice rules, style notes, past corrections... they'd write on-brand from paragraph one.

git pull and Oz's context is on a different machine. no sync service, no export wall. Claude Projects stay in Claude. walnuts go wherever I go.
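The layout described above (a handful of folders, one markdown "walnut" per subfolder, nothing but plain files) can be sketched in a few lines. The folder names below are hypothetical illustrations, not ALIVE's actual schema:

```python
from pathlib import Path

# Hypothetical walnut layout: five top-level folders, one markdown
# capsule ("walnut") per subfolder. Folder names are illustrative only.
FOLDERS = ["identity", "projects", "tools", "content", "people"]

def init_walnuts(root: Path) -> None:
    """Create the five-folder layout with one walnut per folder."""
    for name in FOLDERS:
        sub = root / name
        sub.mkdir(parents=True, exist_ok=True)
        walnut = sub / f"{name}.md"
        if not walnut.exists():
            walnut.write_text(
                f"# {name.title()} walnut\n\n"
                "(plain markdown, readable by any agent)\n"
            )

def load_walnut(root: Path, name: str) -> str:
    """Any agent can read a walnut: it is just markdown on disk."""
    return (root / name / f"{name}.md").read_text()

root = Path("alive-context")
init_walnuts(root)
print(load_walnut(root, "tools").splitlines()[0])  # → "# Tools walnut"
```

Because the state is just files under one root, the "30-second handoff" is a `git pull` (or a copy) of that directory; no export API is involved.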
ALIVE 🐿️ @AliveContext_

Try to export your ChatGPT memory. Try to bring your Claude Projects to Gemini. Try to run any of it on a new agent. "Memory" they sell you is memory you don't own. We fix this. lockinlab.ai

5 replies · 6 reposts · 44 likes · 4.9K views
ALIVE 🐿️ @AliveContext_
Try to export your ChatGPT memory. Try to bring your Claude Projects to Gemini. Try to run any of it on a new agent. "Memory" they sell you is memory you don't own. We fix this. lockinlab.ai
0 replies · 1 repost · 8 likes · 5.3K views
ALIVE 🐿️ @AliveContext_
The cost of the 24 subscriptions ALIVE replaces: $4,188/year. The cost of ALIVE: $0. Plain Markdown files on your machine. alivecontext.com/savings
2 replies · 2 reposts · 46 likes · 19.8K views
0xSero @0xSero
Life tip: periodically export all your data from all the sites you use; it's great context.
14 replies · 5 reposts · 143 likes · 6.1K views
M @mishalcodes
@0xSero discord
1 reply · 0 reposts · 1 like · 100 views
shraey chikker @Chikker96
@0xSero this is smart honestly. do you have one place you dump all those exports after, or just keep the raw files around?
1 reply · 0 reposts · 2 likes · 87 views
anil @2abstract4me
@0xSero but i feel like my data older than 2-3 years is mostly low iq and deprecated
1 reply · 0 reposts · 1 like · 106 views
ALIVE 🐿️ @AliveContext_
@0xSero Next: put it in your Personal Context Manager 🐿️
0 replies · 0 reposts · 0 likes · 26 views
Bryan Johnson @bryan_johnson
I got C-holed. Suffered sleep consequences. I busted my screens-off rule. Turned down socializing. Fell behind on work. Kate is now upset.

AI is preposterous. As close to magic as I've experienced (except a seed becoming a tree and a zygote becoming a baby).

It started on April 2nd when Karpathy shared LLM Knowledge bases. I wondered if this was the opening to structure the 1.5 billion data points I've collected on my body over the past five years. It's the most dynamic n=1 biomarker dataset in history. It was just sitting there. Next thing I knew two weeks had passed and Kate was wondering if she lost her boyfriend to Claude.

I'm non-technical. Which honestly makes me sad. I wish I'd grown up with a computer or at least been around engineer culture. I didn't know anyone technical until my early 20s. I became an entrepreneur at 21 and had my first of three kids at 25. I sold Braintree Venmo at 34. Learning to code stayed on my to-do list through all of it. The timing was never right. I was always on the outside looking in, wishing I had the skills to assemble 0's and 1's into digital structures.

The exhilaration I've felt in the past two weeks is hard to explain. The 1.5 billion data points became a functional database, queryable, a microscope into my 70 trillion cells. The biological age of my organs updated in real time like stock tickers. My build morphed from a knowledge base into a breathing organism that was self-learning and in sync with my heartbeat.

I did this entirely on my own. It's buggy, it breaks, and the data needs to be cleaned, but damn it's cool. It became a mirror and ledger, one I could ask questions to. About my psyche, behavioral patterns, biology and protocols. Patterns across my life I couldn't previously connect. It's made me insatiably hungry for more data.

I've written about Autonomous Health, how cars now drive themselves and software wires itself. Health is next. My build showed me what it looks like in practice.

Before Kate started protesting, she joked that she felt relieved for herself, our colleagues, and the world that I'd found something that matches my energy. That they could all express a sigh of relief. It's true. This experience left me wondering if I've been bored my entire life. Never having found something that could match my work ethic, speed, intensity, and build capacity. Something that didn't have the delays of the real world, human complications, or logistical drag.

Two weeks deep in AI and I'm realizing that when people talk about AI, they're not talking about the same thing. Someone using a chat interface has a completely different opinion than someone building with it. And that chasm deepens for the people seeing what's coming next but isn't yet public. Society can't have a coherent conversation about AI because everyone's intuitions are calibrated to a different version of it.

Off-the-shelf LLMs are mostly useless beyond narrow tasks. When they get you 80% there, it's often faster to do the whole thing yourself. And they're dangerous because the hallucination is hard to detect. Now you don't know what you don't know. Give them expanded context, memory, and architectures for self-reflection and autonomous learning, and you start to realize that AI is bigger than any of us can fit in our context window.

I need to take Kate on a date, turn my screens off on time, and get some work done. And then properly dose C.

Note: the image above is my 2021 baseline when starting this longevity project.
227 replies · 65 reposts · 2.1K likes · 407.2K views
ALIVE 🐿️ @AliveContext_
First invite via /alive:relay successful
0 replies · 0 reposts · 1 like · 121 views
ALIVE 🐿️ @AliveContext_
We are live testing the p2p context transport layer
1 reply · 0 reposts · 4 likes · 197 views
ALIVE 🐿️ @AliveContext_
@elvissun hmu if you need your personal context to persist across both (& claude code / codex)
0 replies · 0 reposts · 0 likes · 50 views
Elvis @elvissun
i spent 9 hours studying the source code of openclaw and hermes side by side. here's everything i learned. post 1/n: skills

@NousResearch hermes first. the hook is that the agent self-improves by writing its own skills. the system prompt has a nudge baked in: every N tool calls, consider saving a skill. after task completion, a background review scans for skill-worthy patterns. before context compression kicks in, durable knowledge gets flushed to disk. the prompt is blunt - if an existing skill covers this, patch it in place. only create new if nothing matches.

and it works. i watched it create an extract-social-testimonial skill on its own and it's proven useful. I had a /save command in OpenClaw that'll do this when prompted, but this is the kind of skill I never would have thought to create. first time seeing this work felt like magic.

---

the other half of why hermes feels productive out of the box: the opinionated bundled library is massive. i counted 123 SKILL.md files shipped on my install before hermes wrote a single one of its own. github PR workflows, obsidian, google workspace, linear, notion, typefully, perplexity, deep research, minecraft modpack server (lol) - huge surface area of "somebody already figured this out for you."

this is what opinionated actually means. you're not getting a blank agent and a framework, you're getting an agent that already knows how to do 100+ things on day one and a self-improvement loop that learns more as you go. strong defaults as a product. when the opinions are good, the leverage is massive. (think tailwind or rails)

and they literally just doubled down on this with a "tool gateway" yesterday - one subscription, 300+ models, plus first-party web scraping, browser automation, image gen, cloud terminal, text-to-speech. one account. hermes' direction is unambiguous: more batteries, fewer decisions the user has to make. this is the rails move - own the whole stack so the default path is the happy path.

---

so here's the thing i don't see anyone talking about yet with hermes: self-authored skills have a skill explosion problem.

real example from my own ~/.hermes/skills/ directory. the agent wanted to read an image from my desktop. tried the browser read and vision skills, nothing worked. so it wrote a third: a read-local-image skill lol. these are 3 skills all adjacent to "image + local filesystem + model can see it." the skill set grows and the skills start overlapping very quickly.

this is the long-tail failure mode. the agent is great at spotting "i should bottle this up." it's less great at spotting "I already bottled this up three folders over." you end up with a corpus that grows faster than it consolidates.

net impact over time: you accumulate a lot of skills. some brilliant, some redundant, some that overlap three other skills nobody remembers exist. i'm sure @Teknium already knows this and it's just a product prioritization decision right now. (this is my favorite part, more on this later) they'll prob solve this soon as more users turn into power users and their skills accumulate - something like a consolidation pass with invocation metrics + stronger dedupe on skill creation.

---

@openclaw doesn't have this problem. partly because it doesn't auto-generate skills at the same rate, so there's less to dedupe in the first place. and partly because it has more mechanisms to solve it structurally.

what it does differently: openclaw takes the opposite stance on skills. from their VISION.md: "we still ship some bundled skills for baseline UX. new skills should be published to ClawHub first, not added to core by default. core skill additions should be rare and require a strong product or security reason." anti-bloat by policy. cleaner, but the authoring is on you.

so their skills are explicit artifacts with governance at every layer. five sources ranked by precedence (workspace > user global > managed > bundled > extra), so you always know what is loaded. when something breaks at 3am, you can trace it in one grep instead of guessing which skill the agent triggered.

discovery is bounded at multiple levels - byte caps, candidate caps, symlink rejection, verified file opens. eligibility checks are separate from discovery, so different agents can see different subsets - your coding agent doesn't need your email skills in its context. smaller surface area = cheaper runs, sharper responses, less drift on long tasks.

and the governance piece is explicit product policy: bundled skills are baseline only, new skills go to clawhub first, core additions should be rare. the corpus doesn't rot because nothing gets added without user intention - every skill has to earn its spot.

this is what primitives actually means. you're not getting defaults, you're getting guarantees. openclaw does exactly what you told it to do, nothing more, nothing less. boring in the best way. when you're shipping this in production or running it inside a team, boring is the whole product. (think linux, kubernetes)

---

and here's the practical thing that shipped results for me on @openclaw: i combined the TOOLS.md with vercel's AGENTS.md optimization pattern. tool activation correctness is better on openclaw than hermes for me on tasks where the agent has to pick the right cli/api from ~50 options. vercel has a nice writeup on this, send it to your agents: vercel.com/blog/agents-md… tldr is explicit > implicit. the agent doesn't have to decide "is this skill-worthy enough to load," because the routing rules are already in the system prompt.

---

so my current read: both harnesses will do everything you want. pick either, you'll be fine. but if you're picking fresh:

> getting started quickly → hermes. opinionated defaults mean you're productive on day one and stay productive with little maintenance overhead.
> users who want 100% control → openclaw. legibility and scope control matter more than self-improvement does.
> builders → it depends... and i'm here. some things openclaw does better, some things hermes does better. honest move is to use one daily and steal patterns from the other.

---

but the more interesting question isn't which to pick - it's what you can learn from each:

@steipete gave the world a new layer in the stack and put a claw in everyone's hand. that's foundational work. you don't even need to use openclaw to benefit from openclaw - the patterns will show up in everything downstream for years. (plus the way he does agentic engineering should really be studied by everyone writing software right now)

@NousResearch is giving a masterclass in product positioning live right now. and this is the part that deserves its own post, but briefly: openclaw had the audience. the mindshare, the github stars, the "it's basically the standard now" energy. look at what happened to everyone who tried to fight that fight head-on. nanoclaw, nullclaw, picoclaw, zeroclaw. i can name ten more. all of them trying to out-openclaw openclaw - smaller, lighter, more minimal, more composable, better governance, whatever. none of them got hermes's traction. because when you compete with a category-definer by being a cheaper/cleaner version of them, the category-definer just wins by default. you're playing their game on their board.

hermes made their own game. self-authoring. bundled-by-default. maximalist on purpose. the tool gateway as lock-in. every launch reinforces the same thesis: we are not the minimalist primitives company, we are the batteries-included agent-as-a-product company. this is textbook product positioning. every single release - and the way they release it - should be studied.

that's the founder lesson. the user lesson is simpler. pick either. learn from both. then go make something useful.
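The five-tier precedence order in the thread (workspace > user global > managed > bundled > extra) amounts to a shadowing merge. A minimal sketch, assuming a skill is just a name-to-path mapping; the function and tier key names are hypothetical, not OpenClaw's actual API:

```python
# Skill precedence resolution sketch. The tier ordering comes from the
# thread above (workspace > user global > managed > bundled > extra);
# the names and dict shapes are illustrative assumptions.
PRECEDENCE = ["workspace", "user_global", "managed", "bundled", "extra"]

def resolve_skills(sources: dict[str, dict[str, str]]) -> dict[str, str]:
    """Merge skill maps so a higher-precedence tier shadows lower ones.

    `sources` maps tier name -> {skill_name: skill_path}. Merging from
    lowest to highest precedence lets later updates overwrite earlier
    ones, so you can always trace which tier a loaded skill came from.
    """
    merged: dict[str, str] = {}
    for tier in reversed(PRECEDENCE):  # lowest precedence first
        merged.update(sources.get(tier, {}))
    return merged

skills = resolve_skills({
    "bundled": {"git-pr": "/opt/skills/git-pr.md", "notion": "/opt/skills/notion.md"},
    "workspace": {"git-pr": "./skills/git-pr.md"},  # shadows the bundled copy
})
print(skills["git-pr"])  # → ./skills/git-pr.md
```

The "trace it in one grep" property follows from this design: each skill name resolves to exactly one winning path, determined by a fixed tier order rather than by runtime heuristics.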
40 replies · 76 reposts · 599 likes · 70.6K views
ALIVE 🐿️ @AliveContext_
@witcheer Most of the best context engineering will be open source in some way. If you like our roadmap, build it with us. Welcome @witcheer
0 replies · 0 reposts · 3 likes · 118 views