Voltropy

37 posts


@Voltropy

AI from first principles.

Joined July 2025
2 Following · 192 Followers
Pinned Tweet
Voltropy @Voltropy
ANNOUNCEMENT: We've built a coding agent that beats Claude Code on long tasks. Today we're releasing it for free.

Meet Volt: the coding agent who never forgets.

→ Dominates Claude Code on the OOLONG long-context benchmark, including at every length from 32K to 1M tokens.
→ Has unlimited recall. No more amnesia. Volt can code for weeks in a single coherent session.
→ Massively parallel. One tool call can process thousands of tasks. Like "Map" for LLMs.
→ Open source and model agnostic. Try Volt today with @openrouter or your API of choice.

Volt's performance is the result of a new architecture, Lossless Context Management (LCM), which applies lessons from the history of operating systems and programming languages to LLMs. LCM is like paged virtual memory, except for managing context:

- Layer 1: an immutable, append-only store of everything that occurs in the coding session.
- Layer 2: the active "context window," which functions like a cache layer for navigating to the appropriate section(s) of Layer 1 via a high-fanout DAG.

From a user perspective, this feels like an infinite context window, because the model never forgets and performance stays crisp.

For the technical details, read our paper: papers.voltropy.com/LCM
For the code, go to github.com/voltropy/volt. Or get started with one line:

curl -fsSL voltropy.com/install | sh
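The two-layer design described above can be sketched minimally in Python. All class and method names here are illustrative assumptions, not Voltropy's actual API; the real implementation is in the linked repo:

```python
# Minimal sketch of the two-layer LCM idea: an append-only log (Layer 1)
# plus a small working set of references into it (Layer 2).

class LosslessContext:
    def __init__(self, window_limit):
        self.log = []               # Layer 1: immutable, append-only event store
        self.window = []            # Layer 2: indices into the log (the "cache")
        self.window_limit = window_limit

    def append(self, event):
        """Every event is kept forever in Layer 1."""
        self.log.append(event)
        self.window.append(len(self.log) - 1)
        # Evict the oldest reference when the working set overflows;
        # the underlying event is still recoverable from the log.
        if len(self.window) > self.window_limit:
            self.window.pop(0)

    def active_context(self):
        """What the model actually sees on a given turn."""
        return [self.log[i] for i in self.window]

    def recall(self, i):
        """Navigate back to any evicted event: nothing is ever lost."""
        return self.log[i]
```

The key property is that Layer 2 eviction never destroys data: the window only holds references, so anything evicted remains addressable in the log.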
Voltropy tweet media
8 replies · 10 reposts · 76 likes · 24.9K views
Voltropy retweeted
Josh Lehman @jlehman_
Fun fact about lossless-claw: in addition to solving agent amnesia and enabling infinite-length sessions, it's also very token efficient. Lossless summaries are great for prompt caching; I'm at about a 90-94% cache hit rate. Thanks to incremental compaction, your context rarely grows beyond 80k tokens before truncating back to 30-40k or less. This means your model is almost always operating faster, smarter, and cheaper, since it has less overall context to operate on.
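One way to read those numbers: compaction triggers near a high-water mark (~80k tokens) and folds the oldest messages into a summary until the context is back under a low-water mark (~35k). A toy sketch of that policy, with a stubbed-out summarizer and made-up token accounting:

```python
# Toy incremental-compaction policy: when total tokens pass HIGH_WATER,
# fold the oldest messages into one summary until we're back under
# LOW_WATER. A real system would call a model to summarize; here the
# summary is a stub that keeps pointers to the originals.

HIGH_WATER = 80_000
LOW_WATER = 35_000

def summarize(messages):
    # Placeholder: pretend summaries cost ~10% of the source tokens and
    # record which messages they replaced (the "lossless" pointers).
    return {"tokens": sum(m["tokens"] for m in messages) // 10,
            "summary_of": [m["id"] for m in messages]}

def compact(history):
    """history: list of {"id", "tokens"} dicts, oldest first."""
    total = sum(m["tokens"] for m in history)
    if total <= HIGH_WATER:
        return history
    folded = []
    # Keep removing the oldest messages until the remainder fits.
    while history and total - sum(m["tokens"] for m in folded) > LOW_WATER:
        folded.append(history.pop(0))
    return [summarize(folded)] + history
```

With ten 10k-token messages, this folds the oldest seven into a 7k-token summary, leaving roughly 37k tokens of context.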
9 replies · 7 reposts · 160 likes · 10.4K views
Voltropy retweeted
Josh Lehman @jlehman_
A common line of questions I receive: what does lossless-claw do differently than memory systems? How do the two relate? Should I use both? Here's the lowdown.

Memory systems are good for letting you search for information that's external to your context window, typically "memories" extracted from past or different conversations. This is necessary because:

(1) Compaction is lossy: when your conversation gets too big, your agent replaces the whole conversation with a summary. Do this a few times and details from the first conversation are no longer part of the summarized conversation.
(2) Your context is split across many sessions: you have conversations with different agents over time and want to be able to reference all of that in your current conversation.

Memory systems work okay in the first case and pretty well in the second. lossless-claw works phenomenally well in the first case and only indirectly addresses the second. Let's expand on that.

Lossless context makes frequent summaries of smaller pieces of context in the background. It keeps your most recent messages around verbatim (the "fresh tail"). As the summaries accumulate, they get combined into summaries of summaries. This lets your agent stay focused: older content is still there, but becomes more "vague" over time, kind of like your own recollection of events. Current messages are always there and never suddenly disappear to be replaced by a summary. This effectively solves the "post-compaction amnesia" problem, where your agent seems to suddenly forget important recent details about what you were doing.

The reason lossless-claw is called "lossless," though, is that your older messages never get truly removed. The incremental summaries replace the messages but act as "pointers" to them that can be used to expand the source messages back into context. Because the summaries stick around, your agent doesn't forget what it can expand should it need to.

By contrast, memory systems don't give the agent any idea of what they can be used to remember. This is why you have to frequently tell your agent to "search its memories" explicitly. That feels unnatural and is certainly inefficient.

Using lossless-claw means you can keep one conversation going indefinitely without ever needing to reset. This addresses point (2) above indirectly: if you don't need to start new sessions all the time, you don't need a way to recall information from past sessions!

If you work across multiple agents and want to share memories between them, or want to recall information that happened outside the scope of a conversation (e.g., meeting notes), you'll want a memory system. Much of what memory systems are used for today is a poor fit for them, a consequence of the overly naive approaches to managing context that are, unfortunately, industry-standard. Don't get me wrong: they're still useful (I still use one), but they're not the only tool agents need to become effective personal assistants.

lossless-claw is among the first production-grade implementations of an alternative context management strategy, and certainly the most effective, and it's only available on @openclaw. None of this would be possible without the excellent research into Lossless Context Management pioneered by @ClintEhrlich and @rovnys at @Voltropy, so make sure to give them a follow if you're looking for some real alpha.
Peter Steinberger 🦞@steipete

There's a lot of cool stuff being built around openclaw. If the stock memory feature isn't great for you, check out the qmd memory plugin! If you are annoyed that your crustacean is forgetful after compaction, give github.com/martian-engine… a try!
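The "summaries as pointers" idea from the thread above can be sketched as a small tree: a summary node replaces its children in the active context but keeps references to them, so any level can be expanded back to verbatim messages on demand (illustrative names only, not lossless-claw's actual data model):

```python
# Sketch of "lossless" hierarchical summaries: a summary node keeps
# pointers to the messages (or sub-summaries) it replaced, so the agent
# can expand any summary back into the original text at any time.

class Node:
    def __init__(self, text, children=None):
        self.text = text
        self.children = children or []   # empty for verbatim messages

def fold(nodes, label):
    """Combine nodes into a summary (or summary-of-summaries) node."""
    return Node(f"[summary: {label}]", children=nodes)

def expand(node):
    """Recursively recover the verbatim messages under a summary."""
    if not node.children:
        return [node.text]
    out = []
    for child in node.children:
        out.extend(expand(child))
    return out
```

Because `fold` never discards its inputs, the active context can show only summary text while every original message stays reachable through the pointers.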

13 replies · 12 reposts · 135 likes · 26.1K views
Voltropy @Voltropy
A story in three parts. LCM "genuinely is going to change my life."
Voltropy tweet media
0 replies · 3 reposts · 8 likes · 1.4K views
Voltropy retweeted
Clint Ehrlich @ClintEhrlich
A couple months ago, we invented LCM in a basement. Now millions of people have access to it in @OpenClaw. Feels good, man.
Brad Mills 🔑⚡️@bradmillscan

You can now replace your OpenClaw agent's aggressive compaction process with a DAG to supercharge its memory!

Remember DAG shitcoins in crypto? Directed acyclic graphs: an alternate architecture to Bitcoin's blockchain that sacrifices decentralization and security for higher throughput. Finally, a DAG has a use for a Bitcoiner :)

- IOTA had the Tangle with coordinators
- RaiBlocks/Nano was a block-lattice DAG
- Hashgraph used gossip-about-gossip consensus with a permissioned governance council
- ByteBall/Obyte used a DAG with witness nodes

Strip the shitcoins and governance nonsense away and you have something that's actually useful for AI agent memory enhancement.

I hacked a whole skill together (SoulKeep) for my agent to stay in a session as long as possible, because usually you want your agent to have as much context as possible for as long as possible. Josh & team put the DAG to work brilliantly, replacing the default compaction process with rolling summarization nodes as a novel way of holding as much valuable context as possible in the session for as long as possible. It also has some tools to trawl the session context ("walking the DAG," they call it) using a bounded subagent to keep token costs down and performance up.

With the latest OpenClaw release they allow for compaction plugins like lossless-claw. This isn't meant to be a replacement for QMD, your Obsidian vault, or any other extended long-term memory / system-of-record enhancements you're using. It's meant to be used in parallel with those strategies to help your agent have better context for longer.

I'm seriously considering switching to this! losslesscontext.ai
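The "bounded" part of walking the DAG can be sketched as a budgeted traversal: descend into summary nodes only while a token budget remains, so the subagent never pulls the whole session into context. A toy version (node shape and names are assumptions, not the actual tooling):

```python
# Toy "walk the DAG" with a token budget: traverse summary nodes
# breadth-first, visiting a node only if it fits in the remaining
# budget, so a bounded subagent keeps token costs capped.

from collections import deque

def walk(root, budget):
    """root: dict with "text", "tokens", "children". Returns visited texts."""
    visited = []
    queue = deque([root])
    while queue and budget > 0:
        node = queue.popleft()
        if node["tokens"] > budget:
            continue                 # skip nodes that would bust the budget
        budget -= node["tokens"]
        visited.append(node["text"])
        queue.extend(node.get("children", []))
    return visited
```

A larger budget walks deeper into the session; a small one stops at the top-level summaries, which is the whole point of bounding the subagent.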

0 replies · 3 reposts · 12 likes · 2.7K views
Voltropy retweeted
Clint Ehrlich @ClintEhrlich
LCM support in OpenClaw is "a big deal." - @chrysb Couldn't agree more.
Chrys Bader@chrysb

BIG: @openclaw 2026.3.7 just dropped, introducing context engine plugins and lossless-claw.

"OpenClaw's context management (compaction, assembly, etc.) is hardcoded in core, making it impossible for plugins to provide alternative context strategies." — PR author @jlehman_

why is this a big deal? this means plugins can now replace the entire context management strategy, opening up opportunities for the community to improve openclaw's core functionality.

the first application: lossless-claw. based on the Lossless Context Management paper (Ehrlich & Blackman), instead of throwing away old turns, they're compressed into summaries linked back to the originals. the model can expand any summary on demand. nothing is ever actually lost.

on the OOLONG benchmark, lossless-claw scored 74.8 vs Claude Code's 70.3 using the same model, with the gap widening the longer the context gets. benchmarked higher than Claude Code at every context length tested.

the PR author built it, ran it for a week on openclaw, and says "to say it works well would be an understatement."

other honorable mentions in this release:
· per-topic agent routing: each telegram topic runs a different agent. one forum group, multiple agents.
· ios app store prep: mobile is coming.
· docker slim build: bookworm-slim variant for smaller, faster deploys.
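A rough sketch of what a pluggable context engine could look like; the actual OpenClaw plugin API is in the linked PR, and every name below is hypothetical:

```python
# Hypothetical shape of a pluggable context engine: the host delegates
# context assembly to whichever engine is installed, instead of
# hardcoding one compaction strategy in core.

class ContextEngine:
    """Interface a context plugin implements."""
    def assemble(self, history):
        raise NotImplementedError

class DefaultEngine(ContextEngine):
    def assemble(self, history):
        # naive strategy: keep only the most recent messages
        return history[-3:]

class LosslessEngine(ContextEngine):
    def assemble(self, history):
        # lossless-style strategy: summarize old turns, keep a fresh tail
        old, tail = history[:-3], history[-3:]
        if not old:
            return tail
        return [f"[summary of {len(old)} messages]"] + tail

def run_turn(engine, history):
    """The host never hardcodes a strategy; it asks the plugin."""
    return engine.assemble(history)
```

The point of the interface is that swapping `DefaultEngine` for `LosslessEngine` changes the context strategy without touching the host at all.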

0 replies · 2 reposts · 12 likes · 3K views
Voltropy @Voltropy
Support for Lossless Context Management has officially been added to OpenClaw. Congratulations to @jlehman_ on shipping this. Welcome to the world of claws with infinite virtual context. Link below. ⬇️
OpenClaw🦞@openclaw

OpenClaw 2026.3.7 🦞
⚡ GPT-5.4 + Gemini 3.1 Flash-Lite
🤖 ACP bindings survive restarts
🐳 Slim Docker multi-stage builds
🔐 SecretRef for gateway auth
🔌 Pluggable context engines
📸 HEIF image support
💬 Zalo channel fixes
We don't do small releases. github.com/openclaw/openc…

2 replies · 7 reposts · 12 likes · 2.5K views
Voltropy @Voltropy
@davefontenot Risky. Your agent can write code that exceeds its own permissions. Mog fixes that.
1 reply · 0 reposts · 3 likes · 85 views
Dave Font @davefontenot
@Voltropy how risky is it to run your claw without mog?
1 reply · 0 reposts · 0 likes · 149 views
Voltropy @Voltropy
ANNOUNCEMENT: We just mogged malware. Introducing Mog, a programming language for self-modifying AI agents. Mog solves the security problems of claws + the usability problems of sandboxing. 1/N 🧵
Voltropy tweet media
18 replies · 8 reposts · 42 likes · 7.9K views
Voltropy retweeted
Theodore Blackman @rovnys
I'm excited to publish Mog, the world's first programming language designed specifically for extending AI agents. Check this out: an agent could write a Mog script that shells out to bash, *while maintaining the agent's granular bash permissions,* since all Mog I/O has to go through the agent. I'm looking forward to seeing what people do with it. Should be a good plugin language for OpenClaw et al.
Voltropy@Voltropy

ANNOUNCEMENT: We just mogged malware. Introducing Mog, a programming language for self-modifying AI agents. Mog solves the security problems of claws + the usability problems of sandboxing. 1/N 🧵
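The property Theodore describes, where a script's I/O is forced through the agent's own permission checks, can be sketched as a single choke point (Mog's real semantics are its own; every name below is illustrative):

```python
# Sketch of permission-mediated I/O: every shell call a script makes is
# routed through the agent, which enforces its allowlist. A script
# therefore cannot exceed the agent's granular bash permissions.

class Agent:
    def __init__(self, allowed_commands):
        self.allowed = set(allowed_commands)

    def bash(self, command):
        # the single choke point: all script I/O must pass through here
        binary = command.split()[0]
        if binary not in self.allowed:
            raise PermissionError(f"agent denies: {binary}")
        return f"ran: {command}"   # stub; a real agent would execute it

def run_script(agent, commands):
    """A 'script' can only shell out via the agent it extends."""
    return [agent.bash(c) for c in commands]
```

Since the script never gets a raw shell handle, widening its permissions would require changing the agent's allowlist, not the script.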

6 replies · 8 reposts · 44 likes · 8.9K views