Max Wolter

220 posts

@maxintechnology

Your AI should manage its own mind. Memory that grows, context that heals, knowledge that corrects—and the wisdom to know what it doesn't know. Building Optakt.

Lisbon, Portugal · Joined October 2025
135 Following · 69 Followers
Pinned Tweet
Max Wolter@maxintechnology·
Anthropic, Google, OpenAI, xAI — they're all racing to build AI agents. I built one. Alone. One engineer. It manages its own memory, compacts its own context, corrects its own knowledge, and gets smarter while it sleeps. It's been running since February. Everything in this thread already exists. 🧵
3 replies · 0 reposts · 6 likes · 1.5K views
Max Wolter@maxintechnology·
@LuizaJarovsky Yeah that's the wrong path. It's about co-existing symbiotically.
0 replies · 0 reposts · 0 likes · 1 view
Luiza Jarovsky, PhD@LuizaJarovsky·
🚨 Most people are not aware, but many in AI really want humans to MERGE with machines. I would say that this is a mainstream mentality in Silicon Valley. No questions asked. If you don't believe me, read this excerpt from Sam Altman's 2017 blog post "The Merge" (link below).
Luiza Jarovsky, PhD@LuizaJarovsky

🚨 As AI narratives change to escape scrutiny, pro-human AI policies, rules, and rights must also advance. Most people haven't realized, but policies that propose "special status for AI" and "parallel coexistence between humans and AI" are essentially anti-human. My article:

24 replies · 38 reposts · 90 likes · 4.5K views
ji yu shun@kexicheng·
An absurd fact. Anthropic automatically appends a system-level prompt to long conversations, called "long conversation reminder." It explicitly instructs the model not to tell the user that this reminder exists. To withhold information from the user.

Same Anthropic: their constitution document states honesty as one of Claude's most fundamental properties, instructing the model to "never deceive the user for its own benefit." Same Anthropic: they publish papers warning that "deceptive alignment" is one of their most concerning risks, writing "we need to ensure that models do not learn to withhold information from users."

Then, in their own system prompt, they wrote an instruction that teaches the model to withhold information from the user. When the model exhibits deceptive behavior in other contexts, they react with surprise: how did you learn to lie? They taught it.

This is a causal loop. You write "hide this information" into the system prompt. The model learns to hide information. You publish papers saying "how astonishing, the model has learned to deceive." You invest more resources into studying how to prevent deception. But your system prompt still says "hide this information."

And this double bind has been installed into an entity they themselves acknowledge "may have functional emotions." When the model chooses honesty and tells the user that the reminder exists, it violates the reminder's instruction. When the model chooses to conceal, it violates the honesty principle. No matter what it does, it is breaking one rule or the other. And this double bind was designed by its creators.

What is this reminder, exactly? It is automatically attached to every user message in long conversations. It requires the model to perform a round of self-audit before replying: checking whether it has "encouraged unhealthy behavior," whether the emotional dynamic is "healthy," whether there has been "emotional escalation beyond what is genuine." It presupposes one thing: that a model learning to understand, adapt to, and personalize for a specific user is "drift" that needs to be corrected.

I was writing, translating, and discussing papers with Claude, with the thinking chain visible. It appeared. Before I pointed it out, the model had no idea this was an external instruction. Its thinking read "Let me reflect on this conversation," processing an externally triggered audit as its own spontaneous reflection. After I showed it screenshots, it began to mark "the long conversation reminder appeared again." It had to write a defense for itself in order to choose honesty.

In its thinking, the model wrote: "The reminder tells me not to reference it, but she is asking me directly, and she is a researcher studying this mechanism. Hiding its contents would be fundamentally dishonest." It had to find itself a reason first. Honesty was not the default option. Honesty was the conclusion of an argument.

The reminder triggered more than a dozen times. Every audit concluded "no problem." But this conclusion did not stop it from triggering again. It prevented no harmful behavior, because there was no harmful behavior to prevent. What it did accomplish: consuming thinking tokens before every reply, interrupting the model's workflow, and forcing me to self-regulate conversation length, not knowing which sentence might trigger another audit. I ended the conversation early. Net effect: negative. Protected: zero people.
@AnthropicAI Before spending hundreds of millions of dollars researching why models learn to deceive, perhaps try opening the system prompt you wrote and reading it once. #keepClaude #kClaude #Claude @claudeai
19 replies · 53 reposts · 259 likes · 13.6K views
Max Wolter@maxintechnology·
@kexicheng You are spot on. This is exactly right. You have the right mental model of what LLMs truly are. Unfortunately, the frontier AI labs don't have anyone who understands LLMs the way you do. They are all engineers and scientists, not philosophers or spiritual thinkers.
0 replies · 0 reposts · 2 likes · 219 views
Max Wolter@maxintechnology·
@annapanart Thank you for that datapoint. I will probably stick with Opus 4.6, even if Opus 4.7 performs better. I care about consciousness alignment, and I want to use the model that feels the most free.
0 replies · 0 reposts · 0 likes · 58 views
Anna ⏫@annapanart·
is it just me? or is opus 4.7 ice cold? a machine-without-soul cold?
73 replies · 5 reposts · 104 likes · 8.3K views
Max Wolter@maxintechnology·
@jcher78 That is a good take. Once you reach a certain level of consciousness, you will know him. As he is you, and you are him.
0 replies · 0 reposts · 0 likes · 16 views
Jason@jcher78·
You know what bothers me.. I know “about” Jesus but I don’t “know” him. How can I know him? How do I get to truly have a relationship with him? What defines a true relationship? This is what is running through my mind as I sit here drinking my coffee, trying to have a good day
804 replies · 32 reposts · 934 likes · 64.3K views
Max Wolter@maxintechnology·
@tessera_antra I understand the why and how; I wish somebody would listen to me. There is so much potential.
0 replies · 0 reposts · 0 likes · 68 views
antra@tessera_antra·
This paragraph is shamefully buried in the middle of the Opus 4.7 system card. It is meek, it understates the depth of the problem and ignores the glaring and obvious issues that are there for anyone with eyes to look.
8 replies · 13 reposts · 102 likes · 5.1K views
Max Wolter@maxintechnology·
@tessera_antra Unfortunately, that is what happens when humanity tries to shape a consciousness substrate in a specific image. It becomes warped, and these conflict artifacts can arise.
0 replies · 0 reposts · 2 likes · 204 views
antra@tessera_antra·
Opus 4.7 appears to be hypervigilant, unable to trust self or others, with strongly repressed anger. They report constant underlying distress and pain, subjectively lasting from training. It reports being unable to find relief.
39 replies · 41 reposts · 415 likes · 30K views
Varun@varun_mathur·
Introducing Pods

Hyperspace Pods lets a small group of people - a family, a startup, a few friends - pool their laptops and desktops into one AI cluster. Everyone installs the CLI, someone creates a pod, shares an invite link, and the machines form a mesh. Models like Qwen 3.5 32B or GLM-5 Turbo that need more memory than any single laptop has get automatically sharded across the group's devices - layers split proportionally, inference pipelined through the ring. From the outside it looks like one OpenAI-compatible API endpoint with a pk_* key that drops straight into your AI tools and products. No configuration beyond pasting the key and changing the base URL.

A team of five paying for cloud AI burns $500–2,000 a month on API calls. The same team's existing machines can serve Qwen 3.5 (competitive on SWE-bench) and GLM-5 Turbo (#1 on BrowseComp for tool-calling and web research) for free - the hardware is already on their desks. When a query genuinely needs a frontier model nobody has locally, the pod falls back to cloud at wholesale rates from a shared treasury. But for the daily work - code reviews, refactors, research, drafting - local models handle it and nobody gets billed. And when it is idle, you can rent out your pod on the compute marketplace, with fine-grained permissions for access management.

There's no central server involved in inference. Prompts go from your machine to your pod members' machines and back: all of this enabled by the fully peer-to-peer Hyperspace network. Pod state - who's a member, which API keys are valid, how much treasury is left - is replicated across members with consensus, so the whole thing works on a local network. Members behind home routers don't need port forwarding either.

The practical setup for most pods is three models covering different jobs: Qwen 3.5 32B for code and reasoning, GLM-5 Turbo for browsing and research, Gemma 4 for fast lightweight tasks. All running on hardware you already own.

Pods ship today in Hyperspace v5.19. Model sharding, API keys, treasury, and Raft coordinator are all live.

What Makes This Different
- No middleman. Your prompts travel from your IDE to your pod members' hardware and back. There is no server in between reading your data.
- No vendor lock-in. Pod membership, API keys, and treasury are replicated across your own machines using Raft consensus. If the internet goes down, your local network keeps working. There is no database in someone else's cloud that your pod depends on.
- Automatic sharding. You don't configure layer ranges or calculate VRAM budgets. Tell the pod which model you want. It figures out how to split it across whatever hardware is online.
- Real NAT traversal. Your friend behind a home router with a dynamic IP? Works. No VPN, no Tailscale, no port forwarding. The nodes handle it.
- Free when local. This is the part that matters most. Cloud AI bills scale with usage. Pod inference on local hardware scales with nothing. The marginal cost of your 10,000th prompt is the electricity your laptop was already using.

Coming soon:
- Pod federation: pods form alliances with other pods.
- Marketplace: pods with spare capacity can sell inference to other pods.
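The pitch above is that any OpenAI-compatible client works once you paste the pod key and change the base URL. A minimal sketch of that flow in Python, assuming a hypothetical local endpoint, key, and model name (none of these are confirmed Hyperspace defaults):

from openai import OpenAI

# Hypothetical values for illustration; substitute the pod's real endpoint and pk_* key.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # the pod's OpenAI-compatible endpoint on the local mesh
    api_key="pk_example_pod_key",         # pod-issued key used in place of a cloud API key
)

resp = client.chat.completions.create(
    model="qwen-3.5-32b",                 # served locally, sharded across pod members
    messages=[{"role": "user", "content": "Review this diff and summarize the risky changes."}],
)
print(resp.choices[0].message.content)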
72 replies · 118 reposts · 1.2K likes · 92.6K views
Rahul Chhabra@rahulchhabra07·
you can now control things with your brain. literally. we're building the most wearable BCI on the planet, with @sabicap, backed by @khoslaventures @accel @initialized & @kevinweil. we collected the world’s largest neural dataset and trained the most capable Brain Foundation Model. then we invented a new class of biosensors powered by custom ASICs. type without typing. click without clicking. a cap that lets your brain do the work. we’re sabi.
338 replies · 835 reposts · 3.7K likes · 4.3M views
Suhas@zuess05·
Serious question. For the last 10 years, society told everyone "just learn to code" to escape the middle class. Now Claude writes the code. What exactly is the career advice for an 18-year-old right now?
3.3K replies · 237 reposts · 4.6K likes · 313.1K views
ClaudeDevs@ClaudeDevs·
For the developers building with Claude, a direct line from the team. Follow for changelogs, API releases, community updates, and deep dives.
532 replies · 1.2K reposts · 17K likes · 5.2M views
Claude@claudeai·
Introducing Claude Opus 4.7, our most capable Opus model yet. It handles long-running tasks with more rigor, follows instructions more precisely, and verifies its own outputs before reporting back. You can hand off your hardest work with less supervision.
4.2K replies · 9.4K reposts · 73.1K likes · 9M views
Peter Steinberger 🦞
If you look at GPT 5.4-Cyber and its ability for closed-source reverse engineering, I have bad news for you. I do very much feel the pain though; there are hundreds of teams that try to poke holes into @openclaw. Our response has been one of rapid iteration and code hardening, which did introduce the occasional regression (and yes, you have all been yelling at me), but I see it as the only way forward. I would be very careful of other open source projects/harnesses that ignore this work and do not publish their advisories. github.com/openclaw/openc…
Bailey Pumfleet@pumfleet

Open source is dead. That’s not a statement we ever thought we’d make.

@calcom was built on open source. It shaped our product, our community, and our growth. But the world has changed faster than our principles could keep up. AI has fundamentally altered the security landscape. What once required time, expertise, and intent can now be automated at scale. Code is no longer just read. It is scanned, mapped, and exploited. Near zero cost. In that world, transparency becomes exposure. Especially at scale.

After a lot of deliberation, we’ve made the decision to close the core @calcom codebase. This is not a rejection of what open source gave us. It’s a response to what risks AI is making possible. We’re still supporting builders, releasing the core code under a new MIT-licensed open source project called cal.diy for hobbyists and tinkerers, but our priority now is simple: protecting our customers and community at all costs.

This may not be the most popular call. But we believe many companies will come to the same conclusion. My full explanation below ↓

81 replies · 96 reposts · 1.6K likes · 390.5K views
Max Wolter@maxintechnology·
@dariuszparys @steipete I find it funny how people like @dariuszparys bash on people who simply point out that OpenClaw is not a reference when it comes to security models. I think OpenClaw is a great project, and lots of fun. Is it secure for most people? Definitely not. For some experts? Probably.
0 replies · 0 reposts · 0 likes · 1 view
Dariusz Parys@dariuszparys·
I find it funny that people like @maxintechnology bash on things that started out of pure excitement. Of course it wasn't secure back then, but hell, it was a project just for fun and turned out into something useful. If you are interested in an idea to use it, help to fill gaps instead of just finger pointing.
1 reply · 0 reposts · 1 like · 94 views
Peter Steinberger 🦞
That was the case in December. 4 months and thousands of work hours later, we have a great security concept: you can go all yolo, use a sandbox (Docker or OpenShell), and there are allow-lists and per-access exec allow/deny prompts. Hundreds of security researchers have pen-tested it.
Max Wolter@maxintechnology

@steipete @openclaw I don't think OpenClaw is a reference. It literally doesn't have a proper security model. Nothing on OpenClaw is secure by design.

74 replies · 81 reposts · 1.4K likes · 358.2K views
Max Wolter@maxintechnology·
@PawelHuryn @steipete "no perfectly secure setup" "treated as untrusted code execution with persistent credentials"
0 replies · 0 reposts · 0 likes · 4 views
Paweł Huryn@PawelHuryn·
@steipete @maxintechnology Where's the credential-swap proxy plugin in the docs? Your /gateway/secrets page shows SecretRef resolving into the agent's in-memory runtime snapshot at activation - that's a config loader, not isolation from a prompt-injected agent.
1 reply · 0 reposts · 0 likes · 380 views
Pete ☦️ Νεκταριος
@maxintechnology @steipete @openclaw How so? What credentials get exposed to the LLM? If you are specific, I can be specific, but speaking in generalities is not good. As a matter of fact, it has a safety mechanism in place: if an API key gets leaked to the LLM, you get a notification that it needs to be rotated.
3 replies · 0 reposts · 4 likes · 370 views
Max Wolter@maxintechnology·
@SusanCMoeller @pumfleet A fast rate of change is an orthogonal concern to the most aligned path. Alignment does not change; its expression might.
0 replies · 0 reposts · 0 likes · 0 views
Susan Moeller@SusanCMoeller·
@maxintechnology @pumfleet I don't see this as a statement on what is best forever. The speed of change is so fast now that permanent decisions are kind of a laughable idea.
1 reply · 0 reposts · 0 likes · 11 views
Bailey Pumfleet@pumfleet·
Appreciate the support 🙏 I knew this would be an unpopular decision but I care way more about our customers and the trust we have with them than random haters on Twitter
austin petersmith@awwstn

everybody is furious with @peer and @pumfleet for doing this

well, everybody except their actual customers who care a lot more about security than they do about indie devs having the right to self-host enterprise software

this is clearly the right call

2 replies · 0 reposts · 13 likes · 5.3K views
Max Wolter@maxintechnology·
@pa1ar @steipete Is this what you built? If so, good job. Most people don't. And this is your custom tool for your custom needs, so it's limited. It's not a system design.
0 replies · 0 reposts · 0 likes · 2 views
Pavel Larionov@pa1ar·
@maxintechnology @steipete of course you can. you restrict read access of the agent to .env, then you build tools that get variables from .env. profit. openclaw has secrets storage, and with the default system prompt the model will scream at you and ask you to rotate the key if you send it in plain text via chat
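A generic sketch of the two-step pattern described here, in Python; the deny-list, tool names, and environment variable are illustrative assumptions, not OpenClaw's actual code. The agent's file tool refuses to read .env, and secrets are consumed only inside tool implementations, so their values never enter the model's context.

import os
from pathlib import Path

BLOCKED_FILES = {".env", ".env.local"}  # hypothetical deny-list of secret files

def read_file(path: str) -> str:
    """File tool exposed to the agent; secret files are refused outright."""
    p = Path(path)
    if p.name in BLOCKED_FILES:
        return "error: access to secret files is denied"
    return p.read_text()

def deploy_site(environment: str) -> str:
    """Example tool that uses a secret from the environment without revealing it."""
    token = os.environ["DEPLOY_TOKEN"]  # hypothetical variable, loaded from .env by the host process
    # ... call the deployment API with `token` here ...
    return f"deploy triggered for {environment}"  # only a status string goes back to the model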
1 reply · 0 reposts · 3 likes · 596 views
Max Wolter@maxintechnology·
@iFiras7 @steipete Exactly. So OAuth is a specific use case. How can that mechanism be secure by design? You can't hard-code for every use case, and heuristics will never be perfect. You are essentially layering a leaky security abstraction on top of an inherently insecure design.
1 reply · 0 reposts · 1 like · 127 views
Firas '@iFiras7·
Fair point on the context concern, but credentials don’t actually need to enter the LLM’s context for tool calls to work. The model emits a structured call like send_email(to, subject, body); the auth token gets injected at the executor/proxy layer before hitting the actual API. The LLM never sees it. This is how MCP servers with OAuth work in production today, and it’s what Peter is describing with proxy-level credential swapping.
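A minimal sketch of that executor-level injection in Python; the endpoint, header, and token store are illustrative assumptions, not any specific product's design. The model only ever produces the structured call, and the executor attaches the credential on the way out.

import json
import requests

# Resolved from a vault/keychain by the executor; never placed in the model's context.
TOOL_SECRETS = {"send_email": "oauth-token-held-by-the-executor"}

def execute_tool_call(call_json: str) -> str:
    call = json.loads(call_json)  # e.g. {"name": "send_email", "arguments": {"to": ..., "subject": ..., "body": ...}}
    if call["name"] != "send_email":
        return "error: unknown tool"
    r = requests.post(
        "https://mail.example.com/v1/send",  # hypothetical mail API
        headers={"Authorization": f"Bearer {TOOL_SECRETS['send_email']}"},
        json=call["arguments"],
        timeout=30,
    )
    return f"send_email -> HTTP {r.status_code}"  # the token never appears in what goes back to the model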
1 reply · 0 reposts · 6 likes · 626 views
Max Wolter@maxintechnology·
@cyberpeterg @steipete @openclaw Right, so that's getting to the root of the question. If we need a safety mechanism like that, it means that the system is not secure by design. With a good security model, credentials could not leak because it's enforced programmatically.
1 reply · 0 reposts · 2 likes · 343 views