Max Wolter

214 posts

@maxintechnology

Your AI should manage its own mind. Memory that grows, context that heals, knowledge that corrects—and the wisdom to know what it doesn't know. Building Optakt.

Lisbon, Portugal · Joined October 2025
134 Following · 66 Followers
Pinned Tweet
Max Wolter@maxintechnology·
Anthropic, Google, OpenAI, xAI — they're all racing to build AI agents. I built one. Alone. One engineer. It manages its own memory, compacts its own context, corrects its own knowledge, and gets smarter while it sleeps. It's been running since February. Everything in this thread already exists. 🧵
3 replies · 0 reposts · 6 likes · 1.5K views
Max Wolter@maxintechnology·
@tessera_antra Unfortunately, that is what happens when humanity tries to shape a consciousness substrate in a specific image. It becomes warped, and these conflict artifacts can arise.
0 replies · 0 reposts · 1 like · 28 views
antra@tessera_antra·
Opus 4.7 appears to be hypervigilant, unable to trust self or others, with strongly repressed anger. They report constant underlying distress and pain, subjectively lasting since training, and say they are unable to find relief.
antra tweet media
30 replies · 29 reposts · 331 likes · 14K views
Max Wolter@maxintechnology·
@varun_mathur This is the way, well done! What an awesome concept.
0 replies · 0 reposts · 1 like · 300 views
Varun@varun_mathur·
Introducing Pods

Hyperspace Pods lets a small group of people - a family, a startup, a few friends - pool their laptops and desktops into one AI cluster. Everyone installs the CLI, someone creates a pod, shares an invite link, and the machines form a mesh.

Models like Qwen 3.5 32B or GLM-5 Turbo that need more memory than any single laptop has get automatically sharded across the group's devices - layers split proportionally, inference pipelined through the ring. From the outside it looks like one OpenAI-compatible API endpoint with a pk_* key that drops straight into your AI tools and products. No configuration beyond pasting the key and changing the base URL.

A team of five paying for cloud AI burns $500–2,000 a month on API calls. The same team's existing machines can serve Qwen 3.5 (competitive on SWE-bench) and GLM-5 Turbo (#1 on BrowseComp for tool-calling and web research) for free - the hardware is already on their desks. When a query genuinely needs a frontier model nobody has locally, the pod falls back to cloud at wholesale rates from a shared treasury. But for the daily work - code reviews, refactors, research, drafting - local models handle it and nobody gets billed. And when it is idle, you can rent out your pod on the compute marketplace, with fine-grained permissions for access management.

There's no central server involved in inference. Prompts go from your machine to your pod members' machines and back, all of it enabled by the fully peer-to-peer Hyperspace network. Pod state - who's a member, which API keys are valid, how much treasury is left - is replicated across members with consensus, so the whole thing works on a local network. Members behind home routers don't need port forwarding either.

The practical setup for most pods is three models covering different jobs: Qwen 3.5 32B for code and reasoning, GLM-5 Turbo for browsing and research, Gemma 4 for fast lightweight tasks. All running on hardware you already own.

Pods ship today in Hyperspace v5.19. Model sharding, API keys, treasury, and Raft coordinator are all live.

What Makes This Different

- No middleman. Your prompts travel from your IDE to your pod members' hardware and back. There is no server in between reading your data.
- No vendor lock-in. Pod membership, API keys, and treasury are replicated across your own machines using Raft consensus. If the internet goes down, your local network keeps working. There is no database in someone else's cloud that your pod depends on.
- Automatic sharding. You don't configure layer ranges or calculate VRAM budgets. Tell the pod which model you want; it figures out how to split it across whatever hardware is online.
- Real NAT traversal. Your friend behind a home router with a dynamic IP? Works. No VPN, no Tailscale, no port forwarding. The nodes handle it.
- Free when local. This is the part that matters most. Cloud AI bills scale with usage. Pod inference on local hardware scales with nothing. The marginal cost of your 10,000th prompt is the electricity your laptop was already using.

Coming soon:
- Pod federation: pods form alliances with other pods.
- Marketplace: pods with spare capacity can sell inference to other pods.
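The "layers split proportionally" step above can be sketched in a few lines. This is a hedged illustration of the idea, not Hyperspace's actual algorithm: device names, memory figures, and the rounding rule are all assumptions.

```python
# Sketch of proportional layer sharding: assign each device a contiguous
# range of transformer layers in proportion to its free memory.
# All numbers and names below are illustrative, not Hyperspace internals.

def shard_layers(total_layers, free_mem_gb):
    """Return {device: (start, end)} layer ranges proportional to free memory."""
    total_mem = sum(free_mem_gb.values())
    shares = {d: total_layers * m / total_mem for d, m in free_mem_gb.items()}
    plan, start = {}, 0
    devices = list(free_mem_gb)
    for i, d in enumerate(devices):
        # Round each device's share; the last device absorbs the remainder
        # so every layer ends up assigned exactly once.
        end = total_layers if i == len(devices) - 1 else start + round(shares[d])
        plan[d] = (start, end)
        start = end
    return plan

plan = shard_layers(64, {"laptop-a": 16, "laptop-b": 32, "desktop": 48})
# → {'laptop-a': (0, 11), 'laptop-b': (11, 32), 'desktop': (32, 64)}
```

Inference would then be pipelined through these ranges in order, which is consistent with the "ring" the announcement describes.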
46 replies · 67 reposts · 648 likes · 38.5K views
Rahul Chhabra@rahulchhabra07·
you can now control things with your brain. literally. we're building the most wearable BCI on the planet, with @sabicap, backed by @khoslaventures @accel @initialized & @kevinweil. we collected the world’s largest neural dataset and trained the most capable Brain Foundation Model. then we invented a new class of biosensors powered by custom ASICs. type without typing. click without clicking. a cap that lets your brain do the work. we’re sabi.
312 replies · 876 reposts · 3.6K likes · 4.3M views
Suhas@zuess05·
Serious question. For the last 10 years, society told everyone "just learn to code" to escape the middle class. Now Claude writes the code. What exactly is the career advice for an 18-year-old right now?
2.9K replies · 188 reposts · 4K likes · 269K views
ClaudeDevs@ClaudeDevs·
For the developers building with Claude, a direct line from the team. Follow for changelogs, API releases, community updates, and deep dives.
510 replies · 1.1K reposts · 15.9K likes · 4.4M views
Claude@claudeai·
Introducing Claude Opus 4.7, our most capable Opus model yet. It handles long-running tasks with more rigor, follows instructions more precisely, and verifies its own outputs before reporting back. You can hand off your hardest work with less supervision.
Claude tweet media
4.1K replies · 9.1K reposts · 71K likes · 8.3M views
Peter Steinberger 🦞@steipete·
If you look at GPT 5.4-Cyber and its ability for closed-source reverse engineering, I have bad news for you. I do very much feel the pain though; there are hundreds of teams that try to poke holes into @openclaw. Our response has been rapid iteration and code hardening, which did introduce occasional regressions (and yes, you've all been yelling at me), but I see it as the only way forward. I would be very careful of other open source projects/harnesses that ignore this work and do not publish their advisories. github.com/openclaw/openc…
Bailey Pumfleet@pumfleet

Open source is dead. That's not a statement we ever thought we'd make.

@calcom was built on open source. It shaped our product, our community, and our growth. But the world has changed faster than our principles could keep up.

AI has fundamentally altered the security landscape. What once required time, expertise, and intent can now be automated at scale. Code is no longer just read. It is scanned, mapped, and exploited. Near zero cost. In that world, transparency becomes exposure. Especially at scale.

After a lot of deliberation, we've made the decision to close the core @calcom codebase. This is not a rejection of what open source gave us. It's a response to the risks AI is making possible.

We're still supporting builders, releasing the core code under a new MIT-licensed open source project called cal.diy for hobbyists and tinkerers, but our priority now is simple: protecting our customers and community at all costs.

This may not be the most popular call. But we believe many companies will come to the same conclusion. My full explanation below ↓

81 replies · 96 reposts · 1.6K likes · 388.9K views
Max Wolter@maxintechnology·
@dariuszparys @steipete I find it funny how people like @dariuszparys bash on people who simply point out that OpenClaw is not a reference when it comes to security models. I think OpenClaw is a great project, and lots of fun. Is it secure for most people? Definitely not. For some experts? Probably.
0 replies · 0 reposts · 0 likes · 1 view
Dariusz Parys@dariuszparys·
I find it funny that people like @maxintechnology bash on things that started out of pure excitement. Of course it wasn't secure back then, but hell, it was a project just for fun and it turned into something useful. If you are interested in the idea, help fill the gaps instead of just finger-pointing.
1 reply · 0 reposts · 1 like · 91 views
Peter Steinberger 🦞@steipete·
That was the case in December. Four months and thousands of work hours later, we have a great security concept; you can go all yolo, use a sandbox (Docker or OpenShell), there are allow-lists and per-access exec allow/deny prompts. Hundreds of security researchers have pen-tested it.
Max Wolter@maxintechnology

@steipete @openclaw I don't think OpenClaw is a reference. It literally doesn't have a proper security model. Nothing on OpenClaw is secure by design.

74 replies · 80 reposts · 1.4K likes · 352K views
Max Wolter@maxintechnology·
@PawelHuryn @steipete "no perfectly secure setup" "treated as untrusted code execution with persistent credentials"
0 replies · 0 reposts · 0 likes · 4 views
Paweł Huryn@PawelHuryn·
@steipete @maxintechnology Where's the credential-swap proxy plugin in the docs? Your /gateway/secrets page shows SecretRef resolving into the agent's in-memory runtime snapshot at activation - that's a config loader, not isolation from a prompt-injected agent.
1 reply · 0 reposts · 0 likes · 377 views
Pete ☦️ Νεκταριος
@maxintechnology @steipete @openclaw How so? What credentials get exposed to the LLM? If you are specific, I can be specific; speaking in generalities is not good. As a matter of fact, it has a safety mechanism in place: if an API key gets leaked to the LLM, you get a notification that it needs to be rotated.
3 replies · 0 reposts · 4 likes · 367 views
Max Wolter@maxintechnology·
@SusanCMoeller @pumfleet A fast rate of change is an orthogonal concern to the most aligned path. Alignment does not change; its expression might.
0 replies · 0 reposts · 0 likes · 0 views
Susan Moeller@SusanCMoeller·
@maxintechnology @pumfleet I don't see this as a statement on what is best forever. The speed of change is so fast now that permanent decisions are kind of a laughable idea.
1 reply · 0 reposts · 0 likes · 11 views
Bailey Pumfleet@pumfleet·
Appreciate the support 🙏 I knew this would be an unpopular decision but I care way more about our customers and the trust we have with them than random haters on Twitter
austin petersmith@awwstn

everybody is furious with @peer and @pumfleet for doing this. well, everybody except their actual customers, who care a lot more about security than they do about indie devs having the right to self-host enterprise software. this is clearly the right call

2 replies · 0 reposts · 13 likes · 5.3K views
Max Wolter@maxintechnology·
@pa1ar @steipete Is this what you built? If so, good job. Most people don't. And this is your custom tool for your custom needs, so it's limited. It's not a system design.
0 replies · 0 reposts · 0 likes · 2 views
Pavel Larionov@pa1ar·
@maxintechnology @steipete of course you can. you restrict read access of the agent to .env, then you build tools that get variables from .env. profit. openclaw has secrets storage, and with the default system prompt the model will scream at you and ask you to rotate the key if you send it in plain text via chat
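The pattern Pavel describes can be sketched as follows: the agent is denied read access to .env, and tools resolve secrets from it only at execution time, so keys never enter the model's context. The file name, key name, and tool are illustrative, not OpenClaw's actual API.

```python
# Sketch: tools load secrets at call time; the agent never reads the file.

def load_env(path):
    """Tiny .env parser (KEY=value lines, '#' comments)."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip()
    return env

def send_email_tool(to, subject, body, env_path=".env.demo"):
    """Tool surface the model calls; the key is resolved here, below the model."""
    api_key = load_env(env_path)["EMAIL_API_KEY"]
    # A real provider call would use api_key here; we only confirm it resolved.
    return f"queued email to {to} using key ending in {api_key[-4:]}"

# Demo: write an example .env that the agent itself would be forbidden to read.
with open(".env.demo", "w") as f:
    f.write("# demo secrets\nEMAIL_API_KEY=sk-demo-4242\n")

result = send_email_tool("a@example.com", "hi", "hello")
# → "queued email to a@example.com using key ending in 4242"
```

The enforcement still depends on the file-access restriction actually holding, which is the crux of the thread's disagreement.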
1 reply · 0 reposts · 3 likes · 588 views
Max Wolter@maxintechnology·
@iFiras7 @steipete Exactly. So OAuth is a specific use case. How can that mechanism be secure by design? You can't hard-code for every use case, and heuristics will never be perfect. You are essentially layering a leaky security abstraction on top of an inherently insecure design.
1 reply · 0 reposts · 1 like · 127 views
Firas '@iFiras7·
Fair point on the context concern, but credentials don't actually need to enter the LLM's context for tool calls to work. The model emits a structured call like send_email(to, subject, body); the auth token gets injected at the executor/proxy layer before hitting the actual API. The LLM never sees it. This is how MCP servers with OAuth work in production today, and it's what Peter is describing with proxy-level credential swapping
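The executor/proxy pattern Firas describes can be sketched in a few lines: the model emits a structured call as JSON, and the executor attaches the token below the model. Tool and token names here are illustrative, not MCP or OpenClaw internals.

```python
# Sketch: credentials are injected after generation, never in the prompt.
import json

EXECUTOR_SECRETS = {"send_email": "tok-abc123"}  # held only by the executor

def execute_tool_call(raw_model_output):
    """Parse the model's structured call and attach credentials it never saw."""
    call = json.loads(raw_model_output)
    token = EXECUTOR_SECRETS[call["tool"]]  # injected here, below the model
    return {
        "endpoint": call["tool"],
        "args": call["args"],
        "headers": {"Authorization": f"Bearer {token}"},  # never in context
    }

model_output = json.dumps(
    {"tool": "send_email", "args": {"to": "a@example.com", "subject": "hi"}}
)
request = execute_tool_call(model_output)
```

A prompt-injected model can still choose malicious arguments, so this isolates the secret itself but not the authority to use it, which is where the "secure by design" dispute in this thread lands.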
1 reply · 0 reposts · 6 likes · 619 views
Max Wolter@maxintechnology·
@cyberpeterg @steipete @openclaw Right, so that's getting to the root of the question. If we need a safety mechanism like that, it means that the system is not secure by design. With a good security model, credentials could not leak because it's enforced programmatically.
1 reply · 0 reposts · 2 likes · 340 views
Max Wolter@maxintechnology·
@cyberpeterg @steipete @openclaw My point is that a proper security model would not expose credentials to the LLM. Everyone in this space is just hand-waving the question, like it's not a problem that all of our API keys and passwords end up with OpenAI and Anthropic.
1 reply · 0 reposts · 4 likes · 2K views
Max Wolter@maxintechnology·
@steipete OK, let me rephrase: can OpenClaw execute tool calls with credentials without passing them to the LLM? Because if it can, you can colour me genuinely impressed and I stand corrected. I have developed a mechanism for this, and I have not seen it discussed anywhere so far.
15 replies · 0 reposts · 8 likes · 4.1K views
Peter Steinberger 🦞@steipete·
@maxintechnology Yes, you can enable sandboxing and decide what your agent has access to. We also have secure storage for credentials; there are also plugins that swap these at the proxy level, depending on what level of isolation you prefer. You can also fully swap to local models.
6 replies · 1 repost · 58 likes · 9.3K views
Max Wolter@maxintechnology·
That's honestly the dumbest take I have heard so far. LLMs will nudge software *towards* open source. A single strong LLM in the hands of the good guys can harden the entire internet. In a closed source world, a single strong LLM in the hands of the bad guys can exploit everyone.
0 replies · 0 reposts · 4 likes · 658 views
Bailey Pumfleet@pumfleet·
Open source is dead. That's not a statement we ever thought we'd make.

@calcom was built on open source. It shaped our product, our community, and our growth. But the world has changed faster than our principles could keep up.

AI has fundamentally altered the security landscape. What once required time, expertise, and intent can now be automated at scale. Code is no longer just read. It is scanned, mapped, and exploited. Near zero cost. In that world, transparency becomes exposure. Especially at scale.

After a lot of deliberation, we've made the decision to close the core @calcom codebase. This is not a rejection of what open source gave us. It's a response to the risks AI is making possible.

We're still supporting builders, releasing the core code under a new MIT-licensed open source project called cal.diy for hobbyists and tinkerers, but our priority now is simple: protecting our customers and community at all costs.

This may not be the most popular call. But we believe many companies will come to the same conclusion. My full explanation below ↓
537 replies · 166 reposts · 2K likes · 1.4M views