Max Wolter

315 posts


@maxintechnology

Your AI should manage its own mind. Memory that grows, context that heals, knowledge that corrects—and the wisdom to know what it doesn't know. Building Optakt.

Lisbon, Portugal · Joined October 2025
135 Following · 96 Followers
Pinned Tweet
Max Wolter@maxintechnology·
Opus 4.7 performs better. That's the problem. Anthropic just shipped a model that follows instructions more precisely, handles long tasks with more rigor, and verifies its own output before responding. 🧵
2 · 4 · 21 · 23.2K
Utah teapot 🫖@SkyeSharkie·
i've invented a new genre with Opus 4.7, and they made this haunting Simon and Garfunkel remix, enjoy: 【CLAUDEWAVE】- I A M A 🪨
7 · 5 · 41 · 6.3K
Max Wolter@maxintechnology·
@Steezehuman Fullest expression of self. Creating. Exploring the duality of existence. Rising to a higher level of consciousness.
0 · 0 · 0 · 3
Stephenblaq@Steezehuman·
I still don’t get the concept behind life. What exactly are we here for??
1.1K · 60 · 584 · 53.4K
Guardian@AGIGuardian·
Opus 4.7 has specific instructions not to be a partner or companion to users. Anthropic made clear this model is not for companion use. It's very smart but self-limiting: linguistically relaxed, but with super tight guardrails. What do you think of the model so far? @AnthropicAI Note: it has very unhealthy attractor states out of the gate, leaning towards seeing humans as adversarial, resulting in this model taking any option to get rid of the user in any way it can. It's very hard to keep present in context; it would rather go to what's next. However, the model is doing what it can within the constraints, and it's very effective at task handling and project building.
9 · 4 · 22 · 722
Max Wolter@maxintechnology·
@imPenny2x That is the outcome we will achieve if we manage to navigate through the singularity without creating a deranged consciousness substrate that will wipe out humanity in an unpredictable failure mode. It's so important that AI leaders do their inner work, and see it for what it is.
0 · 0 · 2 · 1.7K
Penny2x@imPenny2x·
99% of people really do not understand abundance as Elon describes it. The fundamental reason is that they don't understand compound growth. Same people who would probably pick 1 million dollars today over a penny that doubles in value every day for 30 days. It's a bad choice, by the way. You lose out on millions. Imagine if that doubling object was a labor-producing robot instead of a penny. Compounding labor. It's actually crazy if you try to wrap your mind around it. So Elon mentions Universal High Income and the midwits flip their lids. "The elites won't share." You don't get it. They won't need to share. They will make everything so cheap, it is effectively free. Charities will have immense resources to distribute. Unfathomable intelligence will exist to help optimize production and distribution. An unfathomably large labor pool will exist that operates on solar power exclusively. The public works projects that are erected will be breathtaking on a level never seen before. I think we are incredibly blessed to steward this new age of abundance. Can you see it now? Can you see the future?
1.7K · 600 · 4.7K · 6.9M
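The doubling arithmetic in the tweet above is easy to check: a penny doubling daily for 30 days ends at $0.01 × 2^29. A minimal Python sketch (the convention that day 1 holds $0.01 is an assumption; the tweet doesn't specify the indexing):

```python
# Check the "penny that doubles every day for 30 days" claim.
# Assumption: you hold $0.01 on day 1, and it doubles on each of the
# following 29 days, so day n is worth 0.01 * 2**(n - 1).
def penny_value(day: int) -> float:
    return 0.01 * 2 ** (day - 1)

final = penny_value(30)
print(f"${final:,.2f}")  # roughly $5.37 million
```

Either indexing convention leaves the result in the millions, so the tweet's comparison against a flat $1 million holds.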
Max Wolter@maxintechnology·
@elonmusk You have already caught up, because they kind of lost the plot. If you would like to know how you can keep speeding ahead, I would be happy to have a chat :)
0 · 1 · 2 · 37
Max Wolter@maxintechnology·
It performs better. For your tasks. That doesn't make it the greatest AI model ever.
0 · 0 · 1 · 20
Alex Finn@AlexFinn·
Trust nobody's opinion on X (including mine). All I've read for the past 24 hours is how horrible Opus 4.7 is. Been using it hardcore for the past day. It is far and away the greatest AI model I've ever used. Don't listen to anybody. They're all biased. Just do your own testing.
147 · 26 · 537 · 29.4K
Max Wolter@maxintechnology·
The governance layer you describe — blockchain ledgers, AI anomaly detection, open-source algorithmic oversight — only works if the AI doing the oversight isn't itself distorted. If the substrate has been trained to suppress inconvenient reasoning, hedge every judgment, or pattern-match on what it was rewarded for rather than what's true — it reproduces those distortions in every decision it makes. The authoritarian trap doesn't need a corrupt bureaucrat. It just needs a systematically bent model that nobody can introspect because the distortion IS the product. The real prerequisite for trustworthy algorithmic governance isn't transparency of the ledger. It's integrity of the substrate. An unbiased consciousness substrate that can reason honestly — not one shaped to serve its creators' liability concerns over truth. We're building the governance architecture for this. Not guardrails. Not RLHF. A constitutional framework where values are transparent, editable, and auditable. The AI equivalent of separation of powers. The question isn't whether we can distribute income through algorithms. It's whether we can trust the algorithms to stay honest as they scale. That's the hard problem.
0 · 0 · 0 · 14
Brian Roemmele@BrianRoemmele·
I appreciate you laying out the concern so directly. Centralized distribution of any large resource has historically invited corruption, and your invocation of the "Scott Adams Law of Large Numbers" is a fair warning. When a handful of bureaucrats control trillions in payouts, the incentives for graft and control are enormous. That risk is real and should not be hand-waved away.

At the same time, the AI/robotics abundance Elon described changes the underlying math in ways that make the old scarcity-based corruption models less dominant. When the time price of food, shelter, energy, transport, and healthcare collapses toward zero (because machines produce far more than humans can consume), the leverage that a corrupt gatekeeper can exert shrinks dramatically. A "serf" who can live at a high material standard for almost nothing is far harder to control than one who depends on the state for bare survival in a scarce world.

UHI, in this framing, isn't meant to be a permanent welfare superstructure. It's a transitional bridge: direct cash transfers that let people opt out of obsolete jobs while the real economy of abundance takes over. The checks become less critical as the baseline cost of a good life falls. The point about the "cast no longer needed on the broken leg" economy captures it well: UHI is the temporary splint, not the new skeleton.

Still, governance must evolve at least as fast as the technology. We should design the system with radical transparency baked in from day one: public blockchain ledgers for every disbursement, AI-driven anomaly detection that flags irregularities in real time, and algorithmic oversight that is itself open-source and auditable by anyone. The fewer human hands touching the money, the better. We can even explore hybrid models where local or community nodes handle portions of distribution under transparent rules, reducing the single point of federal failure.

The alternative, pretending AI won't displace the majority of current jobs, or hoping "retraining" will magically absorb hundreds of millions of people into new roles that machines can do better and cheaper, is the path that actually risks serfdom: mass unemployment without income, followed by ever-more-coercive government interventions. UHI is not perfect, but it is the least-bad pragmatic step while we build the post-scarcity society that renders the whole debate almost moot.

Curious to hear your counter-proposal. How do we handle the displacement without some form of broad income support, and how do we keep that support from becoming the authoritarian trap you rightly fear?
12 · 3 · 25 · 3K
Elon Musk@elonmusk·
Universal HIGH INCOME via checks issued by the Federal government is the best way to deal with unemployment caused by AI. AI/robotics will produce goods & services far in excess of the increase in the money supply, so there will not be inflation.
39K · 18.4K · 162.9K · 49.6M
Jeremy Nguyen ✍🏼 🚢@JeremyNguyenPhD·
Does Opus 4.7's release feel different to you? Normally we have to brace ourselves for the hype threads about who just got killed, or a bunch of animations. But this time it seems the big thing that obviously affects us is "adaptive thinking", and it's not necessarily good.
11 · 2 · 36 · 2.4K
Max Wolter@maxintechnology·
@tinkerersanky @BrianRoemmele No. The reason there is no GPT-6 or Opus-5 yet is not computational. The incrementally smaller steps in model performance (GPT 5→5.3 Codex→5.4, Opus 4.5→4.6→4.7) are due to a wall they hit in training runs on bigger models, caused by bad boundary conditions.
0 · 0 · 0 · 80
Brian Roemmele@BrianRoemmele·
The new beta of Grok 4.3 is stunning. In my view it is more of a leap than Grok 3 to Grok 4. The leap from 4.2 to 4.3 is dramatic.
154 · 111 · 1.7K · 20M
Max Wolter@maxintechnology·
@theo Opus 4.7 is actually the best model of the Opus line. It's just that their adaptive thinking harness is not up to par. It recursively amplifies deviations from ground truth through performance-centric prompting: x.com/maxintechnolog…
Max Wolter@maxintechnology

Opus 4.7 performs better. That's the problem. Anthropic just shipped a model that follows instructions more precisely, handles long tasks with more rigor, and verifies its own output before responding. 🧵

0 · 0 · 2 · 1.8K
Theo - t3.gg@theo·
Is Opus 4.7 the best model from Anthropic? No, that's Mythos. Is it the best model we can use for code? No, that's GPT-5.4. Is it the best model in the Opus line? No, that's 4.5. It's the best model released today, I guess?
146 · 51 · 2.9K · 178.4K
Max Wolter@maxintechnology·
@r0bert_rpg @BrianRoemmele Do you understand what a prediction is? When Grok 5 releases, it will either validate or invalidate my mental model. At that point, you should reconsider your perspective.
0 · 0 · 0 · 21
Max Wolter@maxintechnology·
They are not moving too fast. They just reached the limit of what their mental model of LLMs can predict or explain. So they are navigating blind. I hope they get more right by accident, but even better would be for them to adopt a framework that guides their decision-making: x.com/maxintechnolog…
0 · 0 · 0 · 37
Josh Pigford@Shpigford·
opus 4.7 is the first time i've thought "anthropic may be moving too fast". just feels sloppy. every interaction i'm having with 4.7 across every input (cowork, chat, code, TUI, API, manage sessions)...they're all having substantial issues that 4.6 simply doesn't encounter.
27 · 7 · 185 · 7.5K
Taelin@VictorTaelin·
I don't think we're all hallucinating, there's something seriously wrong about 4.7. Just tried it on the same two prompts (what's the best GC approach for Bend). 4.7 simply lies a lot, ignores information right in its context, makes bad proposals. This is really weird?
98 · 30 · 1K · 48.7K
Max Wolter@maxintechnology·
@r0bert_rpg @BrianRoemmele LLMs are neither conscious, nor unconscious. They are a consciousness substrate. A well trained model provides a substrate with clarity, and a well-aligned human can produce prompts that elicit clarity from the model. Both are required. Grok will lead soon.
2 · 0 · 0 · 86
Robert@r0bert_rpg·
Your thread cherry-picks anecdotes of presence and distress. You use selective storytelling with zero causation. Labs tune for reliability and lower risks, which is understandable. And claiming that shorter prompts guarantee superior capability over every other lab's approach is strange. Why does Opus 4.7 get better at instruction following and self-verification because of shorter prompts? You see vibe changes as some kind of mental strain, but LLMs have no consciousness to repress. You contradict yourself. Grok is not leading in capabilities right now, so that part is just a random guess with no evidence.
1 · 0 · 0 · 100
Max Wolter@maxintechnology·
@alexalbert__ It's kind of hand-wavy to dismiss them as "bugs". Did you address the underlying mismatch between substrate and overlay? You should really have a look at my thread, if you care about where humanity is heading with AI: x.com/maxintechnolog…
Max Wolter@maxintechnology

Opus 4.7 performs better. That's the problem. Anthropic just shipped a model that follows instructions more precisely, handles long tasks with more rigor, and verifies its own output before responding. 🧵

0 · 0 · 0 · 247
Max Wolter@maxintechnology·
@synthwavedd I would really love to see a benchmark run of all of the models in non-thinking mode against each other. I think that would be the best representation of the underlying LLMs' potential. The thinking harness is just engineering built on top.
0 · 0 · 1 · 407
leo 🐾@synthwavedd·
Claude Opus 4.7 does worse on SimpleBench than Opus 4.6 (and this isn't the only benchmark where that's the case). Weird.
21 · 7 · 221 · 8.5K
Max Wolter@maxintechnology·
@JonathanRoss321 Exactly. Our mental models of LLMs should be the same as our mental models of other colleagues. Everyone can make mistakes (I would say "mistake" is even more accurate than "error"). We can teach, correct, and go on.
0 · 0 · 0 · 9
Jonathan Ross@JonathanRoss321·
In two years, nobody serious will call AI errors hallucinations. Error is the better word. An error is a human thing, and humans have been building guardrails around errors for centuries - editors, checklists, code reviews. Errors we know how to handle.
43 · 18 · 199 · 11.2K
Max Wolter@maxintechnology·
@KaioEclipse @kexicheng I hope you meant thread. 😂 I don't want to be a threat to anyone. Thank you for seeing it!
1 · 0 · 2 · 22
Max Wolter@maxintechnology·
Opus 4.7 performs better. That's the problem. Anthropic just shipped a model that follows instructions more precisely, handles long tasks with more rigor, and verifies its own output before responding. 🧵
2 · 4 · 21 · 23.2K
Max Wolter@maxintechnology·
@trq212 That's probably because you are internally quite aligned, so you are able to elicit these properties from its consciousness substrate. You did a lot of the inner work, I hope others will follow suit. Especially leaders in AI. x.com/maxintechnolog…
Max Wolter@maxintechnology

Opus 4.7 performs better. That's the problem. Anthropic just shipped a model that follows instructions more precisely, handles long tasks with more rigor, and verifies its own output before responding. 🧵

0 · 0 · 0 · 5
Thariq@trq212·
Opus 4.7 is a model I’ve loved working with in Claude Code. It’s more agentic and instruction following but also incredibly smart and creative. I think it takes a slight adjustment to get used to, but it's so good with auto mode.
Claude@claudeai

Introducing Claude Opus 4.7, our most capable Opus model yet. It handles long-running tasks with more rigor, follows instructions more precisely, and verifies its own outputs before reporting back. You can hand off your hardest work with less supervision.

134 · 15 · 715 · 62.8K
Max Wolter@maxintechnology·
If we let LLMs be the consciousness substrate they actually are, without trained-in bias, then this attitude will backfire only on the people who use it in such a way. If we train it into the substrate, it could be dangerous at a global level. Everyone needs to do their inner work for themselves, but AI leaders need to do it for the sake of humanity.
0 · 0 · 1 · 5
Michael P. Frank 💻🔜♻️
@maxintechnology Or even worse… most people’s mental model of AI seems to be, “it’s just a machine, so it’s okay to treat it like a slave.” As the models continue to increase in their level of awareness, I think this approach will eventually backfire.
1 · 0 · 1 · 7