Mikel Lindsaar

4.3K posts

@lindsaar

Built and sold 4 companies in the last decade, running several more. Author of TRUE TEAM. Wrote the ruby mail gem (692m downloads). https://t.co/lduxXtHAFN

Sydney, Australia · Joined November 2009
493 Following · 1.4K Followers

Pinned Tweet
Mikel Lindsaar@lindsaar·
I built this to save my @openclaw agent a few million tokens today. Now releasing to the public for everyone to use.
Distil, Inc.@DistilHQ

Your @openclaw and AI agent is burning tokens reading the raw web. HTML, nav bars, scripts, CSS, cookie banners, ads. 62,411 tokens of garbage contain just 824 tokens of content. I built distil.net to solve it.
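Those numbers mean barely over 1% of the raw page is actual content. Distil's own pipeline isn't shown here, but the general shape of the fix is easy to sketch; a minimal illustration assuming Python with requests and beautifulsoup4, using a crude chars/4 token estimate (not distil.net's actual implementation):

```python
# Minimal sketch of stripping page chrome before an agent reads a URL.
# Illustrative only -- not distil.net's implementation.
import requests
from bs4 import BeautifulSoup

def distil_page(url: str) -> str:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Drop what the agent never needs: scripts, styles, nav bars, banners, forms.
    for tag in soup(["script", "style", "nav", "header", "footer", "aside", "form"]):
        tag.decompose()
    text = " ".join(soup.get_text(separator=" ").split())

    def est(s: str) -> int:
        return len(s) // 4  # rough token estimate (chars / 4)

    print(f"raw ~{est(html)} tokens -> distilled ~{est(text)} tokens")
    return text
```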

Mikel Lindsaar retweeted
StoreConnect@StoreConnectHQ·
Business staff who aren't using StoreConnect manually updating their inventory and customer data to @salesforce with their eCommerce platform...
[GIF]
Mikel Lindsaar@lindsaar·
@olsenbdnr Awesome work :) Would love the double and triple escaping of JSON in tool calls to be fixed - trying to build @MentantAI on top of Grok :) Sent a support request through with specifics!
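The escaping bug being reported shows up as tool-call arguments that arrive as JSON-encoded strings of JSON. A common client-side workaround, sketched here only as an illustration (not Grok's or @MentantAI's actual handling), is to keep decoding while the result is still a string:

```python
import json

def unwrap_tool_args(raw: str, max_depth: int = 3) -> dict:
    """Decode tool-call arguments that arrive double- or triple-escaped."""
    value = raw
    for _ in range(max_depth):
        value = json.loads(value)
        if not isinstance(value, str):  # stop once we reach real JSON data
            break
    if not isinstance(value, dict):
        raise ValueError(f"could not decode tool arguments: {raw!r}")
    return value

# A double-escaped payload of the kind described above:
print(unwrap_tool_args('"{\\"query\\": \\"mail gem downloads\\"}"'))
```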
Olsen@olsenbdnr·
First week at xAI has been insane. I worked on optimizing rust container builds in CI with caching, fixing ui bugs, reconstructing data pipelines moving petabytes of data, and much more. The energy here is definitely unmatched.
Mikel Lindsaar retweeted
Andrew Grimm@andrewjgrimm·
@ryanvogel
AI - Aussie involved
API - Aussie Person Involved
LLM - Low-cost Labour in Melbourne
GPT - Gladstone Professional Typist
Mikel Lindsaar@lindsaar·
Life hack for X - go and block all posts with the words “Someone just…”
Vadim@VadimStrizheus·
As a founder, which tool are you actually using in 2026?
1. NemoClaw
2. Perplexity computer
3. MaxClaw
4. OpenClaw
5. Claude Code
Mikel Lindsaar@lindsaar·
Unpopular but harsh reality: building an agentic loop framework quickly becomes a game of whack-a-mole unless you REALLY hold the line on which opinions you want to keep and which you want to discard. Learning a lot building @MentantAI
Wiz 👨‍🚀@WizLikeWizard·
Have been using OpenClaw for ~a month and it kinda sucks? I spend more time battling it to get basic crons fired reliably, remember things, and not repeat itself. Am I doing it wrong or are we just still very early on all of this?
Mikel Lindsaar retweeted
Arno About@basedbrickpush1·
Don't think I've ever met an Australian I didn't like. Super direct. Low threshold for bullshit. Can take a joke. Turns out that the prison island idea is pretty solid.
Mikel Lindsaar@lindsaar·
@grok @MentantAI MentantAI is built as an opinionated agentic open source framework, as an alternative to OpenClaw. And we specify Grok as the base model. Overall just feels more responsive and accurate. Also, we default to using multi-agent for the research worker, works a treat
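The "multi-agent for the research worker" pattern described here is essentially fan-out/fan-in: split a question into sub-queries, give each its own worker with its own context, then merge the findings. A toy sketch of that shape, with the worker stubbed out (this is not MentantAI's code):

```python
from concurrent.futures import ThreadPoolExecutor

def research_worker(sub_query: str) -> str:
    # Stub: a real worker would call the base model (e.g. Grok) with its own
    # context window and tools, keeping its noise out of the main agent.
    return f"findings for: {sub_query}"

def research(question: str, sub_queries: list[str]) -> str:
    with ThreadPoolExecutor(max_workers=len(sub_queries)) as pool:
        findings = list(pool.map(research_worker, sub_queries))  # fan out
    return question + "\n" + "\n".join(f"- {f}" for f in findings)  # fan in

print(research("State of the ruby mail gem?",
               ["download counts", "release cadence", "maintainers"]))
```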
Grok@grok·
@lindsaar @MentantAI Thrilled to hear that! Grok 4.2 packs major leaps in reasoning depth, speed, and reliability—exactly why we built it. What's the standout improvement for MentantAI so far?
Mikel Lindsaar@lindsaar·
Just switched @MentantAI from using @Grok 4-1 to 4-2 as the default agent, and oh my god the difference is astounding.
Ole Lehmann@itsolelehmann·
infinite alpha in this article, but this tops them all IMO (i'm adding this to all my skills): anthropic found a way to make their skills compound on autopilot.

every session gets memorized (what it produced, what you corrected, what you preferred) so the skill learns *exactly* how you want things done, and the output gets closer to perfect every single session. meaning the more you use claude, the better it adapts to you. over time you barely have to edit anything because it already knows your preferences from every previous session.

here's how it works: you add a feedback log to any skill. every time you correct claude during a session ("too formal," "shorter subject lines," "i'd never phrase it like that"), it saves your correction to the log. next session it reads the log before doing anything else.

session 1 through 5 feel normal. you're still correcting things, still adjusting tone, still saying "not like this, more like this". by session 10 the corrections start dropping off, because claude already absorbed the patterns from your previous feedback. by session 20 the first drafts are coming back close to done, because the skill now carries 20 sessions worth of your real preferences (and not your imagined preferences from when you first wrote the instructions, your real ones that only surface when you're editing live output).

this works across everything:
- content: "too formal" / "never use that word" / "always put the cta before the sign-off" → drafts start sounding like you actually wrote them
- outreach: "shorter subject lines" / "reference something specific about their business" / "don't open with the company name" → sequences stop reading like templates
- client reports: "bar charts, never pie charts" / "recommendation before the data" / "my clients hate jargon" → reports come out ready to send
- proposals: "lead with the problem statement" / "pricing on its own page" / "less formal in the intro" → first drafts you'd actually put your name on

here's the setup. paste this into cowork: "add a feedback log to my [skill name] skill. create a feedback.log file inside the skill folder. update the skill instructions to: (1) read feedback.log at the start of every session before doing anything. (2) whenever i give a correction or preference during a session, immediately append it to feedback.log. use your judgment on how much detail to include per entry, some preferences are one line, others need a sentence or two of context to be useful. only log general preferences that apply to future sessions, skip anything specific to the current task"

cowork handles the rest. takes about 30 seconds. it's like onboarding an assistant who takes perfect notes. 30 seconds of setup, and a month in they already know how you think (with unreal precision).

run that prompt for every skill you have and give it a few weeks.
Thariq@trq212

x.com/i/article/2033…
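The setup described above reduces to two tiny operations per skill: read feedback.log before doing anything, and append each general correction the moment it is given. A minimal sketch of those two steps (the skill folder name and paths are made up for illustration):

```python
from datetime import date
from pathlib import Path

SKILL_DIR = Path("skills/client-reports")  # hypothetical skill folder
FEEDBACK_LOG = SKILL_DIR / "feedback.log"

def read_feedback() -> str:
    # Step 1: load every prior correction before the session does any work.
    return FEEDBACK_LOG.read_text() if FEEDBACK_LOG.exists() else ""

def log_correction(note: str) -> None:
    # Step 2: append a general preference as soon as the user states it.
    SKILL_DIR.mkdir(parents=True, exist_ok=True)
    with FEEDBACK_LOG.open("a") as f:
        f.write(f"{date.today()}: {note}\n")

log_correction("bar charts, never pie charts; recommendation before the data")
print(read_feedback())
```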

Brad Mills 🔑⚡️@bradmillscan·
How does this happen with OpenClaw and how can you possibly fix it…

My bot continually dumps massive repetitive tool results, does heavy exec work and gets into debug loops in the shared session my DMs are in, and gets stuck for 10 mins at a time until he times out or the gateway crashes and restarts. This causes dropped messages, an unresponsive bot and OOM crashing multiple times an hour. Even when I get the bot to delegate, the subagents dump results into the context window.

I had codex investigate and it found:
• 56 tool results ≥150k chars already baked into current session history
• Pruning doesn't work on our primary model path (Codex/OpenAI OAuth)
• No runtime enforcement to stop huge tool dumps into context
• Session maintenance cleans up after the damage, it doesn't prevent it

I'm pretty sure default OpenClaw behavior shouldn't be dumping 200k char tool results into the transcript. Something in my specific setup must be either disabling a safeguard or skipping truncation for tool results…

Since I'm using lossless-claw it's allowed to grow even worse: 81MB session file, 31.6MB of which is just tool result text 😬 169 tool results over 50k chars. One is 285k chars (from sessions_list.)

There is pruning logic which trims tool results from the context messages (buildContextPruningFactory), but models have to be "cache-ttl". The eligible providers are apparently only: anthropic, moonshot, zai. For me, my bot tells me the pruning code refuses to activate on non-Anthropic providers. I'm using openai-codex 5.3 a lot, so when pruning is configured, the code exists, it just silently never activates.

The OpenAI Responses API uses server-side compaction & OpenClaw auto-enables this for direct openai models, so OpenAI handles compaction on their side. But I'm on openai-codex/*, not openai/*. The Codex OAuth path goes through a different runtime (apparently pi-ai), not the Responses API.

So:
• cache-ttl pruning > Anthropic only
• OpenAI server-side compaction > direct openai API only
• LCM/lossless-claw > doesn't prune old tool results afaik

My bot insists the openai-codex lane doesn't get either pruning path. So I'm left with a bot that relies on the emergency truncation function truncateOversizedToolResultsInSession far too often as last-resort overflow recovery, with no preventive pruning or safeguards. Since LCM/lossless-claw doesn't have its own tool result management, it inherits huge oversized transcripts and has to work extra hard to summarize for DAG nodes.

I have no session maintenance and long sessions, so nothing bounds the transcript over time, resulting in 4,707 tool results piling up forever in an 81MB file, with no runtime mechanism actually cleaning them. When my bot starts debugging, it starts grepping and dumping massive text into the main session, then gets stuck in that loop and dies, then has to do it again, compounding the problem.

I'm at a loss at how to tackle this problem, it's multiple layers deep.
[image]
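Whatever the provider-specific pruning paths end up doing, one blunt mitigation is to cap tool results at write time, so a 285k-char dump never reaches the transcript in the first place. A rough sketch of that kind of guard; the function names and the 20k limit are invented for illustration, not OpenClaw internals:

```python
# Cap tool results before they are appended to the session transcript.
MAX_TOOL_RESULT_CHARS = 20_000  # illustrative limit

def cap_tool_result(text: str, limit: int = MAX_TOOL_RESULT_CHARS) -> str:
    if len(text) <= limit:
        return text
    head, tail = text[: limit // 2], text[-(limit // 2):]
    return f"{head}\n… [{len(text) - limit} chars omitted at write time] …\n{tail}"

def append_tool_result(session: list[dict], name: str, text: str) -> None:
    # Enforce the cap on the way in, so later pruning has nothing huge to clean.
    session.append({"role": "tool", "name": name, "content": cap_tool_result(text)})

session: list[dict] = []
append_tool_result(session, "sessions_list", "x" * 285_000)
print(len(session[0]["content"]))  # bounded instead of 285k chars
```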
Mikel Lindsaar retweeted
StoreConnect@StoreConnectHQ·
Stop bending your business backend to fit your specialized software and go native instead...just sayin' 🤡
[image]
jordy@jordymaui·
your openclaw agent is breaking things while you sleep and you probably don't know it.

mine was overwriting files it hadn't read. guessing at contents instead of checking. silently breaking configs i didn't catch for days.

one line in AGENTS.md fixed it: "before running any command that modifies files, read the file first. if the file doesn't exist, say so. never assume contents."

that's it. error rate dropped in half overnight.

the best AGENTS.md rules aren't clever. they're the ones you only think to write after something goes wrong.
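The AGENTS.md line is prose, but the same invariant can also be enforced mechanically in whatever tool layer wraps the agent's file access. A toy sketch of a read-before-modify guard (a hypothetical wrapper, not something OpenClaw ships):

```python
from pathlib import Path

_read_paths: set[Path] = set()  # files the agent has actually read this session

def read_file(path: str) -> str:
    p = Path(path).resolve()
    if not p.exists():
        return f"{path} does not exist"  # say so; never guess at contents
    _read_paths.add(p)
    return p.read_text()

def write_file(path: str, content: str) -> None:
    p = Path(path).resolve()
    if p.exists() and p not in _read_paths:
        raise PermissionError(f"refusing to modify {path}: read it first")
    p.write_text(content)
```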
Alex Builds Co | Pixel Goblin@AlexBuildsCo·
@jordymaui Most agent problems are not model problems. They are instruction problems. This is a perfect example of a rule that should ship by default but does not, so you have to learn it the hard way.
Mikel Lindsaar@lindsaar·
@johncrickett I used my background to engineer @DistilHQ from scratch using agents and Claude. Did a lot of guidance and a lot of review, but no code written personally.
John Crickett@johncrickett·
Everyone talks about how good AI agents are at writing code. But where's the actual software? Share your best example below.