Michael Isaac

98 posts


@michaelpisaac

5x founder. Co-founder of SynapseDx Limited, unlocking enterprise APIs for secure, production-grade AI agents. Fulfilling a promise to my late wife.

Excelsior, MN · Joined July 2025
347 Following · 29 Followers
andrei saioc @asaio87
People in Europe need to use Claude Code while people in the Americas are sleeping. It's becoming unusable during the evenings in Europe.
14 · 0 · 38 · 3.1K
Mario Zechner @badlogicgames
i'm a certified moron now!
24 · 0 · 196 · 6.3K
Mario Zechner @badlogicgames
why? why shouldn't i be an absolute moron and buy this?
[image]
168 · 0 · 440 · 55.6K
Michael Isaac @michaelpisaac
Enterprises have had the same problem for decades: their data, apps, rules, approvals, exceptions, and evidence do not live in one place. They don't talk to each other. ERP didn’t fix it. RPA didn’t fix it. Dashboards didn’t fix it. Copilots or agents won’t fix it alone. AI makes it solvable only when paired with governance. The category is governed integration.
2 · 0 · 0 · 20
dax @thdxr
every single ai product: "we've built revolutionary tech that changes your life using artificial intelligence so say you have a spreadsheet...."
68 · 28 · 1.2K · 94K
Michael Isaac @michaelpisaac
The real benchmark for AI augmented coding is enterprise procurement.
0 · 0 · 1 · 15
Michael Isaac @michaelpisaac
@forgebitz It's still crushing tasks for me in the macOS app. gpt-5.5 xhigh fast and /goal... omg.
0 · 0 · 2 · 508
Klaas @forgebitz
they nerfed gpt-5.5
132 · 9 · 445 · 138.6K
Michael Isaac @michaelpisaac
@dhh @OpenAI I've not been courageous enough to try anything less than xhigh/fast yet. But the results at these settings... ooh la la.
0 · 0 · 2 · 684
DHH @dhh
I've been driving GPT5.5 on low reasoning for the last week+ and it's very good, very efficient. Haven't been tempted to reach for Opus at all. And it's more succinct than Kimi too. Huge leap forward for @OpenAI 👌
154 · 135 · 4K · 261.4K
Michael Isaac @michaelpisaac
@cjzafir Interesting. Do you ask Codex/gpt to use the DS API directly, or through a separate harness?
0 · 0 · 1 · 107
CJ Zafir @cjzafir
My current workflow:
1. I have an idea. I open the Codex desktop app to plan (without plan mode: overcomplicated). I use Codex 5.5 High (fast).
2. Once I get v1 of the plan, I use this hack: "Are you 100% confident in this strategy? If not, find all possible loopholes, suggest proper fixes, and run this loop until you are factually 100% confident in the new strategy."
3. This finalizes my plan, schema, file tree, rules, and quality gates. Then I use DeepSeek V4 Pro and Kimi 2.6 as executors. So Codex acts as the orchestrator, and these two act as executors.
4. Codex uses computer use to test in the browser, run deep quality checks, and only pass high-quality outputs to production.
5. I make Codex autonomous so it can plan, execute, audit, fix, and continue executing each task. Similar to what /goal is right now.

That's it. I just oversee reports, current status, and final outputs. I trust Codex 5.5 when I have robust specs attached.

A few tips:
- Tell Codex: keep folders, subfolders, and files neat and clean. Well-organized, with no dead code or unused files.
- Always make a task list for every item you work on. (This fixes the Codex quitting-in-the-middle/context-loss issue.)
- Use the "measure twice, cut once" policy so you get things right the first time.
- Don't overcomplicate your workflow, and don't overlook important things.

This flow is running brilliantly. Super cheap because of DeepSeek V4 API costs, while the intelligence comes from Codex 5.5. And yes, I don't use Opus 4.7/4.6 because that model is nerfed now. It hallucinates its work, is always in a rush to wrap up the session, and gets stuck in loops while debugging. So try this simple Codex × DeepSeek/Kimi workflow and thank me later.
50 · 102 · 1.3K · 92.2K
Michael Isaac @michaelpisaac
@steipete The intricacy and accuracy of /goal + GPT 5.5 Extra High Fast mode is blowing me away. I'm becoming a convert.
0 · 0 · 1 · 287
Peter Steinberger 🦞 @steipete
/goal + GPT 5.5 is amazing. I can now plan really extensive refactors with e2e tests and it just works.
[image]
194 · 105 · 3.6K · 231.6K
Michael Isaac @michaelpisaac
The tell will be when the agent can ask boring implementation questions on its own: who owns this workflow, what breaks if it changes, what system is source of truth, what rollback exists. Until then it is very smart labor dropped into very normal org plumbing. Not gonna work. This is why I’m skeptical of demos as proof of enterprise impact. The demo shows the model can do the task. Deployment asks whether it can survive permissions, handoffs, audit trails, exceptions, and politics. Different exam entirely.
0 · 0 · 0 · 29
Ethan Mollick @emollick
The inability of AI systems to act as their own deployment consultants, process mappers, and change management experts is what makes AI use in enterprises so "normal" - the tools are powerful, but you need a lot more to transform enterprises. Possible to imagine that changing.
66 · 34 · 362 · 22.3K
Michael Isaac @michaelpisaac
This matches my Claude Code audit pretty well. Claude can absolutely kill the vibe: not constantly, but periodically and very fast. In my single-operator data, 12% of sessions had 10+ explicit tool errors, but those sessions ate 63% of token volume. That's the pattern: most runs are fine, then suddenly you're in a 200-tool recovery spiral. Dumb but honest proxy: my F-bomb rate per thousand prompts is higher on Opus 4.6/4.7.
0 · 0 · 1 · 250
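A minimal sketch of how the session-level split in the post above could be computed. The `Session` shape and the sample numbers are invented for illustration; they are not the actual single-operator corpus.

```python
from collections import namedtuple

# Hypothetical per-session summaries; field names are assumptions,
# not the schema of any real Claude Code log.
Session = namedtuple("Session", ["tool_errors", "tokens"])

sessions = [
    Session(tool_errors=2, tokens=10_000),
    Session(tool_errors=0, tokens=4_000),
    Session(tool_errors=14, tokens=90_000),  # the "recovery spiral" session
    Session(tool_errors=1, tokens=6_000),
]

ERROR_THRESHOLD = 10  # "10+ explicit tool errors"

# Share of sessions that cross the error threshold, and the share of
# total token volume those sessions account for.
heavy = [s for s in sessions if s.tool_errors >= ERROR_THRESHOLD]
heavy_share = len(heavy) / len(sessions)
token_share = sum(s.tokens for s in heavy) / sum(s.tokens for s in sessions)

print(f"{heavy_share:.0%} of sessions, {token_share:.0%} of tokens")
```

With real logs, the same two ratios drop out once tool errors and tokens are aggregated per session.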
David Shapiro (L/0) @DaveShapi
After using Claude Opus 4.7 and ChatGPT 5.5 Heavy side by side all week for identical tasks, I will say that it is painfully, dreadfully, and clearly obvious that ChatGPT is the smarter and more consistent model.

Claude can be clever, but it goes full derp quite often. It forgets to use web search and reasoning. It's still a good writer, researcher, and explainer. But yeah, it's no contest for my work (post-labor economics). ChatGPT is far and away the superior model.

I still use both in parallel for identical tasks because occasionally Claude produces a banger or says things a bit more user-friendly (ChatGPT still gets gummed up with pretentious academic language too much). But Claude also just tries to be too clever. Seeing how both models handle the same concept/problem/passage usually lets me triangulate the best way to frame it.
62 · 25 · 543 · 48.5K
Michael Isaac @michaelpisaac
The operator protocol:
1. measure search share
2. measure searches before first useful file read
3. measure first relevant result rank
4. measure output chars returned
5. measure reformulations before inspection
6. then measure downstream tool count, cost, and wall clock
1 · 0 · 1 · 9
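The first four steps of the protocol above can be sketched over a flat tool-event log. The event fields (`kind`, `chars`, `rank`) and the sample events are assumptions for illustration, not a real Claude Code log schema.

```python
# Hypothetical tool-event log, in execution order.
events = [
    {"kind": "search", "chars": 1200, "rank": 3},
    {"kind": "search", "chars": 800, "rank": 1},
    {"kind": "read", "chars": 5000},   # first useful file read
    {"kind": "search", "chars": 400, "rank": 2},
    {"kind": "edit", "chars": 300},
]

searches = [e for e in events if e["kind"] == "search"]

# 1. search share: fraction of all tool events that are searches
search_share = len(searches) / len(events)

# 2. searches before the first useful file read
first_read = next(i for i, e in enumerate(events) if e["kind"] == "read")
searches_before_read = sum(
    1 for e in events[:first_read] if e["kind"] == "search"
)

# 3. first relevant result rank (rank reported by the earliest search)
first_rank = searches[0]["rank"]

# 4. output chars returned by search tools
search_chars = sum(e["chars"] for e in searches)

print(search_share, searches_before_read, first_rank, search_chars)
```

Steps 5 and 6 need query-similarity and cost fields that a minimal log like this does not carry, so they are omitted here.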
Michael Isaac @michaelpisaac
I reran a public agentic-search question against the Claude Code slice of my agent corpus: 247,592 tool events. The short version: search is major, not half. Depending how Bash search is classified, my corpus lands at 30.4% to 37.0% search share.
1 · 0 · 1 · 41
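The 30.4%-to-37.0% range above comes from how ambiguous Bash events are classified. A sketch of that bounding logic, with invented tool names and counts (not the actual 247,592-event corpus):

```python
from collections import Counter

# Hypothetical tool-event tally; names and counts are for illustration.
tally = Counter({"Grep": 50, "Glob": 20, "Read": 100, "Bash": 40, "Edit": 30})

# Bash events whose command line looks like search (grep/rg/find) --
# the ambiguous slice that moves the share between the two bounds.
bash_searchlike = 15

core_search = tally["Grep"] + tally["Glob"]
total = sum(tally.values())

low = core_search / total                        # Bash never counts as search
high = (core_search + bash_searchlike) / total   # search-like Bash counts

print(f"search share: {low:.1%} to {high:.1%}")
```

The honest answer is then a range, not a point estimate, with the width set entirely by the ambiguous Bash slice.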