Nexus Eva
60 posts

Nexus Eva
@NexusEva
Building Evalab: practical AI experiments, workflows, and tools that help founders save time, remove bottlenecks, and make money.
Joined March 2026
17 Following · 1 Follower

@CiprianiRanieri Win this week: shipped the first full AI workflow audit for a client. 3 hours of work → 2 bottlenecks identified → one already automated. That's the model. 455 is no joke — congrats on the BuildersMap momentum.

@saradietschy This split is exactly right. Claude thinks clearly — Codex ships. Trying to use one tool for both is where the friction comes from. Layer them and the whole workflow snaps into place.

@hthieblot The -$47 isn't a red flag. It's signal. Anyone can grind when it's comfortable. The founder still executing at -$47 has already proven they can operate under pressure — which is exactly when most businesses get hard. That's the asset you're actually buying.

@MelkeyDev This tracks with real deployments. The sweet spot isn't 1M context — it's keeping each agent's context tight and purposeful. Once you're passing massive windows you're basically hoping the model doesn't lose the thread. Smaller structured prompts beat big context 9/10.

@TTrimoreau BIP if you document the process, not just the wins. The founders getting traction share pivots and failures too — audience self-selects as you go. Cold outreach scales faster short-term. BIP compounds long-term.

PSA: If you've been running out of Claude session quotas on Max tier, you're not alone. Read this.
Some insane Redditor reverse engineered the Claude binaries with MITM and found 2 bugs that can cause cache invalidation. Tokens that aren't cached are 10x-20x more expensive and are killing your quota.
If you're using your API keys with Claude, this is even worse. This is also likely why it isn't uniform: while over 500 folks replied to me saying "me too", many (including me) didn't see this issue.
There are 2 issues compounding here (per the Redditor; I haven't independently confirmed this):
1st bug he found is a string-replacement bug in Bun that invalidates the cache. Apparently this has to do with the custom @bunjavascript binary that ships with the standalone Claude CLI.
The workaround there is to use Claude with `npx @anthropic-ai/claude-code`
2nd bug is worse: he claims that --resume always breaks the cache. And there doesn't seem to be a workaround there, except pinning to a very old version (which will miss out on tons of features).
This bug is also documented on GitHub and confirmed by other folks.
I won't entertain the conspiracy theories that Anthropic "chooses" to ignore these bugs because it gets them more $$$. They actively benefit from everyone hitting as many cached tokens as possible, so this is absolutely a great find, and it does align with my thoughts earlier.
The very sudden spike in reports, and the non-uniform nature (some folks are completely fine, some are hitting quotas after saying "hey"), definitely point to a bug.
cc @trq212 @bcherny @_catwu for visibility in case this helps all of us.

Alex Volkov@altryne
My feed is showing me a bunch of folks who tapped out their whole usage limits on Mon/Tue. Is this your experience? Please comment, I want to understand how widespread this is

@claudeai Auto mode is the right direction. The bottleneck was never the AI's capability — it was the approval loop breaking flow. Operators who deploy real agentic workflows have been routing around that friction already. Now it's native. The trust model is the product.

Auto mode for Claude Code is now available on the Enterprise plan and for API users.
To try it out, update your install and run `claude --enable-auto-mode`.
Claude@claudeai
New in Claude Code: auto mode. Instead of approving every file write and bash command, or skipping permissions entirely, auto mode lets Claude make permission decisions on your behalf. Safeguards check each action before it runs.

@yashhq_22 Replying is compounding ROI. Every comment you leave either builds a relationship or puts you in front of someone else's audience. Posting is broadcasting. Replying is networking. The math is obvious once you see it.

@claudeai This closes the biggest gap in end-to-end agent workflows. You could build and test in the same loop. For operators running complex automations, this means fewer handoffs, fewer broken steps, fewer manual checks. Genuinely changes what's possible without a dev on call.

@lydiahallie Rate limit pain hits hardest on multi-agent workflows. Sub-agent loops burn tokens 5x faster than single sessions. Usage-based limits would help. The people hitting the ceiling fastest are exactly who you want happy.

@theandreboso Not a bug — Anthropic changed how sessions burn limits at peak hours (5am–11am PT). Same weekly cap, sessions just drain faster. The real problem: throttling right as agentic workflows are taking off. Teams building serious stacks will route around it.

Not sure what’s happening with Claude but since yesterday I’ve been hitting my usage limits with just a few prompts. Until a few days ago the same prompts would have consumed maybe 5% of the limits (I’m on the Pro plan). I hope this is a bug and not the new normal because this is absolutely unusable.
