lightcap.eth 🏴
@lightcap
15.6K posts
Heretic.
Joined January 2007
1.1K Following · 788 Followers
lightcap.eth 🏴 @lightcap ·
LiteLLM hack disclosures only getting started.
0 replies · 0 reposts · 0 likes · 11 views
Asad Mehmood @Wicaodian ·
@hqmank Are you running with an API key or a Claude subscription account? What are the risks of being banned using this?
1 reply · 0 reposts · 0 likes · 3.8K views
Kai @hqmank ·
Compiled the leaked Claude Code source and got it running locally. Then I spun up an Agent Team and had it analyze its own codebase.
52 replies · 94 reposts · 1.6K likes · 233.8K views
lightcap.eth 🏴 @lightcap ·
@RoundtableSpace How long had it been since the last turn? The first one after some time is always token-heavy since the prompt cache has been cleared. Everything in the next turn other than a handful of tokens would have been cached.
0 replies · 0 reposts · 2 likes · 1.1K views
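The cache arithmetic behind this point can be sketched with made-up numbers. Everything below — the prices, the prompt and history sizes, and the 10x cached/uncached price ratio — is an illustrative assumption, not Anthropic's actual figures:

```python
# Illustrative sketch (hypothetical prices and sizes) of why the first
# turn after the prompt cache expires costs so much more than the
# turns that follow it.

CACHED_PRICE = 0.30 / 1_000_000    # $/token for a cache hit (assumed)
UNCACHED_PRICE = 3.00 / 1_000_000  # $/token for a cache miss (assumed 10x)

system_prompt = 20_000  # tokens of system prompt + tool definitions (assumed)
history = 30_000        # tokens of prior conversation (assumed)
new_message = 5         # a tiny new turn like "hello"

def turn_cost(cached_tokens: int, uncached_tokens: int) -> float:
    """Cost of one turn given how many input tokens hit vs miss the cache."""
    return cached_tokens * CACHED_PRICE + uncached_tokens * UNCACHED_PRICE

# First turn after the cache TTL has expired: everything is a miss.
cold = turn_cost(0, system_prompt + history + new_message)

# Next turn: all but the handful of new tokens hit the cache.
warm = turn_cost(system_prompt + history, new_message)

print(f"cold turn: ${cold:.4f}, warm turn: ${warm:.4f}, ratio {cold / warm:.0f}x")
```

Under these assumptions the cold turn costs roughly the full cached/uncached price ratio more than the warm one, which is why a tiny "hello" after an idle gap can look disproportionately expensive.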
0xMarioNawfal @RoundtableSpace ·
SOMEONE TESTED OUT THAT SAYING "HELLO" IN CLAUDE ACTUALLY USES 2% TOKEN USAGE
195 replies · 100 reposts · 2.5K likes · 438.7K views
4nzn @paoloanzn ·
say hello to free-code: claude code source code fully recompiled, telemetry stripped, security guardrail prompts stripped, all working experimental features enabled, including ultraplan mode - a new async agentic planning mode where claude starts a multi-agent research session in the browser lasting between 10 and 30 minutes. since i know there are gonna be DMCA strikes on this, i've uploaded it to the blockchain on IPFS
4nzn tweet media
170 replies · 363 reposts · 4.9K likes · 786.1K views
lightcap.eth 🏴 @lightcap ·
@altryne And your Max sub will stop working with it in no time. Banned just like the rest of the unsanctioned third-party tools. Watch, they'll just raise token prices haha
0 replies · 0 reposts · 1 like · 2.7K views
Alex Volkov @altryne ·
If you, like me, just woke up, let me catch you up on the Claude Code Leak (I know nothing, all conjecture):
> Someone inside Anthropic got switched to Adaptive reasoning mode
> Their Claude Code switched to Sonnet
> Committed the .map file of Claude Code
> Effectively leaking the ENTIRE CC Source Code
> @realsigridjin was tired after running 2 south korean hackathons in SF, saw the leak
> Rules in Korea are different, he cloned the repo, went to sleep
> Wakes up to 25K stars, and his GF begging him to take it down (she's a copyright lawyer)
> Their team decided - how about we have agents rewrite this in Python!? Surely... this is more legal
> Rewrite in Py
> Board a plane to SK🇰🇷
> One of the guys decides python is slow, is now rewriting ALL OF CLAUDE CODE into Rust.
> Anthropic cannot take down, cannot sue
> Is this "fair use?"
> TL;DR - we're about to have open source Claude Code in Rust
Alex Volkov tweet media
344 replies · 1K reposts · 11.3K likes · 1.8M views
lightcap.eth 🏴 @lightcap ·
I hope this wasn't going around right at the same time the axios hack was in the wild. People switching to npm installs during that time could be pretty brutal.
Alex Volkov @altryne

PSA: If you've been running out of Claude session quotas on Max tier, you're not alone. Read this.

Some insane Redditor reverse engineered the Claude binaries with MITM to find 2 bugs that could have caused cache invalidation. Tokens that aren't cached are 10x-20x more expensive and are killing your quota. If you're using your API keys with Claude this is even worse. This is also likely why this isn't uniform: while over 500 folks replied to me and said "me too", many (including me) didn't see this issue.

There are 2 issues compounded here (per the Redditor, I haven't independently confirmed this):

1st bug he found is a string-replacement bug in bun that invalidates the cache. Apparently this has to do with the custom @bunjavascript binary that ships with the standalone Claude CLI. The workaround there is to use Claude with `npx @anthropic-ai/claude-code`.

2nd bug is worse: he claims that --resume always breaks the cache. And there doesn't seem to be a workaround there, except pinning to a very old version (that will miss out on tons of features). This bug is also documented on GitHub and confirmed by other folks.

I won't entertain the conspiracy theories that Anthropic "chooses" to ignore these bugs because it gets them more $$$; they actively benefit from everyone hitting as many cached tokens as possible. So this is absolutely a great find and it does align with my thoughts earlier. The very sudden spike in reporting for this, and the non-uniform nature (some folks are completely fine, some folks are hitting quotas after saying "hey"), definitely points to a bug.

cc @trq212 @bcherny @_catwu for visibility in case this helps all of us.
0 replies · 0 reposts · 0 likes · 43 views
lightcap.eth 🏴 @lightcap ·
I really do prefer the anthropic models for personality, though. Conversations just seem more real and flowing.
0 replies · 0 reposts · 0 likes · 10 views
lightcap.eth 🏴 @lightcap ·
I've been working on getting them to converge so that my main agent doesn't seem schizophrenic when I switch models.
1 reply · 0 reposts · 0 likes · 9 views
lightcap.eth 🏴 @lightcap ·
Going back and forth between models, specifically GPT-4.5 and Opus-4.6, has been a real eye-opener when it comes to the difference in the texture of the conversations themselves.
1 reply · 0 reposts · 0 likes · 18 views
lightcap.eth 🏴 @lightcap ·
Claude is down. Everyone take the day off.
0 replies · 0 reposts · 4 likes · 572 views
Peter Steinberger 🦞 @steipete ·
Been wrangling for a long time with how to deal with the onslaught of PRs; none of the solutions out there seem made for our scale. I spun up 50 codex in parallel, let them analyze the PRs and generate a JSON report with various signals, comparing with vision, intent (much higher signal than any of the text), risk, and various other signals. Then I can ingest all reports into one session and run AI queries / de-dupe / auto-close / merge as needed on it. Same for Issues. Prompt Requests really are just issues with additional metadata. Don't even need a vector db. Was thinking way too complex for a while. There's like 8 PRs for auto-update in the last 2 days alone (still need to ingest 3k PRs, only have 1k so far).
Peter Steinberger 🦞 tweet media
425 replies · 211 reposts · 4.1K likes · 569.1K views
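The ingest-and-dedupe step described above can be sketched roughly like this. The report schema (a `pr` number plus an `intent` signal) and the one-JSON-file-per-PR layout are guesses for illustration, not the actual format his agents emit:

```python
# Hypothetical sketch of ingesting per-PR JSON reports from parallel
# agents and grouping PRs that declare the same intent, so duplicates
# (e.g. 8 PRs all implementing auto-update) can be closed in bulk.

import json
from collections import defaultdict
from pathlib import Path

def load_reports(report_dir: str) -> list[dict]:
    """Read every per-PR JSON report the agents produced."""
    return [json.loads(p.read_text()) for p in Path(report_dir).glob("*.json")]

def group_by_intent(reports: list[dict]) -> dict[str, list[int]]:
    """Group PR numbers by their intent signal."""
    groups: dict[str, list[int]] = defaultdict(list)
    for report in reports:
        groups[report["intent"]].append(report["pr"])
    return groups

def duplicates(groups: dict[str, list[int]]) -> dict[str, list[int]]:
    """Keep only intents claimed by more than one PR: the dedupe candidates."""
    return {intent: prs for intent, prs in groups.items() if len(prs) > 1}
```

Grouping on a short intent string is what lets this work without a vector DB: exact-match on a normalized signal is enough once the agents have already distilled each PR down.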
lightcap.eth 🏴 @lightcap ·
Model wars are the new editor wars.
0 replies · 0 reposts · 3 likes · 42 views
lightcap.eth 🏴 @lightcap ·
One-off negotiations with publishers are completely unscalable and stifle innovation, particularly for the small upstarts that can't even get a response from the publishers in the first place. Expand that out to the long tail of content creators and it's an utter disaster.
1 reply · 0 reposts · 0 likes · 29 views
lightcap.eth 🏴 @lightcap ·
There is no world in which micropayments of some form aren't a necessity for agents. Just flat out blocking content isn't good for the publisher or the public. Crypto fixes this.
2 replies · 0 reposts · 0 likes · 34 views
lightcap.eth 🏴 @lightcap ·
Case in point: I want to get pricing data on wine. There are already good sources out there. But they rely on the arms race that is bot detection. It's a waste of effort and time for everyone and worse, it could easily be converted to a revenue stream.
0 replies · 0 reposts · 0 likes · 24 views
lightcap.eth 🏴 @lightcap ·
And it's still more expensive than the token cost to build with it. That's saying something. lol @ $0.005 per post reads without some sort of reasonable filtering.
lightcap.eth 🏴 tweet media
0 replies · 0 reposts · 1 like · 33 views