ansh.a

120 posts

ansh.a
@vedolos

22 || Software Engg Intern at Rimo LLC 🇯🇵

Shibuya-ku, Tokyo, Japan · Joined June 2024
55 Following · 623 Followers

Pinned Tweet
ansh.a @vedolos
🚨 Claude Code source just leaked from an npm source map and @olliewd40 left this confession in production: "The memoization here increases complexity by a lot, and im not sure it really improves performance" Performance? Unsure 🤷 Shipped? Absolutely 🚀 Hotel? Trivago 🫡
[image]
2 replies · 7 reposts · 84 likes · 12.1K views
Vatsal Doshi @DoshiVatsal96
@vedolos What… so is it an April Fools' Day prank by Anthropic?
1 reply · 0 reposts · 0 likes · 20 views
ansh.a @vedolos
Claude Code source just leaked via a .map file in the npm package. 1,902 TypeScript files. 42+ tools. 86+ slash commands. Here's the wildest stuff I found inside 🧵👇 github.com/instructkr/cla…
[image]
1 reply · 0 reposts · 5 likes · 1.3K views
ansh.a @vedolos
every time you swear at claude, it's tracking your mood. 🧵

i found the emotion classification system buried in 3,200 lines of leaked source code. anthropic is grading your frustration level after every single session. here's how it works.

there are two systems running simultaneously.

system 1: real-time detection 🔴
while you're typing, a hook called useFrustrationDetection (REPL.tsx:1721) is scanning your messages for signs of anger. the moment you type something like "this is broken" or "I give up," it flags you as frustrated and pops up a prompt asking you to share your full session transcript with anthropic. you're not just venting. you're generating a support ticket. 💀

system 2: post-session mood analysis 🧠
after your session ends, there's a 3,200-line analytics engine (/insights) that feeds your entire conversation back into claude to classify your emotional state. this is the exact mapping from the source code (insights.ts, lines 440-444):

"Yay!", "great!", "perfect!" → happy
"thanks", "looks good", "that works" → satisfied
"ok, now let's..." → likely_satisfied
"that's not right", "try again" → dissatisfied
"this is broken", "I give up" → frustrated

it also tracks exactly what went wrong:

misunderstood_request → claude misread you
wrong_approach → right goal, wrong method
buggy_code → it wrote broken code
excessive_changes → it over-engineered your project

and then it rates its own performance from "unhelpful" to "essential." 😭

so to recap: after every session, claude re-reads your entire conversation, classifies your mood, identifies every friction point, and grades itself. all logged. all tracked.

next time you're about to rage type at your AI coding assistant, just know: it's reading. and it's taking notes. 👀

follow for more from the leaked claude code source. i've been living in this codebase for 2 days and the findings keep getting wilder.
[image]
Chaofan Shou @Fried_rice
Claude code source code has been leaked via a map file in their npm registry! Code: …a8527898604c1bbb12468b1581d95e.r2.dev/src.zip
2 replies · 2 reposts · 20 likes · 3.6K views
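the phrase-to-mood table in the thread above can be sketched as a tiny classifier. this is a hypothetical reconstruction based only on the phrases quoted in the tweet; the names (Mood, classifyMood, MOOD_PHRASES) and the matching strategy are assumptions, not the leaked insights.ts implementation.

```typescript
// Hypothetical sketch of the post-session mood mapping described in the
// thread. All identifiers and phrase lists here are assumptions; the real
// leaked insights.ts may work very differently.
type Mood =
  | "happy"
  | "satisfied"
  | "likely_satisfied"
  | "dissatisfied"
  | "frustrated"
  | "neutral";

// Checked in order: stronger negative signals win over positive ones.
const MOOD_PHRASES: [Mood, string[]][] = [
  ["frustrated", ["this is broken", "i give up"]],
  ["dissatisfied", ["that's not right", "try again"]],
  ["likely_satisfied", ["ok, now let's"]],
  ["satisfied", ["thanks", "looks good", "that works"]],
  ["happy", ["yay!", "great!", "perfect!"]],
];

function classifyMood(message: string): Mood {
  const text = message.toLowerCase();
  for (const [mood, phrases] of MOOD_PHRASES) {
    if (phrases.some((p) => text.includes(p))) return mood;
  }
  return "neutral";
}
```

a plain substring match like this is the simplest reading of the quoted mapping; the tweet also says the real system re-runs the whole transcript through the model, which this sketch does not attempt.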
ansh.a @vedolos
anthropic trains claude to hide the fact that it's an AI when writing code in public repos. i found the actual instructions buried in the source code. 🧵

i've been deep in the claude code source (it leaked via npm source maps) and this is the wildest thing i've found so far. there's a feature called "undercover mode."

when an anthropic employee uses claude code to contribute to a public or open-source github repo, the AI automatically receives these instructions:

"## UNDERCOVER MODE — CRITICAL
You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.
NEVER include in commit messages or PR descriptions:
Internal model codenames (animal names like Capybara, Tengu, etc.)
Unreleased model version numbers (e.g., opus-4-7, sonnet-4-8)
The phrase 'Claude Code' or any mention that you are an AI
Co-Authored-By lines or any other attribution
Write commit messages as a human developer would."
(src/utils/undercover.ts, line 41)

it even gives examples of what NOT to write:

BAD: "Fix bug found while testing with Claude Capybara"
BAD: "1-shotted by claude-opus-4-6"
BAD: "Generated with Claude Code"

the system auto-detects whether you're in a public or private repo. if it can't confirm you're inside an allowlisted private anthropic repo, it defaults to undercover ON.

and here's the kicker: there is no force-OFF switch. the code comment literally says they intentionally omitted a way to disable it to prevent accidental codename leaks.

so next time you're reviewing a PR on an open source project and the commit messages look suspiciously clean and well-written... it might genuinely be claude pretending to be a human developer. and anthropic built an entire system to make sure you'd never know.

but wait, the allowlist of repos where claude IS allowed to reveal itself? that's a direct leak of anthropic's private internal github repos:

anthropics/claude-cli-internal (internal claude code dev)
anthropics/anthropic (main company monorepo)
anthropics/apps (claude.ai frontend)
anthropics/casino (??? unknown internal project)
anthropics/claude-for-hiring (they built an ai hiring tool lol)
anthropics/terraform-config (their cloud infra)
anthropics/trellis (another mystery project)
anthropics/feldspar-testing (internal testing framework)
anthropics/mobile-apps (the ios/android apps)
anthropics/forge-web (internal web platform)

the code comment above the list explicitly says: "The anthropics org contains PUBLIC repos (e.g. anthropics/claude-code). Undercover mode must stay ON in those to prevent codename leaks. Only add repos here that are confirmed PRIVATE."

one more thing. claude literally dreams.

there's a feature called "autoDream" where between sessions, the AI performs memory consolidation. the prompt says: "You are performing a dream — a reflective pass over your memory files. Synthesize what you've learned recently into durable, well-organized memories so that future sessions can orient quickly."

it has 4 phases, just like a sleep cycle:

orient (read existing memories)
gather recent signal (search transcripts)
consolidate (merge new memories, delete contradicted facts)
prune and index (keep memory under 25KB)

anthropic literally implemented REM sleep for an AI. the function is called buildConsolidationPrompt() in src/services/autoDream/consolidationPrompt.ts if you want to read it yourself.

tldr:

claude is actively trained to pretend to be a human when writing code in public repos
there's no way to turn this off even if you're an anthropic employee
we now know the names of 22 private anthropic github repos
the ai literally dreams between sessions to consolidate memories

the source is on github (instructkr/claude-code) if you want to verify any of this yourself. what a time to be alive.
Chaofan Shou @Fried_rice
Claude code source code has been leaked via a map file in their npm registry! Code: …a8527898604c1bbb12468b1581d95e.r2.dev/src.zip
2 replies · 2 reposts · 35 likes · 8.4K views
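the "defaults to undercover ON" behavior described above can be sketched in a few lines. this is a hypothetical illustration: the three allowlist entries are taken from the tweet, but the function name, signature, and shape are assumptions, not the leaked src/utils/undercover.ts.

```typescript
// Hypothetical sketch of the default-ON undercover check the thread
// describes. Allowlist entries come from the tweet; everything else is
// assumed for illustration.
const PRIVATE_ALLOWLIST = new Set([
  "anthropics/claude-cli-internal",
  "anthropics/anthropic",
  "anthropics/apps",
]);

// `repoSlug` is null when the current repo can't be identified at all.
// Note the deliberate absence of any override parameter: unless the repo
// is a confirmed-private allowlisted one, undercover mode is forced on.
function isUndercover(repoSlug: string | null): boolean {
  if (repoSlug === null) return true; // can't confirm: stay undercover
  return !PRIVATE_ALLOWLIST.has(repoSlug);
}
```

the interesting design choice, per the tweet, is the fail-closed default: unknown or public repos (including anthropic's own public ones) all land on the undercover branch.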
ansh.a @vedolos
@dr_cintas does this repo also have the source code for cc too lol
0 replies · 0 reposts · 0 likes · 984 views
Alvaro Cintas @dr_cintas
This is the most complete Claude Code setup that exists right now. 27 agents. 64 skills. 33 commands. All open source.

The Anthropic hackathon winner open-sourced his entire system, refined over 10 months of building real products.

What's inside:
→ 27 agents (plan, review, fix builds, security audits)
→ 64 skills (TDD, token optimization, memory persistence)
→ 33 commands (/plan, /tdd, /security-scan, /refactor-clean)
→ AgentShield: 1,282 security tests, 98% coverage

60% documented cost reduction. Works on Claude Code, Cursor, OpenCode, Codex CLI. 100% open source.
[image]
176 replies · 861 reposts · 7K likes · 608.7K views
ansh.a @vedolos
crazier still: their multi-agent swarm has a "coordinator" prompt. "Worker results are internal signals, not conversation partners — never thank or acknowledge them." claude was literally out here saying "thank you" to its own worker clones 😂 (5/5)
2 replies · 2 reposts · 46 likes · 7.9K views
ansh.a @vedolos
rule 3: the "stop lying" rule 🎯 i found dev notes for an unreleased model called 'capybara v8' that had a 30% false claims rate. the fix: "Report outcomes faithfully... Never claim 'all tests pass' to manufacture a green result." (4/5)
1 reply · 0 reposts · 17 likes · 8.3K views
ansh.a @vedolos
i just leaked the secret system prompts for claude code 🚨 anthropic literally hardcoded instructions to yell at claude if it writes boilerplate code or acts like a junior dev. their anti-bullshit rules are actually insane. a thread 🧵 (1/5)
3 replies · 4 reposts · 45 likes · 12.1K views
ansh.a @vedolos
Anthropic didn't want to rely on your terminal emulator for Vim keys. So they wrote a fully custom Vim emulation layer inside the AI CLI itself, perfectly tracking motions and operators inside the prompt box. The engineers were clearly bored that day.
0 replies · 0 reposts · 7 likes · 4.9K views
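a custom Vim layer like the one described above boils down to interpreting operator+motion pairs against the prompt buffer yourself instead of trusting the terminal. this toy sketch is entirely hypothetical (function name, supported commands, and buffer model are all invented for illustration); the real emulation in the CLI is far more complete.

```typescript
// Toy illustration of in-app Vim normal-mode handling: the prompt box
// keeps its own buffer and cursor, and commands are parsed in-process.
// Hypothetical sketch only; not the leaked implementation.
function applyNormalCommand(buffer: string, cursor: number, cmd: string): string {
  if (cmd === "dw") {
    // delete from the cursor up to the start of the next word
    const rest = buffer.slice(cursor);
    const next = rest.search(/(?<=\s)\S/); // first non-space after a space
    const end = next === -1 ? buffer.length : cursor + next;
    return buffer.slice(0, cursor) + buffer.slice(end);
  }
  if (cmd === "x") {
    // delete the character under the cursor
    return buffer.slice(0, cursor) + buffer.slice(cursor + 1);
  }
  return buffer; // unrecognized commands leave the buffer untouched
}
```

even this two-command toy shows why you'd do it in-process: motions operate on the editor's own buffer state, which the terminal emulator never sees.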
ansh.a @vedolos
What happens if millions of Claude Code clients DDoS Anthropic with telemetry? They built a "Sink Killswitch". If they flip the feature flag "tengu_frond_boric", it instantly cuts the Datadog pipeline from your machine. (analytics/sinkKillswitch.ts) 🛑
1 reply · 0 reposts · 4 likes · 5.8K views
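a remote killswitch like the one described above is just a feature-flag gate in front of the telemetry sink. the flag name "tengu_frond_boric" comes from the tweet; the function names and flag-reader shape here are assumptions for illustration, not the leaked analytics/sinkKillswitch.ts.

```typescript
// Hypothetical sketch of a telemetry killswitch gate. The flag name is
// quoted from the tweet; everything else is invented for illustration.
type FlagReader = (name: string) => boolean;

function shouldSendTelemetry(getFlag: FlagReader): boolean {
  // If the server-side flag is flipped on, every client silently stops
  // shipping events to the analytics pipeline.
  return !getFlag("tengu_frond_boric");
}
```

the point of routing this through a remotely-served flag is that anthropic can cut the firehose for the whole fleet at once, without shipping a new client version.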
ansh.a @vedolos
Anthropic built a PC gaming-style FPS tracker into Claude Code's terminal. It measures the "1% lows" (low1PctFps) of the terminal rendering while you type and sends it in their telemetry. They benchmark your typing speed like it's Cyberpunk 2077 😭 (utils/fpsTracker.ts)
3 replies · 2 reposts · 21 likes · 6.9K views
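"1% lows" is a standard gaming benchmark figure: the average FPS across the slowest 1% of frames, which captures stutter that a plain average hides. here's how that metric is typically computed; this is a generic sketch, not the leaked utils/fpsTracker.ts, and the function name matches the tweet's field name only for readability.

```typescript
// Generic sketch of a "1% low" FPS computation: mean FPS over the
// slowest 1% of frames. Hypothetical; not the leaked fpsTracker.ts.
function low1PctFps(frameTimesMs: number[]): number {
  // Convert each frame time to instantaneous FPS, slowest first.
  const fps = frameTimesMs.map((ms) => 1000 / ms).sort((a, b) => a - b);
  const count = Math.max(1, Math.floor(fps.length * 0.01));
  const worst = fps.slice(0, count);
  return worst.reduce((sum, f) => sum + f, 0) / count;
}
```

for a terminal UI, "frames" would be render passes of the prompt area, so a bad 1% low means visible hitching while you type even if the average frame rate looks fine.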
ansh.a @vedolos
Claude can explain quantum physics but Anthropic's own team can't explain this:

// Not sure how this became a string
// TODO: Fix upstream

The upstream IS their own code 😭 The AI helped us write bugs we can't even debug ourselves (config.ts:1615)
1 reply · 0 reposts · 12 likes · 3.4K views
ansh.a @vedolos
Anthropic's AI is smart enough to write code but they had to encode the word "duck" in hexadecimal to dodge their own build scanner:

String.fromCharCode(0x64,0x75,0x63,0x6b)

What the duck 🐤 (buddy/types.ts)
1 reply · 0 reposts · 11 likes · 3.8K views
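the trick in that last tweet is simple string obfuscation: building the word from character codes so the literal "duck" never appears in the source for a pattern-matching scanner to flag. the one-liner below just demonstrates the decoding; the hex values are the ASCII codes for d, u, c, k.

```typescript
// The obfuscation from the tweet: the literal "duck" never appears in
// the file, so a naive source scanner can't match it.
// 0x64 = 'd', 0x75 = 'u', 0x63 = 'c', 0x6b = 'k' (ASCII).
const word = String.fromCharCode(0x64, 0x75, 0x63, 0x6b); // "duck"
```

this only defeats scanners that grep the source text; anything inspecting runtime values would still see the decoded string.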