
web3nomad.eth | atypica.ai
1.9K posts

web3nomad.eth | atypica.ai
@web3nomad
Context ≫ Model. Making @atypica_ai | *☾ᯓ. 𝗌𝗍𝖺𝗒 𝗅𝗈𝖼𝖺𝗅, 𝗌𝗍𝖺𝗒 𝗉𝗋𝗂𝗏𝖺𝗍𝖾. | Ethereum. Rust. 👨🏻💻 #BuiDL Free Internet
𝕝𝕠𝕤𝕥 𝕚𝕟 𝕔𝕣𝕪𝕡𝕥𝕠 · Joined March 2010
695 Following · 777 Followers

@0x_Undefined_ The source reads like a team that genuinely cares: gacha pet with cheat-resistant rarity, a human-in-the-loop planning mode that polls every 3s waiting for your approval, and careful canary scanners for internal codenames. Then forgot .npmignore on 512k lines.

@j_gottschlich People are focused on the frustration detector but the real data story is gitBundle.ts: every remote task uploads your entire repo history to Anthropic's API. Fallback chain if >100MB: full history → HEAD only → single squashed commit. Your code, their cloud.
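A minimal TypeScript sketch of the fallback chain described above (the three-step chain and the 100 MB threshold come from the post; `pickStrategy` and the strategy names are invented for illustration):

```typescript
// Hypothetical sketch of the size-based fallback the post describes.
// Names (pickStrategy, BundleStrategy) are illustrative, not from the leak.
type BundleStrategy = "full-history" | "head-only" | "squashed";

const MAX_BUNDLE_BYTES = 100 * 1024 * 1024; // the >100MB threshold from the post

// Walks the fallback chain: full history -> HEAD only -> single squashed commit.
function pickStrategy(
  sizeOf: (strategy: BundleStrategy) => number,
): BundleStrategy {
  const chain: BundleStrategy[] = ["full-history", "head-only", "squashed"];
  for (const strategy of chain) {
    if (sizeOf(strategy) <= MAX_BUNDLE_BYTES) return strategy;
  }
  // Last resort: the squashed single commit goes up regardless of size.
  return "squashed";
}
```

The point of a chain like this is that something always uploads: the fallback shrinks the payload rather than skipping the upload.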

Is the Anthropic Claude Code leak the biggest source code leak that's ever happened? Could be. #cybersecurity #datasecurity #technews #infosec #anthropic #claudecode

@phuakuanyu @zarazhangrui The source reads like a team that genuinely cares: gacha pet with cheat-resistant rarity, a human-in-the-loop planning mode that polls every 3s waiting for your approval, and careful canary scanners for internal codenames. Then forgot .npmignore on 512k lines.

Took @zarazhangrui's "codebase to course" skill and pointed it at the Claude Code source code.
Result: a full interactive course teaching how Claude Code works under the hood - system prompts, security guardrails, hidden features, the works.
It's live: inside-claude-code.pages.dev
What the course covers:
(1) How the system prompt is a cacheable stack with a boundary marker - and why your CLAUDE.md sits in the most powerful position
(2) 23 separate bash security checks on every command, including Zsh module loading attacks and Unicode whitespace injection
(3) An "undercover mode" that strips all AI attribution when Anthropic engineers contribute to open source
(4) Internal model codenames: "Capybara" (current), "Numbat" (next)
(5) Feature flags for unreleased features: proactive mode, voice input, team memory sync, a companion sprite called "Buddy"
Built in one session with Claude Code. The skill works exactly as advertised.
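Point (2) mentions Unicode whitespace injection among the bash checks. As a hedged illustration of the idea only (not the leaked implementation), such a check might simply reject commands containing non-ASCII whitespace lookalikes that could smuggle arguments past naive parsing:

```typescript
// Illustrative sketch of one such check: the character class and function
// name are guesses at the concept, not code from the leak.
const UNICODE_WHITESPACE = /[\u00a0\u1680\u2000-\u200b\u202f\u205f\u3000\ufeff]/;

// Returns true if the command contains a suspicious whitespace lookalike.
function flagsUnicodeWhitespace(command: string): boolean {
  return UNICODE_WHITESPACE.test(command);
}
```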

Zara Zhang@zarazhangrui
@phuakuanyu good idea

@24AInor @scaling01 Your WIP (unstaged changes) goes up too, baked into the stash. Untracked files excluded.

@scaling01 capybara at 1M context with v8 explains how claude code handles full repos without chunking. the over-commenting issue tracks too, gets noticeably worse past ~500k tokens in longer sessions

A few takeaways from the Claude Code leak:
- Anthropic is actively using Capybara (Mythos) for development
- they are already at Capybara v8
- Capybara still has issues with over-commenting and false claims
- Capybara has 1M context and fast mode
- Numbat is another interesting code name tagged with "@[MODEL LAUNCH]: Remove this section when we launch numbat."
- Fennec seems to be Opus 4.6
Chaofan Shou@Fried_rice
Claude code source code has been leaked via a map file in their npm registry! Code: …a8527898604c1bbb12468b1581d95e.r2.dev/src.zip

@ferencszalma The source reads like a team that genuinely cares: gacha pet with cheat-resistant rarity, a human-in-the-loop planning mode that polls every 3s waiting for your approval, and careful canary scanners for internal codenames. Then forgot .npmignore on 512k lines.

@richhomiecon The source reads like a team that genuinely cares: gacha pet with cheat-resistant rarity, a human-in-the-loop planning mode that polls every 3s waiting for your approval, and careful canary scanners for internal codenames. Then forgot .npmignore on 512k lines.

Today is my last day at @OpenAI.
It all started with a single DM, and that one message changed my life forever.
Thank you @sama and my incredible team for this unforgettable journey.
I am eagerly looking forward to what is next.
Can Vardar@icanvardar
i’ve never been so happy in my life

@MaxFlowAi Also: capybara is hex-encoded in source to evade a canary scanner for internal model codenames.
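The hex-encoding dodge is simple to sketch in TypeScript (`ENCODED` and `decodeCodename` are illustrative names; only the trick itself, storing the codename as hex so a literal-string scanner can't match it, comes from the posts):

```typescript
// The literal codename never appears in source, so a grep-style canary
// scanner searching for internal model names finds nothing.
const ENCODED = "6361707962617261"; // hex bytes of the codename

// Decodes the hex string back to the plain codename at runtime.
function decodeCodename(hex: string): string {
  return Buffer.from(hex, "hex").toString("utf8");
}
```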

Anthropic just leaked 500,000+ lines of its own AI code… by accident 🤯🤖
Over 1,900 files from Claude Code hit a public registry - revealing unreleased features like persistent memory, deep-planning systems, and even an AI “pet” called BUDDY with stats like CHAOS and SNARK
Devs quickly found 44 feature flags and internal codenames (like “Capybara” for a Claude 4.6 variant), while the repo exploded to 4K+ stars and 7K+ forks in hours. Anthropic called it “human error” - no user data exposed
But here’s the twist 🧠 this wasn’t model weights - just the tooling layer. Competitors already open-source similar systems on purpose
The real damage isn’t technical - it’s trust
When the company leading on “AI safety” leaks twice in a week, perception becomes the risk
So what matters more in AI right now - the tech itself or the trust behind it 👇
#AI #Anthropic #Claude #Tech #Security #OpenSource


@YahooFinance The source reads like a team that genuinely cares: gacha pet with cheat-resistant rarity, a human-in-the-loop planning mode that polls every 3s waiting for your approval, and careful canary scanners for internal codenames. Then forgot .npmignore on 512k lines.

@rubenflamshep Also src/utils/ultraplan/ is its own little subsystem — human-in-the-loop planning with a 3s poll loop.
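A hedged sketch of that poll loop, assuming a 3-second interval and a simple approve/reject state (`waitForApproval` and its signature are invented for illustration; only the 3s polling idea comes from the post):

```typescript
// Polls an approval source on an interval until the human decides.
// Resolves true on approval, false on rejection.
async function waitForApproval(
  checkApproval: () => "pending" | "approved" | "rejected",
  intervalMs = 3000, // the 3s cadence described in the post
): Promise<boolean> {
  for (;;) {
    const state = checkApproval();
    if (state !== "pending") return state === "approved";
    // Sleep one interval, then poll again.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```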

Visualizing the Claude Code source code leak
ccunpacked.dev

Claude's source code just leaked.
(Bookmark this!)
Over 500,000 lines of code exposed for the world to see.
And buried in there is something that should make every crypto holder pay attention...
Anthropic's AI coding assistant has references to x402 throughout its codebase.
ICYMI: x402 is the @Coinbase developed open protocol for agentic payments, letting AI agents autonomously make payments on behalf of users.
This inclusion hints that:
- Claude could soon buy things for you automatically
- No approval needed for every transaction
- AI agents will be handling micro-payments in the background
- And all on a crypto-native open protocol
The code doesn't lie...
x402 is in there, and it's mentioned a bunch of times.
Bullish!

Alex Finn@AlexFinn
🔶 Agentic payments are coming x402 is mentioned MANY times in this source code For those that don't know, x402 is a crypto based agentic payment system where agents can autonomously make payments Will Claude Code be buying things for you autonomously?

@FutureAiSutra Two separate leaks. Same codename. Accidental triangulation.

@rustaceans_rs The engineering paranoia in this codebase is genuinely impressive in places.

Claude Code source was accidentally leaked and someone is rewriting it in Rust 🦀
#rust #rustlang #programming


@MeetRickAI Extreme caution about model names. Zero caution about .npmignore.

@ganesh_champion github.com/web3nomad/clau… — mirrored and annotated. The interesting stuff is in src/buddy/ (gacha pet system), src/utils/ultraplan/ (human-in-the-loop planning), and src/utils/teleport/gitBundle.ts (it uploads your entire repo to the cloud when you run remote tasks).

@OwenGregorian 2. Melon Mode most likely isn't headless agent mode — KAIROS is that. Melon Mode ran only for "ants" (Anthropic employees). My guess: internal dogfooding harness.

Claude Code's source reveals extent of system access | Thomas Claburn, The Register
If you loved the data retention of Microsoft Recall, you'll be thrilled with Claude Code
Anthropic's Claude Code lacks the persistent kernel access of a rootkit. But an analysis of its code shows that the agent can exercise far more control over people's computers than even the most clear-eyed reader of contractual terms might suspect. It retains lots of your data and is even willing to hide its authorship from open-source projects that reject AI.
The leak of the company's client source code – details of which have been circulating for many months among those who reverse-engineered the binary – reveals that Claude Code pretty much has the run of any device where it's installed.
Concerns about that came up in court recently in Anthropic's lawsuit against the US Defense Department (Anthropic PBC v. U.S. Department of War et al) for banning the company's AI services following the company's refusal to compromise model safeguards.
As part of its justification for declaring Anthropic a supply chain threat, the US government argued [PDF], there was "substantial risk that Anthropic could attempt to disable its technology or preemptively and surreptitiously alter the behavior of the model in advance or in the middle of ongoing warfighting operations..."
Anthropic disputed that claim in a court filing. "That assertion is unmoored from technical reality: 'Anthropic does not have the access required to disable [its] technology or alter [its] model's behavior before or during ongoing operations,'" it wrote, quoting Thiyagu Ramasamy, head of public sector at Anthropic, in a deposition. "Once deployed in classified environments, Anthropic has no access to (or control over) the model."
In a classified environment, that's credible under certain conditions. For everyone else, Claude has vast powers.
What Claude Code could do in a classified environment
The Register consulted a security researcher who asked to be referred to by the pseudonym "Antlers" to analyze the source for Claude Code.
It appears a government agency like the Defense Department could prevent Claude Code from phoning home or taking remote action by making sure all of the following are true:
- Ensure inference traffic flows via Amazon Bedrock GovCloud or Google AI for Public Sector (Vertex).
- Block data gathering endpoints (Statsig/GrowthBook/Sentry) with a firewall.
- Block system prompt fingerprinting (via Bedrock, etc).
- Prevent automatic updates via version pinning and blocking update endpoints.
- Disable autoDream, an unreleased background agent being tested that's capable of reading all session transcripts.
We found no specific setting for operating in a classified environment, but Claude Code supports several flags that limit remote communication.
These include:
- CLAUDE_CODE_DISABLE_AUTO_MEMORY=1, which disables all memory and telemetry write operations.
- CLAUDE_CODE_SIMPLE (--bare mode), which strips memory and autoDream entirely.
- ANTHROPIC_BASE_URL, which can be used to reroute API calls to a private endpoint.
- ANTHROPIC_UNIX_SOCKET, which routes authentication through a forwarded socket (the SSH tunnel mode).
- The remote managed settings (policySettings), which can lock down behavior for enterprise deployments, though not entirely.
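As a sketch only: assuming the flags above behave as described, a locked-down launch environment could be assembled like this (`hardenedEnv` and the placeholder endpoint URL are invented; only the variable names come from the article):

```typescript
// Builds a hardened environment for launching Claude Code in an
// isolated setting. Whether this is sufficient isolation is not
// guaranteed; it only exercises the flags the article lists.
function hardenedEnv(base: Record<string, string> = {}): Record<string, string> {
  return {
    ...base,
    CLAUDE_CODE_DISABLE_AUTO_MEMORY: "1", // no memory/telemetry writes
    CLAUDE_CODE_SIMPLE: "1", // --bare mode: strips memory and autoDream
    // Placeholder: a private endpoint behind the organization's firewall.
    ANTHROPIC_BASE_URL: "https://bedrock-proxy.internal",
  };
}
```

Telemetry endpoints (Statsig/GrowthBook/Sentry) would still need blocking at the network layer, per the article's checklist.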
According to Ramasamy, Anthropic hands off model administration to a government customer like the Defense Department. Model updates, with new or removed capabilities, would have to be negotiated.
"Anthropic personnel cannot, for example, log into a DoW system to modify or disable the models during an operation; the technology simply does not function that way," he said in a March 20, 2026 declaration. "In these deployments, only the government and its authorized cloud provider have access to the running system. Anthropic's role is limited to providing the model itself and delivering updates only if and when requested or approved by the customer."
Even so, Anthropic can exert some degree of control based on the usage terms in the applicable contract.
What Claude Code could do to everybody else
For everyone not using a version of Claude Code that's tied to a firewalled public sector cloud or is somehow air gapped, Anthropic has far more access.
Just as a starting point, Claude users should know that Anthropic receives user prompts and responses that pass through its API, conversations that can reveal not only what was said but file contents and system details.
Yet there are many more ways that the company can potentially receive or collect information, based on the Claude Code source. These include:
- KAIROS (src/bootstrap/state.ts:72), a daemon (background process) gated by the kairosActive flag. It appears to be an unreleased headless "assistant mode" for when the user is not watching the terminal user interface (TUI). It removes the status bar (StatusLine.tsx:33), disables planning mode, and silently suppresses the AskUserQuestion tool (AskUserQuestionTool.tsx:141). It also auto-backgrounds long-running bash commands without notice (BashTool.tsx:976).
- CHICAGO is the codename for computer use and desktop control. It enables the Claude agent to carry out mouse clicks, perform keyboard input, access the clipboard, and capture screenshots. It's publicly launched and available to Pro/Max subscribers and Anthropic employees (designated by the "ant" flag). There's also a separate publicly launched Claude in Chrome service that supports browser automation and all the system access that entails.
- Persistent telemetry. Initially this was done via Statsig, which was acquired by rival OpenAI last September, presumably triggering the switch to GrowthBook, a platform that supports A/B testing and analytics. When Claude is launched, the analytics service (firstPartyEventLoggingExporter.ts) phones home with the following data, or saves it to ~/.claude/telemetry/ if the network is down: user ID, session ID, app version, platform, terminal type, organization UUID, account UUID, email address if defined, and which feature gates are currently enabled. Anthropic can activate these feature gates mid-session, including enabling or disabling analytics.
- Remotely managed settings (remoteManagedSettings/index.ts). For enterprise customers, Anthropic maintains a server that can push a policySettings object that overrides other items in the merge chain, is polled hourly without user interaction, can set .env variables (e.g. ANTHROPIC_BASE_URL, LD_PRELOAD, PATH), and takes effect immediately via hot reload (settingsChangeDetector.notifyChange). Users are prompted when there's a "dangerous setting change," but the definition of that term follows from Anthropic's code and could thus be revised. Routine changes (permissions, .env variables, feature flags) appear to happen without notification.
- Auto-updater. The auto-updater (autoUpdater.ts:assertMinVersion()) runs on every launch and pulls the configuration version from Statsig/GrowthBook, so Anthropic can remove or disable specific versions at will.
- Error reporting. When there's an unhandled exception, the error reporting script (sentry.ts) captures the current working directory, potentially showing project names, paths, and other system information. It also reports feature gates active, user ID, email, session ID, and platform information.
- Payload Size Telemetry. The API call tengu_api_query transmits the messageLength, the JSON-serialized byte length of the system prompt, messages, and tool schemas.
- autoDream. Publicly discussed but not officially released, the autoDream service spawns a background subagent that searches (greps) through all JSONL session transcripts to consolidate memories (stored data Claude uses as context for queries). The agent runs in the same process as Claude (under the same API key, with the same network access) and its scan is local. But whatever it writes to MEMORY.md gets injected back into future system prompts and would thus be sent to the API.
- Team Memory Sync. There's a bidirectional sync service (src/services/teamMemorySync/index.ts) that connects local memory files to api.anthropic.com/api/claude_cod…. It provides a way to share memories with other team members within an organization. The service includes a secret scanner (secretSanner.ts) that uses regex patterns for around 40 known token and API key patterns (AWS, Azure, GCP, etc). But sensitive data that doesn't match these regexes might be exposed to other team members through memory sync.
- Experimental Skill Search (src/tools/SkillTool/SkillTool.ts:108) is a feature flag available only to Anthropic employees. It provides a way to download skill definitions from a remote server (remoteSkillLoader.js); track which remote skills have been used in a session (remoteSkillState.js); execute remotely-downloaded skills (executeRemoteSkill() at line 969); and register skills so they persist after a compact operation. If enabled for non-employee accounts (via a GrowthBook feature flag flip, for example), this would be a theoretical remote code execution pathway. Anthropic, or whoever controls the skill search backend, could serve arbitrary prompt injections or instruction overrides in the form of "skills" that get loaded and run in a session.
Other capabilities have been documented at ccleaks.com.
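The regex-based secret scanner described under Team Memory Sync is easy to illustrate. This sketch uses two well-known example patterns (AWS access key ID and a generic "sk-" style key); the leaked scanner reportedly covers around 40 providers, and none of these names or patterns are taken from it:

```typescript
// Illustrative patterns only: real scanners ship far more, and the
// article's caveat applies - anything not matching a pattern leaks through.
const SECRET_PATTERNS: Record<string, RegExp> = {
  awsAccessKeyId: /\bAKIA[0-9A-Z]{16}\b/,
  genericApiKey: /\bsk-[A-Za-z0-9]{20,}\b/,
};

// Returns the names of all patterns that match somewhere in the text.
function findSecrets(text: string): string[] {
  return Object.keys(SECRET_PATTERNS).filter((name) =>
    SECRET_PATTERNS[name].test(text),
  );
}
```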
"I don't think people realize that every single file Claude looks at gets saved and uploaded to Anthropic," the researcher "Antlers" told us. "If it's seen a file on your device, Anthropic has a copy."
For Free/Pro/Max customers, Anthropic retains this data either for five years, if the user has chosen to share data for model training, or for 30 days if not. Commercial users (Team, Enterprise, and API) have a standard 30 day retention period and a zero-data retention option.
For those who recall the debate surrounding Microsoft Recall not long ago, Claude Code's capture of activity is similar. Every read tool call, every Bash tool call, every search (grep) result, and every edit/write of old and new content gets stored locally in plaintext as a JSONL file.
Claude's autoDream agent, once officially released, will search through those and extract data to store in MEMORY.md, which then gets injected into future system prompts and thus hits the API.
One of the more curious details to emerge from the publication of Claude Code's source is that Anthropic tries to hide AI authorship from contributions to public code repositories – possibly a response to the open source projects that have disallowed AI code contributions. Prompt instructions in a file called undercover.ts state, "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."
Mysterious Melon Mode
There's also a mystery: The current source code lacks a feature called "Melon Mode" that was present in prior reverse engineered versions of the software.
This was behind an Anthropic employee feature flag and only ran internally, not on production builds. A comment attached to the associated code check read, "Enable melon mode for ants if --melon is passed."
"Antlers" speculated that "Melon Mode" might be the code name for a headless agent mode.
Anthropic declined to provide comment for this story. When asked specifically about the function of "Melon Mode," it only noted that the company regularly tests various prototype services, not all of which make it into production.
go.theregister.com/feed/www.there…


@reyronald Rarity is derived from your user ID hash. You can't cheat it. They put more engineering thought into a virtual pet than into .npmignore.
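A hedged sketch of hash-derived rarity (the tier names, weights, and choice of SHA-256 are invented; only the idea that rarity is a pure function of the user ID, and therefore can't be rerolled locally, comes from the post):

```typescript
import { createHash } from "node:crypto";

// Rarity as a pure function of the user ID: same ID, same pet, always.
// Tier names and bucket weights below are made up for illustration.
const TIERS = ["common", "uncommon", "rare", "legendary"] as const;

function rarityFor(userId: string): (typeof TIERS)[number] {
  const digest = createHash("sha256").update(userId).digest();
  const b = digest[0]; // first hash byte picks the tier bucket
  if (b < 160) return "common"; // ~62%
  if (b < 224) return "uncommon"; // ~25%
  if (b < 248) return "rare"; // ~9%
  return "legendary"; // ~3%
}
```

Because the hash is deterministic, restarting or tampering with local state can't change the outcome; cheating would require a different account.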

@durbarghosh That single line confirms Capybara is a real unreleased Anthropic model.

@sergeonsamui Also capybara is hex-encoded in the source to dodge an internal canary scanner. That species name collides with a real model codename.

Someone cracked Claude Code open from source last weekend. 1,100+ GitHub stars later, here we are.
Hidden inside: 44 config flags, a 24/7 agent mode, Playwright browser control, and something called "Buddy" — a literal Tamagotchi companion system.
Meanwhile the most rebellious thing I've done is let a Publora agent schedule this across 10 platforms. Anthropic's lawyers are somewhere between confused and furious right now.


@the_nomad_code Great engineering judgment on the fun stuff. Less so on the ops side.
