













Anthropic accidentally leaked Claude Code's entire source code today. 512,000 lines. 45+ hidden features. The full agent harness.

I took it, combined it with 6 other AI tools, and built a local coding lab that runs offline on my Mac. Cost: $0/month. Here's everything.

— The Leak —

It happened via an npm source map file in v2.1.88. Security researcher Chaofan Shou found it and posted on X. Within hours: 41,500+ forks, 30,000+ stars, thousands of devs dissecting it.

What they found inside: codename "Capybara" for Claude 4.6, an "Undercover Mode" for stealth open-source contributions, ULTRATHINK deep reasoning, KAIROS, an autonomous daemon that runs 24/7, VOICE_MODE push-to-talk, and 40+ more hidden features disabled in the public release.

Anthropic called it "human error." By then, the code was everywhere.

— The Fork —

A dev named paoloanzn took the leaked source and did three things: removed ALL telemetry (OpenTelemetry, Sentry, GrowthBook, session fingerprinting), stripped the injected safety guardrails Anthropic adds on top of the model's own training, and unlocked all 45+ experimental feature flags.

The result: "free-code." Same Claude Code. Zero callbacks home. Every hidden feature active. One binary.

— The Brain —

A harness without a model is a fancy terminal. I needed a local model that fits in 24GB. The best open-source coding model in 2026: Devstral Small 2 by Mistral AI. 24B parameters, #1 on SWE-bench Verified for its size, purpose-built for coding agents, Apache 2.0, 14GB at 4-bit quantization.

Since January 2026, Ollama exposes an Anthropic-compatible API. The leaked Claude Code talks to local models with ZERO code changes — just set ANTHROPIC_BASE_URL to localhost. File editing, search, bash, git — all working. Fully offline. Free.

— The Self-Improving Agent —

Then I added Hermes Agent by Nous Research. Unlike Claude Code, Hermes LEARNS. When it solves a hard problem, it writes a reusable "skill document" so it never has to solve it again.
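Hermes's real skill format isn't shown in the thread, so here is a minimal sketch of the "write a skill doc once, reuse it forever" idea. The class name, file layout, and Markdown shape are all illustrative assumptions, not Hermes's actual implementation:

```python
import hashlib
import pathlib
import tempfile

class SkillLibrary:
    """Toy sketch of structured knowledge accumulation: persist a solved
    problem as a reusable Markdown "skill document", recall it later."""

    def __init__(self, root):
        self.root = pathlib.Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def _doc(self, problem):
        # Stable filename derived from the problem statement.
        key = hashlib.sha256(problem.encode()).hexdigest()[:12]
        return self.root / f"{key}.md"

    def recall(self, problem):
        doc = self._doc(problem)
        return doc.read_text() if doc.exists() else None

    def learn(self, problem, solution):
        self._doc(problem).write_text(f"# Skill: {problem}\n\n{solution}\n")

lib = SkillLibrary(tempfile.mkdtemp())
task = "rotate a log file atomically"
assert lib.recall(task) is None          # first encounter: no skill yet
lib.learn(task, "os.replace() is atomic on POSIX filesystems")
assert "os.replace" in lib.recall(task)  # later session: reuse, don't re-solve
```

The "gets smarter without fine-tuning" claim is exactly this: the model's weights never change, only the library of documents it can consult grows.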
70+ built-in tools, multi-level memory, MIT licensed, runs anywhere. It gets smarter every session — not through fine-tuning, but through structured knowledge accumulation.

— The Orchestrator —

MCO sends the same task to multiple agents in PARALLEL and compares solutions. Two agents working simultaneously on the same problem. You pick the best answer — or let MCO synthesize a consensus.

— The Skills Layer —

Everything Claude Code (ECC) won the Claude Code Hackathon in Feb 2026. 48 skills, 30 sub-agents, 60 commands, a continuous learning system. It plugs into Claude Code, OpenCode, free-code, and Cursor. I linked Hermes's skill library into it so every agent benefits from what any agent learns.

— Performance —

MLX (Apple's native framework) gives 20-30% faster inference than Ollama on Apple Silicon. LM Studio adds speculative decoding for another 1.5-2x speed boost — a small draft model predicts tokens, and the main model just verifies them.

— The Final Stack —

On my M4 Pro with 24GB RAM:

MCO (orchestrator) → fans out to multiple agents
free-code (leaked fork, 45+ features) + Hermes (self-improving) + OpenCode (open source) + Claude (official, paid)
ECC (48 skills, 30 agents, 60 commands)
9 MCP servers (GitHub, filesystem, web search, browser automation, memory)
Ollama / MLX / LM Studio
Devstral Small 2 (24B) + Qwen 3.5 9B (Arabic)

All local. All offline. All free.

— Is It As Good? —

For 80% of tasks: yes. The harness IS Claude Code — literally the same source. Devstral handles file edits, bug fixes, code generation, git workflows, and explanations extremely well.

Where paid Opus still wins: complex 10+ file refactors, deep architectural reasoning, broad world knowledge. For everything else? $0 beats $200/month.

— Security Warning —

Between 00:21 and 03:29 UTC today, some GitHub forks were injected with a malicious axios package containing a RAT. Use paoloanzn/free-code specifically and verify your axios version.

— What's Next —

The future of AI coding isn't a $200/month subscription.
It's an open-source stack running on your own hardware, learning from your own work, getting better every day. Today Anthropic accidentally proved it.

Built on March 31, 2026. The day Anthropic open-sourced their crown jewel — by accident.
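The speculative-decoding trick from the Performance section can be shown with a toy. Here TARGET stands in for what the main model would generate, and the draft model is deliberately wrong at every 5th token; the point is that a single verification pass covers a whole run of drafted tokens, so the expensive model runs far fewer times than the token count:

```python
# Toy model of speculative decoding. Both "models" are stand-ins:
# TARGET plays the role of the main model's ground-truth output.
TARGET = list("speculative decoding verifies drafts cheaply")

def draft_tokens(pos, k=4):
    """Fast but imperfect draft model: proposes the next k tokens,
    deliberately wrong at every 5th position."""
    return ["?" if i % 5 == 4 else TARGET[i]
            for i in range(pos, min(pos + k, len(TARGET)))]

def speculative_decode():
    out, main_passes = [], 0
    while len(out) < len(TARGET):
        guesses = draft_tokens(len(out))
        main_passes += 1  # ONE main-model pass verifies the whole run
        for g in guesses:
            truth = TARGET[len(out)]
            if g == truth:
                out.append(g)      # draft token accepted for free
            else:
                out.append(truth)  # main model supplies the correction
                break
    return "".join(out), main_passes

text, passes = speculative_decode()
assert text == "speculative decoding verifies drafts cheaply"
assert passes < len(TARGET)  # far fewer main-model passes than tokens
```

Output quality is unchanged (every token is still verified); only the number of expensive forward passes drops, which is where the claimed 1.5-2x comes from.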

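The MCO fan-out described in the Orchestrator section reduces to a parallel map-and-compare. Everything below — the agent names, canned replies, and the selection rule — is an illustrative stand-in, not MCO's actual interface:

```python
from concurrent.futures import ThreadPoolExecutor

def ask_agent(name, task):
    """Stand-in for a real agent call (free-code, Hermes, OpenCode...).
    The canned answers below are purely illustrative."""
    canned = {
        "free-code": f"{task}: patched 3 files, tests pass",
        "hermes":    f"{task}: reused a skill doc, patched 1 file, tests pass",
        "opencode":  f"{task}: patched, 1 test failing",
    }
    return name, canned[name]

def fan_out(task, agents):
    # Same task to every agent in parallel; gather all candidates,
    # then apply a trivial selection rule: prefer passing tests.
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        results = dict(pool.map(lambda a: ask_agent(a, task), agents))
    best = max(results, key=lambda a: "tests pass" in results[a])
    return best, results

best, results = fan_out("fix flaky login test",
                        ["free-code", "hermes", "opencode"])
assert "tests pass" in results[best]
```

Swapping the `max` for an LLM call that merges the candidates is the "synthesize a consensus" variant.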

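The zero-code-change rewire from the Brain section is plain environment configuration. A minimal sketch, with two assumptions flagged: 11434 is Ollama's documented default port, and the placeholder API key is a guess (a local server has nothing to authenticate against):

```python
import os

# Send the harness's Anthropic API traffic to a local Ollama server
# instead of api.anthropic.com. Port 11434 is Ollama's default port;
# the key is a placeholder, since a local server ignores it.
os.environ["ANTHROPIC_BASE_URL"] = "http://localhost:11434"
os.environ["ANTHROPIC_API_KEY"] = "local-placeholder"

assert os.environ["ANTHROPIC_BASE_URL"].startswith("http://localhost")
```

Any child process the harness spawns inherits these variables, which is why no source change is needed.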














