

Alex

@pilouanic
@clarnium_io co-founder, blockchain researcher & project growth adviser, founder of @wobblytimer & https://t.co/mTf42AZqfm



The co-founder of OpenAI just built an entire AI training engine in 200 lines of code. No dependencies. No libraries. No frameworks. Pure Python. And he says he cannot make it any shorter.

Andrej Karpathy, former Director of AI at Tesla, founding member of OpenAI, and one of the most respected AI researchers alive, published microgpt on February 12, 2026. It is 200 lines. It trains and runs a GPT model completely from scratch.

Here is what those 200 lines actually contain: a full dataset loader, a tokenizer, an autograd engine that computes gradients, a GPT-2-architecture neural network, the Adam optimizer, a complete training loop, and a complete inference loop. Everything needed to build, train, and run a large language model, in a file you could print on two pages of paper.

This is the culmination of a decade-long obsession. Karpathy previously built micrograd, makemore, and nanoGPT, each one a step toward stripping AI down to its mathematical skeleton. microgpt is the final answer. The irreducible core. He wrote: "This script is the culmination of multiple projects and a decade-long obsession to simplify LLMs to their bare essentials. I cannot simplify this any further."

Here is why this matters beyond the elegance. Every AI course in the world teaches through abstraction. You use PyTorch. You import transformers. You call functions you do not understand. You build things without knowing how they work. Karpathy's entire career has been a war against that approach. He believes the only way to truly understand intelligence, artificial or otherwise, is to build it from nothing. 200 lines. No dependencies. From nothing.

For anyone who has ever wanted to understand what a large language model actually is, not what it does, but what it is, this file is the answer. Free. Open source. On GitHub right now. gist.github.com/karpathy/8627f…
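To give a flavor of what "an autograd engine in pure Python" means, here is a rough sketch of the idea in the style of Karpathy's earlier micrograd project (this is my own illustration, not the actual microgpt code): every arithmetic operation records its inputs plus a closure that applies the chain rule, and backward() replays those closures in reverse topological order.

```python
import math

class Value:
    """A scalar carrying its data, its gradient, and how it was produced."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # closure that propagates grad to inputs
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(out)/d(self) = 1, d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # product rule
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def tanh(self):
        t = math.tanh(self.data)
        out = Value(t, (self,))
        def _backward():
            self.grad += (1 - t * t) * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule backwards.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()
```

For example, with c = a * b + a, calling c.backward() gives a.grad = b + 1 and b.grad = a. A real GPT script layers a tokenizer, attention math, Adam, and a training loop on top of exactly this mechanism.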
.@arbitrum Security Council took emergency action to freeze 30,766 ETH held at the Arbitrum One address linked to the @KelpDAO exploit.

The key technical point is how this was executed: it was not a normal transfer signed by the exploiter's key. Based on the on-chain trace, this appears to have been executed from Ethereum (L1) via governance-level emergency upgrade powers. The Upgrade Executor temporarily upgraded DelayedInbox, invoked a temporary entrypoint to enqueue a delayed L1→L2 message via Bridge.enqueueDelayedMessage(kind=3, ...), and then restored the original implementation.

The critical logic change was that the sender input shifted from the standard msg.sender path to a caller-controlled parameter (then transformed via L1→L2 aliasing), allowing the injected message to carry exploiter-linked sender context. Also, kind=3 maps in Nitro to L1MessageType_L2Message, which allows L2MessageKind_UnsignedUserTx execution on L2, i.e., this path does not require a user signature check.

So the L2 transaction view ("from exploiter to 0x…0DA0") reflects a chain-level forced state transition, not a standard user-signed transfer.

TX on L1: app.blocksec.com/phalcon/explor…
TX on L2: app.blocksec.com/phalcon/explor…
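The L1→L2 aliasing step mentioned above is a documented Arbitrum mechanism: when an L1 contract is the sender of a message delivered to L2, Nitro offsets the sender address by a fixed constant so it cannot collide with a real L2 account. A minimal sketch of that transformation, treating addresses as 160-bit integers (the function names here are illustrative, not Nitro's):

```python
# Arbitrum's documented L1-to-L2 address aliasing offset.
ALIAS_OFFSET = 0x1111000000000000000000000000000000001111
ADDRESS_MASK = (1 << 160) - 1  # addresses are 160 bits, arithmetic wraps

def apply_l1_to_l2_alias(l1_address: int) -> int:
    """Sender address as it appears on L2 for an L1-originated message."""
    return (l1_address + ALIAS_OFFSET) & ADDRESS_MASK

def undo_l1_to_l2_alias(l2_alias: int) -> int:
    """Recover the original L1 sender from its L2 alias."""
    return (l2_alias - ALIAS_OFFSET) & ADDRESS_MASK
```

This is why a caller-controlled sender parameter on L1 can surface on L2 as a chosen (aliased) sender context: whatever address is fed into the enqueue path is deterministically mapped to its L2 alias, with no signature involved.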


This 30-minute workshop by the creator of Claude Code will teach you more about vibe-coding than 100 YouTube guides. Bookmark it and give it 30 minutes today; it will change the way you use Claude forever.

