
Tarun Chitra
@tarunchitra
ヽ(⌐■_■)ノ♪♬ @gauntlet_xyz/@robotventures/@aerafinance/@thelatestindefi/@_choppingblock/@zeroknowledgefm // main: @guilleangeris

Many people have asked me why Morpho has not been more aggressive in positioning our model against existing ones over the past month. Recent events have made the case for Morpho obvious: permissionless, isolated lending markets are the only architecture that can scale onchain finance safely.

That said, crypto sadly has a reputation for being brutal, toxic, and full of grave dancing. But this is not us. It’s not the brand we want to build with Morpho. Nor is it the impression we want people to have of crypto.

A brand is defined by the actions you take, in the eyes of the people who matter most to you. And most people who matter to us already understand why Morpho is different… and if they did not, we explained in private, not on Twitter.

We have deep confidence in ourselves, our model, and our mission. We don't need to tear others down to prove it.

Which are the most common everyday phenomena that we don't properly understand? Off the top of my head:
• Lightning (how does it happen?)
• Sleep; dreams (why do they exist?)
• Glass (thermodynamics of formation)
• Turbulence (when does it start?)
• Morphogenesis (how does a creature know what should go where?)
• Rain (it seems to start faster than models would predict)
• Ice (dynamics of slipperiness)
• Static electricity (which material will donate electrons?)
• General anaesthetic. (And the mechanism of a lot of drugs, e.g. paracetamol.)

First OpenClaw. Now Hermes 😭
Hermes agent (open source, months old):
- reviews past conversations
- builds skills from experience
- persistent cross-session memory
- gets smarter the longer it runs
Claude "dreaming" (shipped today):
- reviews past sessions
- extracts patterns
- persistent memory
- gets smarter over time
Next on the watchlist: whatever Hermes or OpenClaw ships in Q3.
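For intuition, "persistent cross-session memory" can be as simple as an append-only store that survives process restarts. The sketch below is purely hypothetical (a JSON file named `agent_memory.json`), not Hermes' or Claude's actual mechanism:

```python
# Hypothetical sketch of persistent cross-session agent memory:
# notes are appended to a JSON file and reloaded at startup.
# (Illustrative only -- not how Hermes or Claude implement this.)
import json
import os

MEMORY_PATH = "agent_memory.json"  # assumed store location

def load_memory():
    """Reload all notes recorded by earlier sessions."""
    if os.path.exists(MEMORY_PATH):
        with open(MEMORY_PATH) as f:
            return json.load(f)
    return []

def remember(note):
    """Append one note so future sessions can see it."""
    memory = load_memory()
    memory.append(note)
    with open(MEMORY_PATH, "w") as f:
        json.dump(memory, f)

# Session 1: record a learned pattern
remember("user prefers concise answers")
# Session 2 (fresh process): the note is still there
assert "user prefers concise answers" in load_memory()
```

The "gets smarter over time" behavior then comes from feeding these stored notes back into the agent's context on each new run.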

Fully homomorphic encryption was first proposed in the late 1970s. Why wasn't it adopted sooner? A 100,000x slowdown, driven by memory boundedness. Ajay Joshi from @CipherSonicAI explains how his team got it down to less than 2x. (If this pattern sounds familiar... LLM inference is memory-bound too. It's why wafer-scale exists.)
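The homomorphic property itself is easy to see in a toy setting: textbook RSA (tiny parameters, no padding, insecure) already lets you multiply two ciphertexts and get an encryption of the product. FHE generalizes this to arbitrary computation, at the performance cost described above. A minimal sketch:

```python
# Toy illustration of the homomorphic property that FHE generalizes.
# Textbook RSA is multiplicatively homomorphic:
#   Enc(a) * Enc(b) mod n == Enc(a * b)
# Toy parameters, no padding -- insecure, for intuition only.

p, q = 61, 53
n = p * q                  # modulus, 3233
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 12
c = (enc(a) * enc(b)) % n       # multiply ciphertexts only...
assert dec(c) == (a * b) % n    # ...yet it decrypts to a * b
```

Fully homomorphic schemes support both addition and multiplication on ciphertexts, which is what makes arbitrary circuits (and the associated slowdown) possible.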



D² brings together research on DeFi protocol design, incentives, and market behavior. Across two days, the program spans:
→ DeFi Microstructure
→ Perpetual Futures & Derivatives
→ Mechanism Design
→ Prediction Markets
→ AMMs
Learn more: designingdefi.xyz


Happy to announce that Hermes Agent's repo just surpassed Anthropic's Claude Code repo



23 years old with no advanced mathematics training solves Erdős problem with ChatGPT Pro. “What’s beginning to emerge is that the problem was maybe easier than expected, and it was like there was some kind of mental block.” – Terence Tao scientificamerican.com/article/amateu…




In surprising news, GPT5.4 Pro just found a solution to Erdős Problem #1196. Now Gauss has formalized the proof of #1196! The initial proof was 7.2K lines of Lean, done in ~5 hours. Subsequent golfing has compressed it down to 4K lines. (sorry-free, with comparator check)
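For readers unfamiliar with the jargon: a Lean proof is "sorry-free" when no goal is closed by the `sorry` placeholder, which Lean accepts only with a warning. A minimal Lean 4 illustration (unrelated to the #1196 proof itself):

```lean
-- A complete, sorry-free proof: every goal is actually closed.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- By contrast, a proof skeleton that still contains `sorry`
-- compiles only with a warning and proves nothing:
-- theorem stub (a b : Nat) : a + b = b + a := sorry
```

"Golfing" a formal proof means shortening it (here from 7.2K to 4K lines) while keeping it sorry-free, so the compressed version still fully verifies.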


