
Andy Z
@AndyZ_Tech
Tech-stock investor/analyst since 1998. Portfolio Manager & former banker. NYC roots, SoCal life. Yankees & Giants loyalist. Opinions my own.


Two Turing-class AI researchers just raised $2B in three weeks to bet against every LLM company on the planet. Fei-Fei Li closed $1B for World Labs on February 18. LeCun closed $1.03B for AMI Labs today. Both building world models. Both arguing that the entire generative AI paradigm is a statistical parlor trick. And the investor overlap tells you this is coordinated conviction, not coincidence. Nvidia backed both. So did Sea and Temasek.

The math on AMI is absurd. $3.5B pre-money valuation. Four months old. Zero product. Zero revenue. The CEO said on the record that AMI won’t ship a product in three months, won’t have revenue in six, and won’t hit $10M ARR in twelve. He described it as a long-term scientific endeavor. Investors gave him a billion dollars anyway.

This tells you everything about how the smart money is actually modeling AI’s future. They’re not pricing AMI on a revenue multiple. They’re pricing it on the probability that LLMs hit a ceiling. And look at the investor list: Nvidia, Samsung, Toyota Ventures, Dassault, Sea. These are companies that need AI to understand physics, geometry, and force dynamics. A language model that can write poetry is worthless to a robotics company trying to predict what happens when a mechanical arm applies 12 newtons at a 30-degree angle to a flexible surface.

LeCun raided his own lab to build this: Mike Rabbat, Meta’s former research science director; Saining Xie from Google DeepMind; Pascale Fung, senior director of AI research at Meta. He walked into Zuckerberg’s office in November, told him he was leaving, and four months later half of FAIR works for him. Meta is reportedly partnering with AMI anyway, which means Zuckerberg thinks LeCun might be right even while Meta keeps scaling Llama.

AMI’s first partner is Nabla, a medical AI company building toward FDA-certifiable agentic AI. That’s the use case that makes world models existential. LLMs hallucinate. In healthcare, hallucinations kill people. You can’t prompt-engineer your way out of a model that generates statistically plausible text when you need a system that actually understands how a human body works.

Two billion dollars in three weeks. Two of the most credentialed researchers alive. And a thesis that says the $100B+ already poured into scaling LLMs is optimizing the wrong architecture entirely. If they’re wrong, investors lose money. If they’re right, every company building on top of GPT and Claude for physical-world applications just bought the wrong foundation.
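The 12-newtons-at-30-degrees example in the thread is worth making concrete. The rigid-body statics part is one line of trigonometry; a minimal sketch (`force_components` is a hypothetical helper, not from any library mentioned in the thread). The point the thread makes is that the hard part, how the flexible surface actually deforms under that load, has no simple closed form, which is exactly what world models are supposed to learn:

```python
import math

def force_components(magnitude_n: float, angle_deg: float) -> tuple[float, float]:
    """Decompose a force applied at angle_deg (measured from the surface plane)
    into the component pressing into the surface (normal) and the component
    sliding along it (tangential)."""
    theta = math.radians(angle_deg)
    return magnitude_n * math.sin(theta), magnitude_n * math.cos(theta)

# 12 N applied at 30 degrees to the surface:
normal, tangential = force_components(12.0, 30.0)
# normal = 6.0 N pressing in, tangential ≈ 10.39 N along the surface
```

Predicting what the surface does next (buckle, rebound, tear) depends on its material properties and geometry, which is why the investor list skews toward robotics and manufacturing.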

Gemini 3.1 Flash-Lite is the fastest and most cost-efficient Gemini 3 series model⚡️ It outperforms 2.5 Flash with a 2.5X faster Time to First Answer Token and a 45% increase in output speed, at a fraction of the cost of larger models!

Fix your OpenClaw with this one simple trick

craziest thing ive ever read: School is way worse for kids than social media open.substack.com/pub/unpublisha…


In this amazing multidisciplinary collaboration, we report our early experience with the @openclaw ->

$50B of Indian IT services market value was eroded in the last 30 days, and the Citrini article predicts it will collapse even more.

Nifty IT index: -15%
Wipro: -25%
Infosys: -25%
TCS: -17%
Cognizant: -24%
HCL: -17%
Accenture: -25%
Capgemini: -30%
LTIMindtree: -25%
Tech Mahindra: -18%
Mphasis: -20%

Palantir claims it can compress complex SAP ERP migrations (ECC to S/4) from years to 2 weeks. GCCs (global capability centers, i.e. companies owning their own offshore IT departments in India) with Claude Cowork are far more economical than multi-year IT services contracts. I do think the 18% rupee-collapse call is exaggerated, though. But the IT services business model absolutely breaks at the current capability of AI tooling, and it's ~10% of Indian GDP.

after a week upgrading and tweaking my @openclaw, here's what changed. this article I wrote is still the best starting point, I used it myself to set mine up. it gives you a step-by-step guide for a safe, locked-down openclaw. minimum risk. but if you want a useful one, you need to go further. here's what I did:

1/ switched to GLM-5 via @Zai_org yearly pro plan ($250/yr). benchmarks comparable to Opus 4.5. they give you an API key that plugs straight into openclaw. flat cost, no token monitoring.

2/ installed Claude Code + Happy Coder: you can code on your Mac Mini from your phone. Separate from OpenClaw but part of the overall setup.

3/ it builds tools and projects overnight based on our conversations, then presents them in my 7am morning briefing on Telegram.

4/ it's accumulating a knowledge library from every research session. the more it knows, the better the next session gets. it remembers everything.

I genuinely see my bot getting sharper every day. it's starting to understand what I actually want before I ask.

biggest lesson from this week: I learned more by actually setting this up and breaking things than from all the X articles I read before starting. if you think you don't have the technical knowledge to run one, you're wrong. I didn't either. just start.
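The "7am morning briefing on Telegram" step above is, at its core, a scheduled script posting to the Telegram Bot API's `sendMessage` endpoint. A minimal sketch, assuming you've created a bot and have its token and a chat id (`build_briefing` and `send_telegram` are hypothetical names, not part of OpenClaw or any tool named in the post):

```python
import json
from urllib import request

def build_briefing(overnight_results: list[str]) -> str:
    """Format overnight project summaries into one briefing message."""
    lines = ["Morning briefing:"] + [f"- {item}" for item in overnight_results]
    return "\n".join(lines)

def send_telegram(token: str, chat_id: str, text: str) -> None:
    """POST the briefing to the Telegram Bot API sendMessage endpoint.
    Requires a real bot token from @BotFather; this will fail offline."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    payload = json.dumps({"chat_id": chat_id, "text": text}).encode()
    req = request.Request(url, data=payload,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)

# Example: a cron entry like `0 7 * * *` would run this each morning.
message = build_briefing(["shipped research digest", "built price-alert tool"])
```

The scheduling itself can be a plain cron job or launchd task on the Mac Mini; the agent only needs to drop its overnight results somewhere the script can read them.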

Apple has just published a paper with a devastating title: *The Illusion of Thinking*. And it's not a metaphor. What it demonstrates is that the AI models we use every day - yes, ones like ChatGPT - don't think. Not one bit. They just imitate doing so. Let me explain: 🧵👇
