
Shanda Group
@shanda_group
Shanda is a private investment firm owned by husband-and-wife team Tianqiao Chen and Chrissy Luo.

Final chapter of Memory Genesis 2026 is here.
April 4 · Computer History Museum · Mountain View
Finalist teams demo live. 30+ judges score in real time. $80K+ in prizes. Keynote on agents, reasoning & memory. 3 expert panels. Awards ceremony.
The future of AI memory gets built on stage.
luma.com/v5rfi2zu

Getting ranked #1 Paper of the Day on Hugging Face is nice. But that's not the point.

The point is this: most AI today is optimized for conversation. Very little of it is built to solve real problems. And the gap is not just scale. It's verification. Without verification, AI doesn't produce answers. It produces guesses that sound convincing.

That's the problem we've been working on with MiroThinker:
• long-horizon reasoning
• structured planning
• verification at both local and global levels

Not a chatbot, but a system designed to actually work in science, finance, and other high-stakes domains.

Still early. Still imperfect. But this is a direction we believe matters. Curious how others are thinking about verification in AI systems.

lnkd.in/ggpcPYRK
#AI #AgenticAI #Reasoning #AIResearch #MiroThinker
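The post doesn't show MiroThinker's internals, but the generate-then-verify idea it describes can be sketched generically. This is a minimal, hypothetical loop (the function names and the toy task are illustrative assumptions, not the actual system): candidates pass a local per-step check and a global check on the final result before being returned; otherwise the agent retries rather than emitting a confident guess.

```python
# Hypothetical sketch of a verification loop (not MiroThinker's actual code).
# An answer is returned only if every intermediate step passes a local check
# AND the final result passes a global check against the task.

def solve(task, generate, verify_step, verify_result, max_attempts=5):
    """Return the first candidate whose steps and result all verify, else None."""
    for _ in range(max_attempts):
        steps, result = generate(task)
        if all(verify_step(s) for s in steps) and verify_result(task, result):
            return result
    return None  # no verified answer is better than a convincing guess


# Toy usage: the "task" is summing a list; the generator is wrong on its
# first attempt, so verification forces a retry.
attempts = {"n": 0}

def generate(task):
    attempts["n"] += 1
    total = sum(task) + (1 if attempts["n"] == 1 else 0)  # off by one at first
    return [("sum", total)], total

result = solve(
    [1, 2, 3],
    generate,
    verify_step=lambda s: isinstance(s[1], int),      # local check
    verify_result=lambda task, r: r == sum(task),     # global check
)
# result is 6: the first (wrong) candidate was rejected, the retry verified
```

The design point is the asymmetry the post alludes to: generation can be unreliable as long as verification is strict, because failed candidates never reach the caller.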

MSA (Memory Sparse Attention) → scaling to 100M tokens → addressing the long-standing trade-offs between scalability, precision, and efficiency.

What this points to is not just longer context, but a shift from "context as input" to memory as a system. This is still early, but if this direction holds, it could fundamentally change how we think about intelligence in AI.

This work comes from EverMind, our sister team to MiroMind, focused on long-term memory.

lnkd.in/gAxneARM
#AI #Memory #LongContext #Architecture #EverMind #MiroMind
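The post doesn't describe MSA's actual mechanism, so here is a hedged illustration of the general sparse-attention idea behind such scaling claims: each query attends only to a local window of recent keys plus a few designated "memory" tokens, instead of all n keys, so cost grows with the window size rather than with n². All names and the window/memory layout below are assumptions for illustration only.

```python
import numpy as np

# Generic sparse-attention sketch (illustrative, NOT the actual MSA design).
# Query i attends to keys in [i - window, i] plus fixed "memory" positions,
# so per-query work is O(window + |memory|) instead of O(n).

def sparse_attention(Q, K, V, window=4, memory_idx=(0,)):
    n, d = Q.shape
    out = np.zeros_like(V)
    for i in range(n):
        lo = max(0, i - window)
        idx = sorted(set(range(lo, i + 1)) | set(memory_idx))  # local + memory
        scores = Q[i] @ K[idx].T / np.sqrt(d)   # scaled dot-product scores
        w = np.exp(scores - scores.max())       # numerically stable softmax
        w /= w.sum()
        out[i] = w @ V[idx]                     # weighted sum of selected values
    return out

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(16, 8))
out = sparse_attention(Q, K, V)  # shape (16, 8), each row a convex combo of V rows
```

The trade-off the post names shows up directly here: a larger window or more memory tokens buys precision at the cost of efficiency, and which tokens get promoted to "memory" is exactly where a memory-as-a-system design would differ from plain windowed attention.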

Big news: three world-class scientists join MiroMind's core research team.
🔬 Dr. @SimonShaoleiDu (ex-xAI, FAIR, Princeton) → Reasoning Models & Training
🤖 Prof. Bo An (NTU) → Runtime & Agent Systems
✅ Dr. @KaiyuYang4 (ex-Meta FAIR, Caltech) → Verifiable AI Lab

Our goal isn't to build a more eloquent AI. It's to build one that's provably right.
📎 [prnewswire.com/news-releases/…]

🚀 Introducing MiroThinker-1.7 & MiroThinker-H1

Today, we release the latest generation of our research agent family: MiroThinker-1.7 and MiroThinker-H1. Our goal is simple but ambitious: move beyond LLM chatbots to build heavy-duty, verifiable agents capable of solving real, critical tasks. Rather than merely scaling interaction turns, we focus on scaling effective interactions, improving both reasoning depth and step-level accuracy.

Key highlights:
🧠 Heavy-duty reasoning designed for long-horizon tasks
🔍 Verification-centric architecture with local and global verification
🌐 State-of-the-art performance on the BrowseComp / BrowseComp-ZH / GAIA / Seal-0 research benchmarks
📊 Leading results across scientific and financial evaluation tasks

Explore MiroThinker:
Hugging Face: huggingface.co/collections/mi…
GitHub: github.com/MiroMindAI/Mir…


AGI isn't about sounding right. It's about being right in the real world.

I frame this as "Liberal Arts LLM" (simulation) vs. "Science LLM" (discovery), where causality and verification loops matter most.

🎥 youtu.be/IFR6zeM5tUQ?si…
📄 Text version: lnkd.in/gB_GNaMi

