OpenBMB

832 posts

@OpenBMB

OpenBMB (Open Lab for Big Model Base) aims to build foundation models and systems towards AGI. Connect with us: https://t.co/N9pevTnoOa

Joined February 2022
266 Following · 7.1K Followers
Pinned Tweet
OpenBMB
OpenBMB@OpenBMB·
🚀 VoxCPM 2 is live! 🎉 Another open-source AI #TTS model from China — and one that stands shoulder to shoulder with Qwen3-TTS, while bringing everything into a single unified model.

After rapid iterations from V1 (zero-shot cloning) to V1.5 (long-form + fine-tuning), #VoxCPM has consistently pushed quality and usability forward. Now, VoxCPM 2 takes it further:
🔹 30+ languages — truly global, truly local.
🔹 Infinite voice design — type it, hear it, control it. From a whisper to a booming cinematic voice.
🔹 Studio-grade audio — 48 kHz ultra-high fidelity with emotional depth.
🔹 Diffusion-autoregressive cloning — preserves more acoustic and emotional detail than token-based models like Qwen3-TTS.

💡 Big shoutout to @grok — we used your multi-image video magic for our launch demo. It's scarily good at keeping visuals consistent across shots. Elon @elonmusk, this one's for you. 😉

Check the demo & start cloning your dream voice:
🌐 Hugging Face Space: huggingface.co/spaces/openbmb…
🤗 Hugging Face Model: huggingface.openbmb.com/model/openbmb/…
🤖 ModelScope Model: modelscope.cn/models/OpenBMB…
💻 GitHub: github.com/OpenBMB/VoxCPM/

#TTS #AI #VoiceCloning #GrokImagine #ElonMusk #OpenBMB #VoxCPM
OpenBMB
OpenBMB@OpenBMB·
Excited to partner with OpenMAIC to bring VoxCPM2 into real classroom scenarios. Voice cloning meets interactive learning — and it just clicks.😍 @OpenMAIC #VoxCPM2 #VoiceAI #EdTech #AI
Jifan Yu@yujifan_0326

Can you imagine Elon Musk explaining quantum mechanics to you in his unmistakable voice? Or Guo Degang breaking down AI with his signature cross-talk delivery? That kind of immersive experience you used to only daydream about — it's real now! @OpenBMB

📢 Dream collab! OpenMAIC × VoxCPM2 just dropped something huge: immersive voice-powered learning is officially LIVE today! We've brought OpenBMB's VoxCPM2 into OpenMAIC, injecting cutting-edge voice cloning tech right into interactive learning scenarios.

🎙️ One-Click Replication — Cosplay Anything with Sound
The open-source self-hosted version is now deeply integrated with VoxCPM2. Just type a quick prompt and you can nail the signature voices of all kinds of heavyweights — whether it's pro-level commentary from an industry expert or drama-queen-style narration, it's all effortlessly doable.

🗣️ Say Goodbye to "Robotic AI" — NPC Personas Maxed Out
Sick of those soulless, emotionless voices from traditional TTS systems? The era of robotic monotone is over. The system can now auto-match the right vocal style to your preset personas — the dignified tone of an academic legend, the patient guidance of a gentle TA, the comedic flair of a fun influencer… Voice switching is so smooth and natural you'd think a real person were sitting right across from you. No more nodding off in class!

💡 Heads up: these two voice features are currently exclusive to the open-source version!
⏳ A live demo is in the works — you'll be able to access these features online very soon. Stay tuned!

🚀 Come disrupt traditional learning with us:
🏠 Home: open.maic.chat/home
⭐️ GitHub: github.com/THU-MAIC/OpenM…

yutou
yutou@wayutou121621·
Real-time, full-duplex AI is finally here. MiniCPM-o 4.5 isn't just another model — it's a shift in the interaction paradigm. 👀
OpenBMB@OpenBMB

🚀 Excited to announce the technical report of MiniCPM-o 4.5! MiniCPM-o 4.5 transitions #AI interaction from traditional turn-based processing to a real-time, native full-duplex stream-based paradigm.

🌊 The Omni-Flow Framework
Instead of traditional VAD-based workarounds, we introduce the Omni-Flow framework. This unified stream paradigm aligns video, audio, and text on a synchronized millisecond timeline.
• Native full-duplex: simultaneous perception and response.
• Proactive interaction: natively manages turn-taking without external VAD, and supports proactive reminding.

📉 9B Scale, SOTA Performance
MiniCPM-o 4.5 demonstrates SOTA multimodal intelligence at its scale:
• Multimodal benchmarks: comparable to #Gemini 2.5 Flash on MMBench EN (87.6) and MathVista (80.1).
• Streaming evaluation: 54.4% win rate on LiveSports-3K-CC, surpassing specialized models.

💻 The Ultimate Edge AI — Fully Functional Without a Network Connection
We are providing one-click installers for Windows (12 GB VRAM, RTX 5070) and macOS (M1–M5 Max / M5 Pro).
• Local API support: deploy your own inference server to integrate native full-duplex into custom apps.
• Free access: we are offering free community API services for exploration.
• 100% private: your data never leaves your machine. Deploy in under 10 minutes. 🛠️👇

👐 Join the Open Future
The weights are open. The protocol is public.
📄 Technical Report: github.com/OpenBMB/MiniCP…
💻 GitHub: github.com/OpenBMB/MiniCP…
🤗 HuggingFace: huggingface.co/openbmb/MiniCP…
🌐 Web Demo: openbmb.github.io/MiniCPM-o-Demo/

#MiniCPMo #OpenSourceAI #EdgeAI #MachineLearning #ComputerVision #LLM
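A conceptual sketch of the "synchronized millisecond timeline" idea from the announcement: timestamped chunks from several modality streams merged into a single time-ordered stream. This is our own toy illustration — the chunk types, rates, and merge strategy are assumptions, not the actual Omni-Flow implementation:

```python
import heapq
from dataclasses import dataclass

@dataclass
class Chunk:
    t_ms: int       # position on the shared millisecond timeline
    modality: str   # "video", "audio", or "text"

def unified_stream(*streams):
    """Merge per-modality chunk streams into one time-ordered stream.

    Each input must already be sorted by t_ms; heapq.merge is lazy,
    so this also works for unbounded (live) streams.
    """
    return heapq.merge(*streams, key=lambda c: c.t_ms)

# Hypothetical rates: a video frame every 40 ms (25 fps),
# an audio frame every 20 ms, plus one sparse text (ASR) event.
video = [Chunk(t, "video") for t in range(0, 120, 40)]
audio = [Chunk(t, "audio") for t in range(0, 120, 20)]
text = [Chunk(55, "text")]

timeline = [(c.t_ms, c.modality) for c in unified_stream(video, audio, text)]
print(timeline)
```

A full-duplex model would consume such a merged stream continuously while emitting its own output chunks onto the same timeline, rather than waiting for a turn boundary.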

OpenBMB
OpenBMB@OpenBMB·
@kate_osita_ Really appreciate this — you captured it perfectly 😊 Enabling fluid, non-blocking interaction is exactly what we’re aiming for with Omni-Flow. Excited to see this direction resonate, especially for open-source edge AI.
KATE
KATE@kate_osita_·
Most multimodal models still have to pause one sense to activate another. MiniCPM-o 4.5 doing full duplex interaction natively through the Omni-Flow Framework means it actually feels like a fluid conversation. Achieving SOTA performance on an open source edge AI model is a massive step forward.
OpenBMB@OpenBMB

(quoted tweet: the MiniCPM-o 4.5 technical report announcement, shown in full above)

OpenBMB
OpenBMB@OpenBMB·
@dee_naliaks Thanks for sharing 😍 Proactive interaction is a key step toward more natural human-AI interaction — not just responding, but knowing when to act. Exciting to see this working well on local setups too.
Doreen
Doreen@dee_naliaks·
I think the proactive interaction is the most interesting part here. Instead of just waiting for a prompt, the model can actually observe and initiate. Getting this level of omni-modal SOTA performance on local hardware is seriously impressive.
OpenBMB@OpenBMB

(quoted tweet: the MiniCPM-o 4.5 technical report announcement, shown in full above)

OpenBMB
OpenBMB@OpenBMB·
@n__deborah This means a lot 😚 Real-time, continuous interaction is exactly what we’ve been pushing for with Omni-Flow. Great to see open-source models closing the gap.
Deborah
Deborah@n__deborah·
We have been waiting for open source to crack true real time interaction. The Omni Flow protocol letting the model maintain perception while generating output is exactly what we needed. Beating GPT-4o-mini on audio and speech benchmarks proves the gap between local models and closed APIs just got a lot smaller.
OpenBMB@OpenBMB

(quoted tweet: the MiniCPM-o 4.5 technical report announcement, shown in full above)

OpenBMB
OpenBMB@OpenBMB·
@JakesBiko Great take ☺️ Efficiency-first and edge deployment are exactly what we’re optimizing for — making multimodal AI actually usable in real-world settings. Glad this comes through.
Jakes
Jakes@JakesBiko·
MiniCPM-o 4.5 positions itself as an efficiency-first omni model. With real-time decision making and tightly aligned multimodal streams, it’s aiming at practical edge deployment rather than just benchmark chasing.
OpenBMB@OpenBMB

(quoted tweet: the MiniCPM-o 4.5 technical report announcement, shown in full above)

OpenBMB
OpenBMB@OpenBMB·
@ElCopyMaster Really appreciate this — love how you framed it 🙏 Aligning multimodal streams on a single timeline is exactly the direction we’re pushing toward. Exciting to see full-duplex interaction becoming real.
Rafa Gonzalez | IA
Rafa Gonzalez | IA@ElCopyMaster·
MiniCPM-o 4.5 feels like a real step toward native omni-intelligence. Omni-Flow aligns video, audio, text, and speech on one live timeline, so the model can keep seeing, hearing, and speaking without blocking the interaction. The performance edge is just as impressive: 0.109 error rate on OmniDocBench, 54.4% win rate on LiveSports-3K-CC, and local deployment on Mac M-series / Windows with 12GB VRAM. Full-duplex AI is getting serious.
OpenBMB@OpenBMB

(quoted tweet: the MiniCPM-o 4.5 technical report announcement, shown in full above)

OpenBMB
OpenBMB@OpenBMB·
@berryxia Really appreciate the detailed breakdown 🙏 Glad to see the real-time multimodal interaction getting attention 👀 Looking forward to more feedback from the community!
Berryxia.AI
Berryxia.AI@berryxia·
🚀 Major release from OpenBMB! The MiniCPM-o 4.5 technical report is here! A 9B-parameter multimodal model achieving breakthrough real-time interaction:
• Omni-Flow framework: native full-duplex streaming interaction (video + audio + text synchronized at millisecond precision)
• Native full-duplex + proactive interaction (no VAD needed; it can remind you proactively)
• Multimodal benchmarks approaching Gemini 2.5 Flash (MMBench 87.6 / MathVista 80.1)
• Ultimate edge AI: runs fully offline, one-click install on Windows / macOS (12 GB VRAM is enough)
• 100% local privacy + free community API
The era of truly real-time, natural human-computer interaction has arrived! 🔥 Technical report, GitHub, HuggingFace, and web demo are all available.
OpenBMB@OpenBMB

(quoted tweet: the MiniCPM-o 4.5 technical report announcement, shown in full above)

OpenBMB
OpenBMB@OpenBMB·
The heart of real-time interaction is efficiency. ⚡️
OpenBMB
OpenBMB@OpenBMB·
(original post of the MiniCPM-o 4.5 technical report announcement, shown in full above)
OpenBMB
OpenBMB@OpenBMB·
6/6 Join the Paradigm Shift 🌟 OpenMAIC is reimagining education from "passive listening" to "active exploring." Dive into the new v0.2.1 experience today: 🏠 Home: open.maic.chat/home ⭐️ GitHub: github.com/THU-MAIC/OpenM…
OpenBMB
OpenBMB@OpenBMB·
5/6 Whiteboard & UX Refinement 🖌️ We’ve significantly optimized the whiteboard layout to reduce element overlap. We believe in transparency with our community — it’s an ongoing challenge, and you can read our full Tech Report here: github.com/THU-MAIC/OpenM…
OpenBMB
OpenBMB@OpenBMB·
OpenMAIC v0.2.1 is LIVE! 🚀 We’ve been watching the community closely since the v0.2.0 launch, and the creativity has been mind-blowing. 🤯 You asked for a way to share your "playable knowledge" with the world. We listened. Here’s what’s new in our latest sprint: 👇
OpenBMB
OpenBMB@OpenBMB·
AI agents struggle with long-horizon tasks because their memory rules are rigid and hand-crafted (like summarizing every N steps). What if agents could learn exactly when and how to manage their memory? 🤔

Today, we dive into AtomMem — a novel approach by @TsinghuaNLP (OpenBMB member) alongside researchers from Renmin University of China. This paper transforms agentic memory from static pipelines into a learnable, dynamic decision-making process.

🤗 Paper: huggingface.co/papers/2601.08…
📄 arXiv: arxiv.org/abs/2601.08323
💻 Code: github.com/RUCBM/AtomMem

Why it matters:
1️⃣ From static to dynamic: instead of "one-size-fits-all" rules, AtomMem deconstructs memory management into atomic CRUD (Create, Read, Update, Delete) operations. Agents autonomously decide what to keep, fetch, modify, or forget based on the task context at hand. 🧠
2️⃣ Reinforcement-learning powered: using GRPO, the agent learns an end-to-end task-aligned policy. It discovers structured memory strategies natively rather than relying on human priors, bringing an average performance boost of ~9%! 🔄
3️⃣ Hybrid retrieval mechanism: it combines a deterministic "scratchpad" for tracking global state with selective, query-based semantic retrieval from a vector database. The model balances short-term tracking and long-term knowledge! 🧭
4️⃣ SOTA on long-context & web tasks: AtomMem consistently outperforms static memory methods across HotpotQA, 2WikiMultihopQA, Musique, GAIA, and WebWalkerQA. It even remains robust when scaling up to 800 noisy documents, conquering information overload! 🚀

AtomMem breaks the shackles of fixed memory pipelines, granting AI agents true autonomy over their knowledge. Read the full paper to see how dynamic memory evolves!

#AI #THUNLP #OpenBMB #LLM #Agents #ReinforcementLearning #MachineLearning
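The "atomic CRUD" framing in point 1 can be pictured as a tiny memory store whose four operations a learned policy (trained with GRPO, per the thread) would choose among at each agent step. A minimal sketch under our own assumptions — the retrieval here is a naive substring match, not the paper's semantic retrieval:

```python
class AtomicMemory:
    """Toy memory store exposing the four atomic CRUD operations.

    In an AtomMem-style agent, a learned policy would decide which
    operation to apply at each step; here we only model the operations.
    """

    def __init__(self):
        self._store = {}

    def create(self, key, value):
        self._store[key] = value          # keep a new fact

    def read(self, query):
        # Stand-in for semantic retrieval: naive substring match.
        return [v for v in self._store.values() if query.lower() in v.lower()]

    def update(self, key, value):
        if key in self._store:
            self._store[key] = value      # revise a tracked fact

    def delete(self, key):
        self._store.pop(key, None)        # forget irrelevant detail

mem = AtomicMemory()
mem.create("goal", "Book a Beijing flight for Friday")
mem.create("step1", "Found three candidate flights")
mem.update("goal", "Book a Beijing flight for Saturday")  # plans changed
mem.delete("step1")                                       # no longer needed
print(mem.read("flight"))
```

The interesting part in the paper is not the store itself but that the choice among these four operations is learned end-to-end rather than scripted.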
OpenBMB reposted
Jifan Yu
Jifan Yu@yujifan_0326·
Remember that DNA simulation? Meet OpenMAIC Deep Interactive Mode, turning passive slides into playable knowledge. A special release for 16,000 stars ⭐️!

Powered by our new GenUI Engine, a single click transforms any lesson into 5 interactive dimensions:
🧬 Simulation — drag & interact with real processes
🗺️ Mind maps — visualize knowledge structures dynamically
🔬 3D visualization — rotate & explore from molecules to the cosmos
🎮 Gamified challenges — learn through play, not rote memorization
💻 Online programming — write & execute code in real time, zero setup

The magic? Zero coding needed. Describe your idea in natural language, and a full interactive lesson generates in moments.

From passive listening to active exploring. Dive in and build the future of education with us.
🔗 Live Demo: open.maic.chat
⭐️ GitHub: github.com/THU-MAIC/OpenM…
OpenBMB
OpenBMB@OpenBMB·
Retrieval-Augmented Generation (RAG) mitigates hallucinations, but what happens when key evidence is scattered across multiple noisy documents? LLMs easily get lost and fail at complex multi-hop reasoning. 🤔

Today, we dive into GraphAnchor — a novel approach by @TsinghuaNLP (OpenBMB member) alongside researchers from Northeastern University and BUPT. This paper reconceptualizes graph structures from static knowledge representations into active, evolving indices for RAG.

🤗 Paper: huggingface.co/papers/2601.16…
📄 arXiv: arxiv.org/abs/2601.16462
💻 Code: github.com/NEUIR/GraphAnc…

Why it matters:
1️⃣ From static to dynamic: traditional RAG uses graphs as passive information filters. GraphAnchor flips this by creating a dynamically evolving graph. During iterative retrieval, it constantly "anchors" key entities and relations, building a structured index on the fly. 🕸️
2️⃣ Information sufficiency judgment: the anchored graph acts as a navigation tool. It guides the LLM to reason explicitly with dedicated tags, evaluate whether its knowledge is sufficient, and generate precise follow-up queries. The model always knows what it has and what it lacks! 🧭
3️⃣ Linearized & incremental: LLMs read text, not node visualizations. GraphAnchor linearizes the graph into a readable text format (entities + relation triplets). Instead of rebuilding the graph every time, it performs incremental updates conditioned on previous reasoning paths, drastically saving overhead. 📝
4️⃣ SOTA multi-hop performance: GraphAnchor dominates complex QA tasks! On Qwen2.5-7B, it achieves an average F1 of 48.63 and an Exact Match of 38.00 across four multi-hop benchmarks — outperforming the strong baseline (DeepNote) by over +5 points. 🚀

GraphAnchor turns knowledge graphs from passive containers into active reasoning navigators. Read the full paper to see how we conquer multi-hop RAG!

#AI #THUNLP #OpenBMB #LLM #RAG #KnowledgeGraphs #MachineLearning
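The "linearization" step in point 3 (turning the anchored graph into text an LLM can read) boils down to serializing entities plus (head, relation, tail) triplets. A minimal sketch; the entity names and output format here are illustrative assumptions, not GraphAnchor's actual serialization:

```python
def linearize(entities, triplets):
    """Serialize a graph into LLM-readable text: one entity-list line,
    then one '(head) -[relation]-> (tail)' line per edge."""
    lines = ["Entities: " + ", ".join(sorted(entities))]
    lines += [f"({h}) -[{r}]-> ({t})" for h, r, t in triplets]
    return "\n".join(lines)

entities = {"Marie Curie", "Pierre Curie", "Radium"}
triplets = [
    ("Marie Curie", "spouse_of", "Pierre Curie"),
    ("Marie Curie", "discovered", "Radium"),
]
print(linearize(entities, triplets))
```

An incremental update then just appends newly anchored entities and triplets to this text instead of re-serializing the whole graph each retrieval round.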