Liam Liang Ding

485 posts


@liangdingNLP

building agentic ai @AlibabaGroup & ex-@AIstartup @JD_Corporate @TencentGlobal @Sydney_Uni. opinions are my own.

Sydney, New South Wales · Joined October 2018
1.9K Following · 842 Followers
Enze Xie @xieenze_jr
Milestone alert 🎉 My first-authored paper SegFormer (NeurIPS 2021) just hit 10,000+ citations! This early ViT-based semantic segmentation project started as an internship project at NVIDIA back in 2020. We had a rocky submission journey: initial reviewer scores were underwhelming, and it took multiple rounds of rebuttal and revision to finally get it accepted. Six years later, watching it gain this much traction feels surreal. Fun bonus: I’m still at NVIDIA today, keeping the research fire burning. Grateful for every citation and all the peer support. Onward and upward to more impactful work!
7 replies · 1 repost · 164 likes · 11.1K views
Haiyang Xu @xuhaiya2483846
🔥 Tongyi Lab releases Mobile-Agent-v3.5, with SOTA results on 20+ GUI benchmarks: (1) GUI automation: 56.5 OSWorld, 71.6 AndroidWorld, and 48.4 WebArena; (2) Grounding: 80.3 ScreenSpot-Pro; (3) Tool-calling: 47.6 OSWorld-MCP; (4) Memory/Knowledge: 75.5 GUI-KnowledgeBench. 🌐 Project: github.com/X-PLUG/MobileA…
2 replies · 3 reposts · 10 likes · 543 views
Jungo Kasai (Kotoba) @jungokasai
Simultaneous translation has been a dream of mine since my PhD days back in 2017, when I was learning how to write papers. As a startup founder, I learned how to build great demos. Now I’m shipping models into our products within days, straight into users’ hands. What a time to be alive!
Kotoba @kotoba_tech

The AI simultaneous-interpretation experience has taken another big leap. We have released Kotoba Tech's speech foundation model "Koto v1.0" in our "同時通訳" (Simultaneous Interpretation) app 🚀, bringing us closer to an experience where a personal simultaneous interpreter lives in your pocket, on a single smartphone. Koto is a generative AI model built around an end-to-end concept: it translates speech directly into speech without going through text, delivering a very low-latency, high-accuracy simultaneous-interpretation experience. This update greatly improves English-to-Japanese performance in particular: on Kotoba's benchmarks, accuracy improved by more than 50% and average latency was reduced by more than one second. The 同時通訳 app is free to try, and an additional 3-day free trial is available before the paid plan. We hope Koto can serve as your personal simultaneous interpreter.

1 reply · 0 reposts · 20 likes · 2.5K views
Liam Liang Ding @liangdingNLP
Setting temperature = 0 ❄️🇮🇸
0 replies · 0 reposts · 4 likes · 138 views
MikaStars★ @MikaStars39
The white dog is coming! Happy to share that we have released JoyAI-LLM Flash via JD OpenSource, a state-of-the-art instruction model based on the Mixture-of-Experts (MoE) architecture. Model weights are now available on @huggingface!

🤗 Huggingface (instruct model): huggingface.co/jdopensource/J…
🤗 Huggingface (base model): huggingface.co/jdopensource/J…

The power of JoyAI-LLM Flash stems from deep innovations across the entire training pipeline:

1. Pre-training: Advanced MoE Architecture. JoyAI-LLM Flash utilizes a sophisticated Mixture-of-Experts (MoE) architecture with 256 experts (8 selected per token) and MLA attention to achieve high-density knowledge representation across a 128K context window while maintaining extreme inference efficiency.

2. Mid-training: Domain-Specific Enhancement. During mid-training, we curated and synthesized massive, high-quality datasets focusing on coding, agentic reasoning, and long-context understanding to significantly bolster the model's specialized capabilities.

3. Post-training: Expert Alignment. The model undergoes large-scale SFT and DPO to ensure seamless alignment with complex human instructions and robust autonomous problem-solving in agentic environments.

4. Fiber Bundle RL. Introduces fiber bundle theory into reinforcement learning, proposing a novel optimization framework, FiberPO. This method is specifically designed to handle the challenges of large-scale and heterogeneous agent training, improving stability and robustness under complex data distributions.

Deployment: With MTP, we achieved a 1.3× to 1.7× throughput increase over standard versions, and the model is fully optimized for production-ready deployment via vLLM and SGLang.

The technical report is coming soon!
24 replies · 46 reposts · 277 likes · 53.5K views
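The gating described in point 1 above (256 experts, 8 selected per token) is standard top-k expert routing for MoE layers. A minimal NumPy sketch of that routing step, under those assumptions — the function name `moe_route` and the random weights are illustrative, not JoyAI-LLM's actual implementation:

```python
import numpy as np

def moe_route(hidden, gate_w, top_k=8):
    """Top-k MoE routing: score all experts, keep the best top_k.

    hidden: (d,) token hidden state
    gate_w: (d, num_experts) router weight matrix
    Returns (expert indices, softmax weights over the selected experts).
    """
    logits = hidden @ gate_w                         # (num_experts,) router scores
    top = np.argpartition(logits, -top_k)[-top_k:]   # indices of the top_k experts
    w = np.exp(logits[top] - logits[top].max())      # stable softmax over selection
    w /= w.sum()                                     # weights used to mix expert outputs
    return top, w

# Toy usage: one 16-dim token routed over 256 experts, 8 selected.
rng = np.random.default_rng(0)
idx, weights = moe_route(rng.normal(size=16), rng.normal(size=(16, 256)))
print(len(idx), float(weights.sum()))
```

Each token's output is then the weighted sum of only the 8 selected experts' outputs, which is how such a model keeps per-token compute far below its total parameter count.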
Liam Liang Ding @liangdingNLP
Milan moments🇮🇹
3 replies · 0 reposts · 4 likes · 141 views