SGLang
26 posts

SGLang @sgl_project
SGLang project https://t.co/2wrCfYIlBz. This is an alias account for SGLang; please follow @lmsysorg.
Palo Alto · Joined May 2025
8 Following · 1.5K Followers
SGLang reposted
LMSYS Org @lmsysorg ·
SGLang now runs natively on TPU with a new pure JAX backend! SGLang-Jax leverages SGLang's high-performance server architecture and uses JAX to compile the model's forward pass. By combining SGLang and JAX, it delivers fast, native TPU inference while maintaining support for advanced features like continuous batching, prefix caching, parallelism, speculative decoding, and highly optimized TPU kernels. Learn more in the blog below👇
4 replies · 21 reposts · 104 likes · 59.3K views
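For readers curious what "using JAX to compile the model's forward pass" means in practice, here is a toy sketch: the forward pass is written as a pure function, jax.jit traces it once, compiles it to XLA, and reuses the compiled version for every later batch. The forward() below is a stand-in for a real transformer and is not SGLang-Jax code.

# Conceptual sketch of a jit-compiled forward pass (not SGLang-Jax code).
import jax
import jax.numpy as jnp

@jax.jit  # traced once, compiled to XLA, cached for later calls
def forward(params, token_ids):
    # Stand-in for a transformer forward pass: embed, transform, project.
    x = params["embedding"][token_ids]  # (batch, seq, hidden)
    x = jnp.tanh(x @ params["w"])       # placeholder for attention/MLP layers
    return x @ params["embedding"].T    # logits over the vocabulary

key = jax.random.PRNGKey(0)
params = {
    "embedding": jax.random.normal(key, (1000, 64)),  # vocab=1000, hidden=64
    "w": jax.random.normal(key, (64, 64)),
}
logits = forward(params, jnp.array([[1, 2, 3]]))  # first call compiles; later calls are fast
print(logits.shape)  # (1, 3, 1000)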
SGLang reposted
LMSYS Org @lmsysorg ·
⚡ Zero-overhead scheduler for speculative decoding ⚡ When your GPUs are running LLM inference, unoptimized software wastes a huge amount of time on CPU overhead such as kernel launches and metadata bookkeeping. SGLang has been pioneering a zero-overhead CPU runtime for LLM inference since last year. Now we have also carefully tuned the scheduler for speculative decoding and are seeing a 10%-20% speedup across the board. This improvement has been tested by the @googlecloud Vertex AI team, and we welcome more people to join our development. See the roadmap below ⬇️
1 reply · 21 reposts · 125 likes · 51.4K views
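The general idea behind hiding CPU overhead is to prepare batch N+1 on the CPU while the GPU executes batch N, so the GPU never idles waiting for scheduling work. A minimal sketch of that overlap pattern follows; the sleeps stand in for real CPU prep and GPU kernel time, and this is not SGLang's actual scheduler.

# Conceptual sketch of overlapped scheduling: while the "GPU" runs step N,
# the CPU prepares metadata for step N+1. Not SGLang's actual scheduler.
import threading
import queue
import time

def prepare_batch(step):
    time.sleep(0.002)  # stand-in for CPU work: batching, metadata, kernel args
    return f"batch-{step}"

def run_on_gpu(batch):
    time.sleep(0.010)  # stand-in for GPU kernel execution

ready = queue.Queue(maxsize=1)

def scheduler(num_steps):
    for step in range(num_steps):
        ready.put(prepare_batch(step))  # runs one step ahead of the GPU

threading.Thread(target=scheduler, args=(100,), daemon=True).start()
start = time.time()
for _ in range(100):
    run_on_gpu(ready.get())  # GPU never waits: CPU prep overlaps with execution
print(f"elapsed: {time.time() - start:.2f}s")  # ~1.0s (GPU-bound), not ~1.2s (serial)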
SGLang reposted
Lianmin Zheng @lm_zheng ·
This feature has finally been merged into the main branch. The team has been battling the PyTorch memory allocator and CUDA stream management for ages to iron out all the dependencies and race conditions.
Quoting LMSYS Org @lmsysorg: "⚡ Zero-overhead scheduler for speculative decoding ⚡ …" (full text quoted above)
5 replies · 17 reposts · 218 likes · 23.3K views
SGLang reposted
LMSYS Org @lmsysorg ·
🚀 SGLang Model Gateway v0.2 Drops 🚪 SGL-Router pioneered cache-aware routing last year. Now it has been fully rebuilt and renamed "SGLang Model Gateway", with extreme performance and many more features. Core upgrades:
- Multi-Model Inference Gateway (IGW) Mode: run multi-model fleets under one gateway with custom policies, health checks, load balancing, and flexible prefill-decode disaggregation.
- Rust gRPC powered: bypasses the slow Python and HTTP runtime; extremely fast streaming, OpenAI-compatible APIs, cached tokenization! 🔥
- Pluggable storage & MCP: flexible history (memory/oracle) + seamless tool integration + response API.
- Reliability boost: retries, metrics, tracing, all in.
Your unified control plane for reasoning agents & enterprise LLMs. Backward compatible, so migration is easy! This is a huge contribution from the @Oracle team, led by Simo @hello_slin, Chang @ccskookie, Keyang @key4ng.
6 replies · 18 reposts · 102 likes · 35.9K views
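Because the gateway exposes OpenAI-compatible APIs, any stock OpenAI client should work against it. A minimal sketch, assuming a gateway already running locally; the base URL, port, and model name below are illustrative placeholders, not values from the announcement.

# Minimal OpenAI-compatible client call against a locally running gateway.
# The base_url/port and model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:30000/v1",  # placeholder gateway address
    api_key="not-needed-locally",          # local deployments often ignore the key
)

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",  # any model the gateway is fronting
    messages=[{"role": "user", "content": "Hello from the gateway!"}],
)
print(response.choices[0].message.content)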
SGLang reposted
LMSYS Org @lmsysorg ·
Join the PyTorch Conference today to learn more about the latest progress from SGLang:
- Optimizing long-tail and MoE challenges in RL
- General large-scale inference optimization and deployment
PyTorch @PyTorch
The Reinforcement Learning track at #PyTorchCon highlights new directions for RL with #PyTorch. Hear Chenyang Zhao (UCLA) on optimizing long-tail and MoE challenges in RL with SGLang, and Daniel Han (Unsloth) on maximizing luck in reinforcement learning. 🔗 Explore sessions: hubs.la/Q03NCZZS0
0 replies · 3 reposts · 15 likes · 2.9K views
SGLang reposted
LMSYS Org @lmsysorg ·
We're excited to announce the collaboration between KTransformers and SGLang! KTransformers has been a killer for local AI inference with its system-algorithm co-design, often showing 5x-10x speedups. This integration equips SGLang with KTransformers' inference strategy and kernels, specifically optimized for MoE models. Combined with SGLang's native multi-GPU scaling, the solution can be seamlessly extended to serve much larger workloads. ⬇️ Learn more in our tech blog below
1 reply · 15 reposts · 86 likes · 29.3K views
SGLang reposted
LMSYS Org @lmsysorg ·
Exciting updates on DGX Spark: now you can run gpt-oss-20b at 70 tokens/s with SGLang! This is 1.4x faster than what we got in our blog last week. We worked with the @NVIDIAAIDev team to fix a bunch of Triton and quantization issues. Cannot wait to see how much performance we can get from this tiny computer. Usage: download the lmsysorg/sglang:spark Docker image and launch with:
python3 -m sglang.launch_server --model openai/gpt-oss-20b
11 replies · 19 reposts · 149 likes · 35.2K views
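Once the server from the usage note above is running, it can be queried over plain HTTP. A small sketch using SGLang's native /generate endpoint; the port (30000 is SGLang's default) and the sampling parameters are assumptions for illustration.

# Query a locally launched SGLang server (see the usage note above).
# Port 30000 is SGLang's default; adjust if you launched with --port.
import requests

resp = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "The DGX Spark is",
        "sampling_params": {"max_new_tokens": 64, "temperature": 0.7},
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["text"])  # the generated continuation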
SGLang reposted
NVIDIA AI Developer @NVIDIAAIDev ·
🙌 We love seeing these performance gains of gpt-oss-20b at 70 tokens/s with SGLang (@lmsysorg) on NVIDIA DGX Spark. 👇
Quoting LMSYS Org @lmsysorg: "Exciting updates on DGX Spark: now you can run gpt-oss-20b at 70 tokens/s with SGLang! …" (full text quoted above)
1 reply · 14 reposts · 99 likes · 9.9K views
SGLang reposted
Lianmin Zheng @lm_zheng ·
1.4x speedup one week after release!
Quoting LMSYS Org @lmsysorg: "Exciting updates on DGX Spark: now you can run gpt-oss-20b at 70 tokens/s with SGLang! …" (full text quoted above)
4 replies · 7 reposts · 148 likes · 14.5K views
SGLang reposted
LMSYS Org @lmsysorg ·
🚀 SGLang's In-Depth Review of the NVIDIA DGX Spark is LIVE! Thanks to @NVIDIA's early access program, SGLang makes its first-ever appearance in a consumer product, the brand-new DGX Spark. The DGX Spark's 128GB unified memory and Blackwell architecture set a new standard for local AI prototyping and edge computing. We're thrilled to bring these cutting-edge performance insights and software support to the developer community. Our review dives into how to efficiently deploy and accelerate large models like Llama 3.1 70B and GPT-OSS using SGLang's EAGLE3 speculative decoding and @Ollama on this beautiful piece of engineering. 👇 Unboxing video and tech blog in the thread #SGLang #NVIDIA #SparkSomethingBig #Blackwell #DGXSpark #AIInference #LLMServing
19 replies · 62 reposts · 340 likes · 410.4K views
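For a sense of what "deploying with EAGLE3 speculative decoding" looks like from Python, here is a hedged sketch using SGLang's offline engine. The argument names follow SGLang's speculative-decoding documentation, but the model paths and the numeric settings below are placeholders; verify them against the current docs before use.

# Hedged sketch: SGLang offline engine with EAGLE3 speculative decoding.
# Model paths and numbers are placeholders, not a tested configuration.
import sglang as sgl

llm = sgl.Engine(
    model_path="meta-llama/Llama-3.1-8B-Instruct",        # placeholder target model
    speculative_algorithm="EAGLE3",
    speculative_draft_model_path="path/to/eagle3-draft",  # placeholder draft model
    speculative_num_steps=3,         # draft steps per verification round
    speculative_eagle_topk=4,        # branching factor of the draft tree
    speculative_num_draft_tokens=8,  # total draft tokens verified per round
)
out = llm.generate("Hello, DGX Spark!", {"max_new_tokens": 32})
print(out["text"])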
SGLang reposted
LMSYS Org @lmsysorg ·
⚡️ Big update from Kimi K2! 256k context, stronger coding & tool-calling, smoother agent integration. Already tested with the SGLang runtime: stable 60-100+ TPS with the turbo API! 👉 Check it out: huggingface.co/moonshotai/Kim…
Kimi.ai @Kimi_Moonshot
Kimi K2-0905 update 🚀
- Enhanced coding capabilities, esp. front-end & tool-calling
- Context length extended to 256k tokens
- Improved integration with various agent scaffolds (e.g., Claude Code, Roo Code, etc.)
🔗 Weights & code: huggingface.co/moonshotai/Kim…
💬 Chat with the new Kimi K2 on: kimi.com
⚡️ For 60-100 TPS + guaranteed 100% tool-call accuracy, try our turbo API: platform.moonshot.ai
3 replies · 15 reposts · 118 likes · 18.6K views
SGLang reposted
LMSYS Org @lmsysorg ·
🚀 Summer Fest Day 3: Cost-Effective MoE Inference on CPU from the Intel PyTorch team. Deploying 671B DeepSeek R1 with zero GPUs? SGLang now supports high-performance CPU-only inference on Intel Xeon 6, enabling billion-scale MoE models like DeepSeek to run on commodity CPU servers. Key highlights:
1. Full CPU backend for SGLang with Intel AMX
2. Native BF16 / INT8 / FP8 support for both dense and sparse FFNs
3. 6-14x TTFT and 2-4x TPOT speedup vs. llama.cpp
4. 85%+ memory bandwidth efficiency with optimized MoE kernels
5. Flash Attention V2 + MLA + MoE all optimized for CPU
6. Multi-NUMA parallelism mapped from GPU-style tensor parallelism
This work is now fully upstreamed to SGLang main. Read how we achieved it, and how far you can go without a GPU 👇 #LLMInfra #ModelServing #MoE #Xeon6 #SGLang #FP8 #INT8 #CPUInference
6 replies · 16 reposts · 39 likes · 19K views
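The "multi-NUMA parallelism mapped from GPU-style tensor parallelism" point boils down to pinning each tensor-parallel rank to one NUMA node so its weight shard and activations stay in local memory. A toy illustration using the Linux-only os.sched_setaffinity; the CPU ranges are hypothetical and this is not SGLang's actual implementation.

# Toy illustration of mapping tensor-parallel ranks to NUMA nodes on Linux.
# The CPU ranges are hypothetical; real code would read the system topology
# (e.g. /sys/devices/system/node). Not SGLang's actual implementation.
import os

NUMA_NODES = {0: range(0, 32), 1: range(32, 64)}  # hypothetical 2-socket layout

def pin_rank(tp_rank: int) -> None:
    # One TP rank per NUMA node keeps memory traffic local and avoids
    # slow cross-socket accesses.
    cpus = NUMA_NODES[tp_rank % len(NUMA_NODES)]
    os.sched_setaffinity(0, set(cpus))  # 0 = calling process (Linux-only)
    print(f"rank {tp_rank} pinned to CPUs {min(cpus)}-{max(cpus)}")

if __name__ == "__main__":
    pin_rank(0)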
SGLang reposted
LMSYS Org @lmsysorg ·
🚨 SGLang Summer Fest Bonus Drop 🚨 Proud to share a joint effort from Mooncake by @Kimi_Moonshot, @Oracle, and SGLang: Kimi K2 trillion-scale deployment, running on 128 H200 GPUs sponsored by @NVIDIAAIDev DGX Cloud. OME + SGLang = MoE inference at production scale. 👇
5 replies · 24 reposts · 113 likes · 25.1K views
SGLang reposted
NVIDIA Data Center @NVIDIADC ·
Proud to support this lightning-fast launch ⚡️ Accelerated through #NVIDIADGX Cloud and in partnership with Moonshot AI, @SGLang, and @Oracle Open Model Engine, we helped bring Kimi K2 to customers just days after its debut. Now organizations can "Think Smart" and scale MoE inference with frontier performance. Explore how SGLang unlocked production-scale deployment ⤵️
Quoting LMSYS Org @lmsysorg: "🚨 SGLang Summer Fest Bonus Drop 🚨 …" (full text quoted above)
2 replies · 14 reposts · 79 likes · 9.4K views
SGLang reposted
LMSYS Org @lmsysorg ·
🚀 Introducing SpecForge, our open-source framework for speculative decoding training, built for SGLang and EAGLE3. Train draft models that just work: scalable, efficient, and inference-ready. Supports LLaMA 4, DeepSeek, MoE, FSDP, TP & more. Up to 2.18x speedup. Huge thanks to our infra partner @VoltagePark, whose mission is to be a catalyst for innovation by democratizing access to high-performance AI infrastructure. Their support enabled us to train and evaluate large-scale speculative decoding models efficiently and reliably. We also want to express our heartfelt gratitude to the EAGLE3 team @hongyangzh and the LinkedIn infra team @LinkedIn. Let's build the future of fast LLMs together! #opensource #LLM #AI #SpeculativeDecoding
4 replies · 9 reposts · 30 likes · 8.6K views
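For context on what a trained draft model buys you: in speculative decoding, a small draft model proposes k tokens, the target model verifies them in a single batched pass, and the longest agreeing prefix is accepted, so one target pass can yield several tokens. The toy below uses greedy exact-match acceptance and stub "models"; real systems (including EAGLE-style drafts) use probabilistic acceptance and tree drafting.

# Toy greedy speculative decoding loop: the draft proposes k tokens, the
# target verifies them, and the matching prefix is kept. Both "models" are
# stubs; real acceptance is probabilistic, not exact-match.
def draft_propose(prefix, k):
    return [(prefix[-1] + i + 1) % 100 for i in range(k)]  # stub draft model

def target_next_token(prefix):
    return (prefix[-1] + 1) % 100                          # stub target model

def speculative_step(prefix, k=4):
    proposal = draft_propose(prefix, k)
    accepted = []
    for tok in proposal:                  # conceptually one batched target pass
        expected = target_next_token(prefix + accepted)
        if tok != expected:
            accepted.append(expected)     # replace first mismatch with target's token
            break
        accepted.append(tok)
    return accepted                       # always >=1 token per verification pass

tokens = [0]
for _ in range(5):
    tokens += speculative_step(tokens)
print(tokens)  # up to k tokens accepted per target pass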