poolside
@poolsideai · 61 posts

We build models for agentic coding and long-horizon tasks. Try Laguna: https://t.co/setRB1C2wb

Joined May 2023 · 2 Following · 5.6K Followers
poolside reposted
Joey @aijoey·
Laguna XS.2 NVFP4 by @poolsideai on DGX Spark via vLLM; thinking on vs off. Same prompt, streamed side by side.

Thinking ON:
- TTFT: 52.7ms
- 616 completion toks
- 40.9 tok/s
- First visible: 4.20s
- Total: 15.05s

Thinking OFF:
- TTFT: 53.3ms
- 449 completion toks
- 40.5 tok/s
- First visible: 53ms
- Total: 11.10s

The hidden reasoning cost ~4s before the first visible token showed up. Decode speed was basically identical either way, ~41 tok/s. If you're building for latency, that's the thinking premium. Seems to sit somewhere between Qwen3.5 and Qwen3.6.
3 replies · 1 repost · 22 likes · 2.4K views
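The reported numbers can be sanity-checked with a bit of arithmetic (a sketch; all figures come from the post above):

```python
# Back-of-envelope check of the "thinking premium" from the benchmark above.
thinking_on = {"ttft_s": 0.0527, "completion_toks": 616,
               "first_visible_s": 4.20, "total_s": 15.05}
thinking_off = {"ttft_s": 0.0533, "completion_toks": 449,
                "first_visible_s": 0.053, "total_s": 11.10}

# Latency cost of hidden reasoning before the user sees any output.
premium_s = thinking_on["first_visible_s"] - thinking_off["first_visible_s"]  # ~4.15 s

# Effective decode speed over the whole generation (completion toks / total time),
# which lands close to the reported ~41 tok/s in both modes.
decode_on = thinking_on["completion_toks"] / thinking_on["total_s"]     # ~40.9 tok/s
decode_off = thinking_off["completion_toks"] / thinking_off["total_s"]  # ~40.4 tok/s
```

The near-identical decode rates confirm the post's point: thinking mode costs you time-to-first-visible-token, not throughput.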
poolside @poolsideai·
@johjeff hey @johjeff thanks for flagging this, just opened DMs for you! very keen to see how you got Laguna working with OpenCode CLI 👀
0 replies · 0 reposts · 1 like · 13 views
Jeffrey C. Johnson @johjeff·
@poolsideai Your website said to DM you on X. I can't find the DM option on your X profile. I added Poolside Laguna to OpenCode CLI and wanted to share the process so maybe you can add it to your documentation, support, or forum somewhere. X replies are too short.
1 reply · 0 reposts · 1 like · 34 views
poolside reposted
vLLM @vllm_project·
Laguna-XS.2 Recipe is live — dedicated Docker image (`vllm/vllm-openai:laguna`) and vLLM nightly already support Laguna-XS.2. Reasoning + tool-call parsers wired up out of the box. Thanks again to the @RedHat_AI team and the @poolsideai team for landing this so fast. Get started ⬇️ 🔗 recipes.vllm.ai/poolside/Lagun…
1 reply · 10 reposts · 51 likes · 4.9K views
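A minimal way to try the dedicated image might look like the sketch below. The Hugging Face model ID is an assumption (the recipe link above is truncated), and the recipe may recommend additional parser flags; only the image name comes from the post.

```shell
# Pull the dedicated Laguna image named in the post.
docker pull vllm/vllm-openai:laguna

# Serve an OpenAI-compatible API on port 8000.
# NOTE: the model ID below is an assumption -- check the vLLM recipe
# for the exact name and any reasoning/tool-call parser flags.
docker run --gpus all -p 8000:8000 \
  vllm/vllm-openai:laguna \
  --model poolside/Laguna-XS.2
```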
poolside reposted
Eiso Kant @eisokant·
Today we’re shipping Laguna M.1 and Laguna XS.2 – our first public models. We’re also shipping our agent harness and a preview product experience. Both models were trained from scratch on our own stack: data pipelines, training infrastructure, and agent RL.
37 replies · 69 reposts · 508 likes · 78.2K views
poolside reposted
Jason Warner @jasoncwarner·
Today @poolsideai is releasing Laguna M.1 & Laguna XS.2, our latest-generation models and first public models. We started Poolside because we believed that to build truly capable coding agents, you need to own the full stack: data, training, reinforcement learning, inference. These models are the first result of that work, and we’re making them available to everyone.
37 replies · 40 reposts · 377 likes · 48.6K views
poolside reposted
almonk @almonk·
Today we're launching our first public Poolside models, Laguna M.1 and Laguna XS.2, and we've built ❈Shimmer, an instant-on VM sandbox with Poolside Agent pre-installed so you can try them out. Go play with our new models for free, and build something fun → shimmer.run
19 replies · 32 reposts · 236 likes · 45.8K views
poolside reposted
Pengming Wang @PengmingWang·
Today we’re releasing Laguna M.1 and Laguna XS.2, our first public models. Laguna XS.2 is our first open-weight release, with weights available today on Hugging Face: huggingface.co/poolside/Lagun… A few details on what went into them: large-scale pre-training, data mixture optimization, synthetic data, optimizer efficiency, and async agent RL.
11 replies · 26 reposts · 225 likes · 19.9K views
poolside reposted
OpenRouter @OpenRouter·
The first public foundation models from @poolsideai just dropped on OpenRouter! Laguna M.1 and Laguna XS.2. Built from scratch for agentic coding and long-horizon work. Free for a limited time ⬇️
20 replies · 28 reposts · 349 likes · 55.2K views
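Since OpenRouter exposes an OpenAI-compatible chat-completions endpoint, calling Laguna from there is a standard POST. A minimal sketch of the request body, assuming a model slug of `poolside/laguna-m.1` (hypothetical — check OpenRouter's model list for the exact ID):

```python
import json

# OpenRouter's OpenAI-compatible chat-completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def laguna_request_body(prompt: str, model: str = "poolside/laguna-m.1") -> str:
    """Build the JSON body; POST it to OPENROUTER_URL with a Bearer API key."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,  # stream tokens as they decode
    })

body = laguna_request_body("Write a function that reverses a linked list.")
```

Any OpenAI-compatible client works the same way by pointing its base URL at OpenRouter.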
poolside reposted
Baseten @baseten·
Laguna XS.2 has landed, and it’s live on Baseten. We’ve baked in inference optimizations so you can deploy @poolsideai's open-weight agentic coding model in production right away. Laguna M.1 is also available on Baseten for teams serving Poolside's most capable model to date.

To run Laguna on Baseten, we use:
- the Baseten Inference Stack
- optimized model serving for low-latency inference
- production-grade infrastructure for fast, reliable deployment

Try it now: baseten.co/library/laguna…
2 replies · 7 reposts · 51 likes · 3.5K views
poolside @poolsideai·
We’d love feedback from the developer and research community as we keep improving Laguna XS.2 and the stack around it. Laguna XS.2 has launch-day support in Transformers, vLLM, TRT-LLM, MLX/mlx-lm, and Ollama. Huge thanks to @huggingface, @vllm_project, @nvidia, @Prince_Canuma, @ollama, @baseten, and @OpenRouter for helping make this launch possible and get XS.2 into the hands of the community on day one!
1 reply · 6 reposts · 53 likes · 4.1K views