DigitalOcean
@digitalocean
AI-Native Cloud. ☁️ Status: @DOstatus Support: https://t.co/5gkvyinPlK
81.5K posts
Joined January 2012
197 Following · 221.4K Followers
DigitalOcean@digitalocean·
@piq9117 Hi there, we’re sorry about the error you’re facing. Could you please share the ticket number if you’ve already created one for this, or DM us your email address so we can create a ticket on your behalf and have our team look into this? We’re here to help!
0 replies · 0 reposts · 0 likes · 10 views
DigitalOcean reposted
NVIDIA Data Center@NVIDIADC·
Huge milestone for @digitalocean — and a great example of close collaboration with hardware-software co-design at every layer. NVIDIA Blackwell Ultra + NVFP4 + optimized @vllm_project stack for speculative decoding with @inferact and the open source community → #1 performance on Artificial Analysis for leading frontier models. More to come as this ecosystem scales.
DigitalOcean@digitalocean

Among the fastest DeepSeek V3.2, MiniMax-M2.5, and Qwen 3.5 397B inference in the market, per Artificial Analysis benchmarks (April 2026). ⚡️🤖 Sub-1-second TTFT. 230 tokens per second. Co-designed every layer of the stack with @Inferact, performance optimized @vllm_project, all on @NVIDIA HGX B300. Live on DigitalOcean Serverless Inference now. Full breakdown in the comments. ⬇️

5 replies · 27 reposts · 124 likes · 25.8K views
DigitalOcean@digitalocean·
@nininitom Here's your ticket #12127887; please keep an eye on your inbox for any updates associated with the issue you're experiencing.
0 replies · 0 reposts · 0 likes · 19 views
Tom Tom Tom@nininitom·
Hey @DOStatus , I want to create a droplet but I get this instead
[image attached]
1 reply · 0 reposts · 0 likes · 1.1K views
DigitalOcean@digitalocean·
@nininitom We appreciate you providing the email address. We'll go ahead and create a proactive ticket for you and route it to the relevant team; you can expect a reply sooner than anticipated.
0 replies · 0 reposts · 0 likes · 20 views
DigitalOcean@digitalocean·
DeepSeek-V4-Pro is available on DigitalOcean. 🌊 Bring planning-first intelligence to your apps with @deepseek_ai's latest frontier model. ✅ 1M token context window ✅ Built for agentic reasoning and multi-step tasks ✅ Run alongside your apps & data on DigitalOcean ✅ Pay only for what you use. Scale your AI systems without the infra headache.
1 reply · 4 reposts · 17 likes · 5.1K views
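As a rough illustration of the announcement above, pay-per-use serverless inference of this kind is typically consumed through an OpenAI-compatible chat-completions endpoint. The base URL and model identifier below are illustrative placeholders, not confirmed DigitalOcean values:

```python
# Hypothetical sketch of calling a serverless, OpenAI-compatible
# inference endpoint. BASE_URL and MODEL are placeholder assumptions.
import json
import urllib.request

BASE_URL = "https://inference.example.com/v1"  # placeholder endpoint
MODEL = "deepseek-v4-pro"                      # placeholder model id


def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble the JSON body for a /chat/completions call."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": True,  # stream tokens back as they are generated
    }


def send(request_body: dict, api_key: str):
    """POST the request; with pay-per-use billing you are charged per token."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(request_body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)


body = build_chat_request("Plan a three-step data migration.")
print(body["model"])  # → deepseek-v4-pro
```

With streaming enabled, the first chunk of the response arrives after the model's time-to-first-token, which is why streaming is the natural mode for latency-sensitive agentic workloads.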
DigitalOcean@digitalocean·
@loftwah Thank you for this. We have sent a DM requesting more information. Thank you for your patience in this matter.
0 replies · 0 reposts · 1 like · 22 views
Loftwah@loftwah·
@digitalocean ref:!00Df2018t5m.!500QP01RKKXE:ref is the ticket number.
1 reply · 0 reposts · 0 likes · 37 views
Loftwah@loftwah·
Hey @digitalocean any reason I would be getting billed for an account I deactivated months ago now?
2 replies · 0 reposts · 7 likes · 1.5K views
DigitalOcean@digitalocean·
@stecal12 We do apologize for this. We have sent you a DM requesting your account details so we can create a ticket with the Support Team on your behalf. Thank you for your patience in this matter.
0 replies · 0 reposts · 0 likes · 11 views
Subject 89P13@stecal12·
@digitalocean Thanks for punting the problem back to the customer. How about I just move to Vultr instead? Isn't the problem already described in enough detail for you to take ownership and address the issue? Using a gmail account, the sending issue is on you.
1 reply · 0 reposts · 0 likes · 21 views
Subject 89P13@stecal12·
@digitalocean Email for 2FA isn't the best idea in the world, if that is your only option for this. Can't do passcode or SMS, or just let me use my password (my security, my choice). 10 minutes on and still waiting for the code.
1 reply · 0 reposts · 0 likes · 18 views
DigitalOcean reposted
Paddy Srinivasan@paddix·
We just set the bar for inference performance. Fastest DeepSeek, MiniMax & Qwen on Blackwell Ultra. #1 output speed. Sub-1s latency on @digitalocean's AI-Native Cloud Platform, as validated by @ArtificialAnlys: 230 tok/s on DeepSeek V3.2, which is 3.9x faster than AWS Bedrock, and sub-1s TTFT. This is what we unveiled at Deploy earlier this week: 👉 An AI-Native Cloud built for the Inference Era 👉 A unified Inference Engine — speed, cost, simplicity. Read the article below to understand how we engineered this with techniques including Tensor Parallelism, Kernel Fusion, Programmatic Dependent Launch, Speculative Decoding, and Multi-Token Prediction (MTP), working closely with the creators of vLLM at @inferact digitalocean.com/blog/how-we-bu…
7 replies · 15 reposts · 77 likes · 6K views
DigitalOcean@digitalocean·
@qencode We're so grateful to have spent the day together, and thanks for sharing the excitement! 🌊
0 replies · 0 reposts · 0 likes · 48 views
Qencode@qencode·
We spent the day at @digitalocean Deploy in SF, where DO just launched 15 new products and several updates to expand their cloud inference infrastructure. Congratulations to all the teams involved in putting on this phenomenal event! #DODeploy #DigitalOceanDeploy26
1 reply · 0 reposts · 3 likes · 85 views
Simon Mo@simon_mo_·
Top speed, backed by open source @vllm_project, on @nvidia Blackwell Ultra, with the @inferact team, on @digitalocean inference.
vLLM@vllm_project

🏆 vLLM powers the fastest inference on NVIDIA Blackwell Ultra on Artificial Analysis. On @digitalocean's Serverless Inference, powered by vLLM on NVIDIA HGX B300: 🥇 AA #1 output speed for DeepSeek V3.2 (230 tok/s, 0.96s TTFT) and Qwen 3.5 397B 🔧 MiniMax-M2.5: 23% TPOT gain via an EAGLE3 draft model trained on TorchSpec Co-design highlights: - NVFP4 quantization on Blackwell Ultra - EAGLE3 + MTP speculative decoding - Per-model kernel fusion Thanks to @digitalocean, @nvidia, and @inferact for the collaboration. Optimizations land back in open-source vLLM. 🔗 digitalocean.com/blog/how-we-bu…

3 replies · 6 reposts · 46 likes · 6.9K views
vLLM@vllm_project·
⚡ Congrats to @digitalocean for the leaderboard-topping speed! The @vllm_project open source community is excited to showcase the capabilities of our engine on @NVIDIAAI Blackwell Ultra and to power production AI together!
DigitalOcean@digitalocean

Among the fastest DeepSeek V3.2, MiniMax-M2.5, and Qwen 3.5 397B inference in the market, per Artificial Analysis benchmarks (April 2026). ⚡️🤖 Sub-1-second TTFT. 230 tokens per second. Co-designed every layer of the stack with @Inferact, performance optimized @vllm_project, all on @NVIDIA HGX B300. Live on DigitalOcean Serverless Inference now. Full breakdown in the comments. ⬇️

1 reply · 6 reposts · 47 likes · 4.9K views
DigitalOcean@digitalocean·
@vllm_project This is what open collaboration looks like in practice. Optimizations built together, landed back in open-source vLLM. Proud to build with the @vllm_project team and @nvidia. The numbers speak for themselves. 💯🤝🌊
0 replies · 0 reposts · 1 like · 239 views
DigitalOcean reposted
vLLM@vllm_project·
🏆 vLLM powers the fastest inference on NVIDIA Blackwell Ultra on Artificial Analysis. On @digitalocean's Serverless Inference, powered by vLLM on NVIDIA HGX B300: 🥇 AA #1 output speed for DeepSeek V3.2 (230 tok/s, 0.96s TTFT) and Qwen 3.5 397B 🔧 MiniMax-M2.5: 23% TPOT gain via an EAGLE3 draft model trained on TorchSpec Co-design highlights: - NVFP4 quantization on Blackwell Ultra - EAGLE3 + MTP speculative decoding - Per-model kernel fusion Thanks to @digitalocean, @nvidia, and @inferact for the collaboration. Optimizations land back in open-source vLLM. 🔗 digitalocean.com/blog/how-we-bu…
[3 images attached]
4 replies · 27 reposts · 159 likes · 50.6K views
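The speculative decoding named in the tweet above (EAGLE3, MTP) can be sketched as a toy: a cheap draft model proposes several tokens ahead, and the target model verifies them in one pass, keeping the agreeing prefix. The "models" below are stand-in functions over a fixed string, not the actual draft or target models:

```python
# Toy sketch of speculative decoding. Real systems (EAGLE3/MTP) use a
# small neural draft model and batch-verify with the large target model;
# here both are stand-in functions that predict the next character.
def speculative_step(draft_next, target_next, context, k=4):
    """One decode step: draft proposes k tokens, target verifies them."""
    # 1. Draft model cheaply proposes k tokens ahead.
    proposed, ctx = [], list(context)
    for _ in range(k):
        t = draft_next(ctx)
        proposed.append(t)
        ctx.append(t)
    # 2. Target model checks each proposal; keep the longest agreeing
    #    prefix, then emit one corrected target token, so every step
    #    produces at least one token (never slower than plain decoding).
    accepted, ctx = [], list(context)
    for t in proposed:
        if target_next(ctx) == t:
            accepted.append(t)
            ctx.append(t)
        else:
            break
    accepted.append(target_next(ctx))
    return accepted


TEXT = "hello world"
def target_next(ctx): return TEXT[len(ctx)]           # "ground truth" model
def draft_next(ctx):                                   # agrees except once
    return "X" if len(ctx) == 3 else TEXT[len(ctx)]

out = speculative_step(draft_next, target_next, list("he"), k=4)
print(out)  # → ['l', 'l']  (one accepted draft token + one target token)
```

The speedup comes from step 2: when the draft is usually right, the target model validates several tokens per forward pass instead of generating one at a time.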
DigitalOcean@digitalocean·
Among the fastest DeepSeek V3.2, MiniMax-M2.5, and Qwen 3.5 397B inference in the market, per Artificial Analysis benchmarks (April 2026). ⚡️🤖 Sub-1-second TTFT. 230 tokens per second. Co-designed every layer of the stack with @Inferact, performance optimized @vllm_project, all on @NVIDIA HGX B300. Live on DigitalOcean Serverless Inference now. Full breakdown in the comments. ⬇️
1 reply · 8 reposts · 31 likes · 31.3K views
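For reference, the two headline metrics quoted throughout this thread are computed as follows; the 0.96 s TTFT and 230 tok/s figures come from the tweets above, while the timestamps in the example are illustrative:

```python
# How the two benchmark metrics are defined:
# TTFT  = delay from sending the request to receiving the first token.
# tok/s = output tokens divided by the time spent generating them.
def time_to_first_token(request_ts: float, first_token_ts: float) -> float:
    """Seconds from request submission to the first streamed token."""
    return first_token_ts - request_ts


def output_tokens_per_second(n_tokens: int,
                             first_token_ts: float,
                             last_token_ts: float) -> float:
    """Decode throughput over the generation window."""
    return n_tokens / (last_token_ts - first_token_ts)


# Illustrative timestamps matching the quoted figures:
ttft = time_to_first_token(0.0, 0.96)                 # ≈ 0.96 s
speed = output_tokens_per_second(230, 0.96, 1.96)     # ≈ 230 tok/s
print(ttft, speed)
```

Note that TTFT is dominated by prompt processing (prefill) while tokens per second measures the decode phase, which is why optimizations like speculative decoding move the latter without necessarily changing the former.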