Hyperstack
@Hyperstackcloud

The Full Stack AI Cloud. Secure Private Cloud, on-demand NVIDIA VMs, and AI Studio - built for teams that don't compromise.

London, United Kingdom · Joined June 2023
96 Following · 650 Followers · 3.1K posts

Pinned Tweet
Hyperstack @Hyperstackcloud
Today, Hyperstack EU1 (Sweden) comes online, with NVIDIA RTX PRO™ 6000 Blackwell Server Edition capacity available on demand and via reservation. EU1 expands Hyperstack's European footprint with infrastructure optimised for high-end visualisation, GPU virtualisation, and large-scale inference workloads. EU1 has also been designed with future private deployments in mind: the region has the power, cooling, and data centre capacity required for next-generation NVIDIA platforms, with reserved space to scale NVIDIA B300 deployments as Hyperstack Secure Private Cloud comes online. More details on Secure Private Cloud will follow. EU1 (Sweden) is now live. 👉 Enquire about EU1 capacity and early Secure Private Cloud discussions: bit.ly/4rSeYsK
Hyperstack @Hyperstackcloud
@Kimi_Moonshot Come try out Kimi K2.6 on Hyperstack, with access to all the latest GPUs for a fraction of the price hyperscalers are offering.
Hyperstack @Hyperstackcloud
@ElijahTzanevv Three rooms. One day. The AI infrastructure story just got a lot more complicated. What does the H200 clearance actually change for teams building right now?
Eli's AI Daily @ElijahTzanevv
Today the AI industry answered for itself in three different rooms. Here is what actually happened and why none of it is getting the analysis it deserves.

The chip deal. Trump flew Jensen Huang to Beijing on Air Force One as a last-minute addition. Hours after landing, Washington cleared H200 chip sales to Alibaba, Tencent, ByteDance and JD.com. Four years of tightening export controls. Traded for a 25% tariff and a diplomatic photo. The H200 is two generations behind @nvidia's current Blackwell architecture. Washington gave China access to yesterday's chip while keeping today's locked. Huang has been saying for months that restricting China simply accelerates Huawei's domestic ecosystem. Washington appears to have accepted that argument. The question is whether they accepted it because it was right — or because Huang was on the plane.

The conflict of interest: A court filing published this morning reveals Sam Altman holds over $2 billion in personal stakes in companies tied to OpenAI. $1.7B in Helion Energy. Which has a power agreement with Microsoft. Which owns 49% of OpenAI. $633M in Stripe. Which processes payments for OpenAI ecosystem companies. Smaller positions in Cerebras, Lattice, and Humane — all in OpenAI's supply chain. Altman holds zero direct equity in OpenAI itself. His entire financial upside runs through the supply chain he invested in before becoming CEO. The SEC, the House Oversight Committee, and every IPO underwriter reviewing a prospectus at $1 trillion are now working from the same spreadsheet.

The closing arguments: The Musk v. OpenAI trial wraps today. The most damaging testimony didn't come from Musk. It came from Altman's own former colleagues. Former chief scientist Ilya Sutskever: Altman had a "consistent pattern of lying." Former CTO Mira Murati: his leadership created chaos. Satya Nadella: the board firing was "amateur city."

The pattern across all three: The AI industry is no longer just building products. It is navigating geopolitics, securities law, and corporate governance simultaneously. The companies that manage all three at once will define the next decade.

What is the angle on today's news that you think the mainstream coverage missed? Join my newsletter in my bio to get this news before everyone else on a daily basis! #AIWars #MuskvsAltman
Hyperstack @Hyperstackcloud
@Dr_Singularity This is clear in the market right now. Come grab your GPUs from Hyperstack while you still can.
Dr Singularity @Dr_Singularity
we're early

Jensen Huang: "Agentic AI requires 1000x more compute than generative AI"
Hyperstack @Hyperstackcloud
295 billion parameters. 21B active per token. 600 GB BF16 checkpoint, too large for a single node. We deployed Hy3-preview on Hyperstack using multi-node Kubernetes with 16 NVIDIA H100s across two worker nodes, hybrid Tensor + Expert Parallelism, and the 600 GB BF16 checkpoint loaded from local NVMe.

In this tutorial:
→ Multi-node Kubernetes cluster on Hyperstack (two 8x H100-80G PCIe-NVLink nodes)
→ LeaderWorkerSet API for coordinated 2-node inference
→ vLLM with native multi-node tensor parallelism and MTP speculative decoding
→ 256K token context window with three reasoning tiers (no_think / low / high)
→ Multi-agent code review pipeline with parallel specialist agents and tool calling
→ Plugging into Claude Code, OpenClaw, and OpenCode as a local backend

80.6 on SWE-Bench Verified. 34.86 on LiveCodeBench v6.

Full tutorial on the blog: Deploy Hy3-preview on Hyperstack: A Multi-Node Kubernetes Guide #Hyperstack #Hy3preview
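The 2-node LeaderWorkerSet topology described above can be sketched roughly like this. This is a minimal, illustrative manifest, not the tutorial's actual one: the names, image, model ID, and vLLM flags are assumptions (LWS injects `LWS_LEADER_ADDRESS` into group pods, which the workers use to join the leader's Ray cluster):

```yaml
apiVersion: leaderworkerset.x-k8s.io/v1
kind: LeaderWorkerSet
metadata:
  name: hy3-preview-vllm        # hypothetical name
spec:
  replicas: 1                   # one inference group
  leaderWorkerTemplate:
    size: 2                     # leader + 1 worker = 2 nodes, 8 GPUs each
    restartPolicy: RecreateGroupOnPodRestart
    leaderTemplate:
      spec:
        containers:
        - name: vllm-leader
          image: vllm/vllm-openai:latest   # assumed image
          command: ["sh", "-c"]
          # Leader starts the Ray head, then serves the model across both
          # nodes: tensor parallel within a node, pipeline parallel across.
          args:
          - ray start --head --port=6379 &&
            vllm serve <model-id> --tensor-parallel-size 8 --pipeline-parallel-size 2
          resources:
            limits:
              nvidia.com/gpu: 8
    workerTemplate:
      spec:
        containers:
        - name: vllm-worker
          image: vllm/vllm-openai:latest
          command: ["sh", "-c"]
          # Worker joins the leader's Ray cluster via the injected address.
          args:
          - ray start --block --address=$(LWS_LEADER_ADDRESS):6379
          resources:
            limits:
              nvidia.com/gpu: 8
```

The linked tutorial is the authoritative source for the real manifest and flags.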
Hyperstack @Hyperstackcloud
One model. Video, audio, images, and documents - from a single endpoint. We deployed NVIDIA Nemotron 3 Nano Omni on Hyperstack and put its multimodal pipeline to work.

In this tutorial:
→ vLLM serving on a single NVIDIA H100 80GB (62 GB BF16 checkpoint)
→ 256K token context window with native reasoning mode
→ PDF extraction - structured JSON from complex financial documents
→ Hour-long audio transcription with word-level timestamps and action-item extraction
→ Video summarisation and temporal Q&A from a single prompt
→ Disabling thinking mode for latency-sensitive tasks

67.04 on OCRBenchV2. 89.39 on VoiceBench. 72.2 on Video-MME. One deployment.

Full tutorial on the blog: bit.ly/4duBhjd #Nemotron #MultimodalAI
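A single-GPU serving command in the spirit of the setup above would look something like this. This is a sketch, not the tutorial's invocation: the model ID is a placeholder and the flag values are assumptions (all three flags are real vLLM options; `--max-model-len 262144` corresponds to the 256K token context):

```shell
# Sketch: serve a ~62 GB BF16 checkpoint on one H100 80GB.
# Model ID is a placeholder; flag values are illustrative assumptions.
vllm serve nvidia/<nemotron-model-id> \
  --max-model-len 262144 \
  --trust-remote-code \
  --gpu-memory-utilization 0.95
```

Once up, the endpoint speaks the OpenAI-compatible API, so the same deployment serves the PDF, audio, and video workloads listed above.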
Hyperstack @Hyperstackcloud
1.6 trillion parameters. 49B active per token. Too large for a single node. We deployed DeepSeek-V4-Pro on Hyperstack using multi-node Kubernetes - 16 NVIDIA H100s across two worker nodes, hybrid Data + Expert Parallelism, and a 960 GB FP4+FP8 checkpoint loaded from local NVMe.

In this tutorial:
→ Multi-node Kubernetes cluster on Hyperstack (2x 8x NVIDIA H100-80G PCIe-NVLink)
→ LeaderWorkerSet API for coordinated 2-node inference
→ vLLM with hybrid DEP topology and MTP speculative decoding
→ 1M token context window with three reasoning tiers
→ Long-horizon autonomous code refactoring with self-correction
→ Plugging into Claude Code, OpenClaw, and OpenCode as a local backend

80.6 on SWE-Bench Verified. 93.5 on LiveCodeBench v6.

Full tutorial on the blog: bit.ly/4f1jamb #DeepSeek #AgenticAI
Hyperstack @Hyperstackcloud
Running Kubernetes or SLURM in-house is a full-time job. Hyperstack Managed Cluster Platform hands you a fully managed cluster environment - delivered at the orchestrator layer, so your team focuses on models, not maintenance. GPU infrastructure. Fully managed. Ready to scale. Enquire now 👉 bit.ly/3QPhHp8 #ManagedKubernetes #SLURM
Hyperstack @Hyperstackcloud
1 trillion parameters on 8 GPUs. Here's what that looks like. We deployed Kimi K2.6 on Hyperstack - @Kimi_Moonshot's open-weight agentic model.

In this video:
→ vLLM serving on 8x NVIDIA H100-80G PCIe
→ 595 GB of INT4 weights loaded from ephemeral NVMe in ~6 minutes
→ Autonomous multi-step refactoring with self-correction
→ Coding-driven design - single prompt to working website
→ Local backend for Claude Code, OpenClaw and Kimi Code CLI

32B active parameters per token. 256K context window. 300 sub-agents in a single run.

Full tutorial on our blog: bit.ly/4cFJVLF #KimiK2 #MoonshotAI
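The quoted checkpoint size is consistent with back-of-the-envelope arithmetic. The ~1T parameter count and INT4 packing come from the post; the explanation of the remaining ~95 GB is an assumption:

```python
# Rough weight-memory estimate for a ~1-trillion-parameter model in INT4.
total_params = 1.0e12      # ~1T parameters (from the post)
bytes_per_param = 0.5      # INT4 packs two weights per byte

weights_gb = total_params * bytes_per_param / 1e9
print(f"packed INT4 weights: ~{weights_gb:.0f} GB")  # ~500 GB

# Assumption: the quoted 595 GB is plausibly the packed weights plus
# quantisation scales, embeddings, and other tensors kept at higher
# precision. Spread over 8x 80 GB H100s (640 GB total), 595 GB of
# weights leaves only a modest margin for KV cache and activations.
```

This is also why the 256K context window matters operationally: KV cache has to fit in whatever HBM the weights leave behind.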
Hyperstack @Hyperstackcloud
(🧵6 of 7)
Hyperstack @Hyperstackcloud
Most inference bottlenecks aren't model problems. They're engineering problems - and they have well-understood solutions.

We broke down the 5 techniques that close the gap between a validated model and a production-ready deployment. The difference between a model that works and one that runs efficiently at scale is in the implementation details.

Scroll down for the specifics 👇 #MLOps #Inference (🧵1 of 7)
Hyperstack @Hyperstackcloud
We deployed Qwen 3.6 on Hyperstack and turned it into a fully autonomous coding agent.

What you'll see in this video:
→ vLLM server running on 8x NVIDIA H100 PCIe GPUs
→ 262K token context window
→ The model organising files on its own using MCP tools
→ Building and saving a website autonomously
→ Plugging into Claude Code, OpenClaw and Qwen Code as a local backend

35B total parameters, 3B active per token. That's what Mixture-of-Experts buys you - large-model intelligence at small-model speed.

Full step-by-step tutorial on the blog: bit.ly/4mQ4jNj #Qwen3 #AgenticAI
Hyperstack retweeted
NexGen Cloud @CloudNexgen
Behind the walls of @Hyperstackcloud EU1. EU1 has been engineered with the power density, cooling infrastructure and physical capacity to support next-generation NVIDIA platforms at scale, and we're pleased to share that all phase one capacity is now reserved. We're working closely with @glesysab to bring additional NVIDIA Blackwell and NVIDIA Blackwell Ultra capacity online later this year. We're currently taking registrations of interest for upcoming capacity - speak to us about your requirements today. Email: sales@hyperstack.cloud
Hyperstack @Hyperstackcloud
The teams shipping production AI right now aren't just picking GPUs. They're asking: how do I manage infrastructure without slowing down engineers? How do I enforce security at the cluster level without adding friction? How do I give the right access to the right people without over-provisioning?

That's what we focused on in March. We shipped an MCP Server that lets you manage Hyperstack through natural language - no raw API calls, no dashboard hopping. We added a Firewall API so security rules propagate automatically across Kubernetes clusters. We published a full deployment guide for NVIDIA's NemoClaw following its GTC 2026 announcement. And we improved cluster deployment defaults so things work correctly out of the box. Plus role management improvements, better error handling and VM lifecycle fixes.

Infrastructure should get out of your way, not create more work.

Full March update here: bit.ly/4vxur3o #Kubernetes #NemoClaw
Hyperstack retweeted
NexGen Cloud @CloudNexgen
We are thrilled to announce that Prasanna Rengarajan has joined NexGen Cloud as our Chief Financial Officer.

Prasanna brings extensive experience across corporate finance, capital management, and investor relations, built over a career spanning investment banking, telecommunications, and institutional investing. He has held senior roles at some of the world's most respected financial institutions, including Goldman Sachs, Barclays, the Bank of England, and euNetworks.

In his new role, Prasanna will lead NexGen Cloud's financial strategy and capital planning - driving financing initiatives, strengthening fiscal discipline, and positioning the company for its next phase of accelerated growth. His appointment marks a significant milestone in building the world-class leadership team needed to support the rapidly growing demand for @Hyperstackcloud, our GPU cloud platform, and to advance our long-term strategic goals.

As AI infrastructure continues to redefine the cloud landscape, having the right financial leadership in place is critical to scaling responsibly and ambitiously. We are excited to have Prasanna on board as we enter this next chapter.

"AI infrastructure is redefining the cloud landscape. I'm delighted to join NexGen Cloud at such a pivotal stage to help drive sustainable growth through disciplined financing, capital market partnerships, and strategic investment in our next phase of expansion." - Prasanna Rengarajan, CFO, NexGen Cloud.
Hyperstack @Hyperstackcloud
Your infrastructure, one prompt away ⚡ We shipped the Hyperstack MCP server. Here's what you get:

✔️ One-line deployment - spin it up with Docker and you're ready to go
✔️ Talk to your infrastructure - ask about your VMs, storage, clusters, billing, and more
✔️ No dashboards, no CLI commands, just plain English
✔️ AI Studio Inference integration - connect with your favourite models from Hyperstack AI Studio
✔️ 37+ tools - all from a single conversational interface

Watch what it can do ↓

👉 Get started now: bit.ly/4c87kVr #MCP #OpenSource
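The "one-line deployment" above would look something like the following. This is a hypothetical sketch, not the actual command: the image name, port, and environment variable are assumptions, so check the linked getting-started guide for the real invocation:

```shell
# Hypothetical sketch - image name, port, and env var are assumptions.
docker run -d \
  -e HYPERSTACK_API_KEY=your-api-key \
  -p 8000:8000 \
  hyperstack/mcp-server:latest
```

From there an MCP-capable client is pointed at the running server and the 37+ tools become available in conversation.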