StorageReview.com

11.2K posts


@storagereview

https://t.co/Sd7aWQerOs offers news and in-depth reviews of the entire enterprise IT stack from AI to end user computing and everything in between.

Cincinnati, OH · Joined November 2011
247 Following · 14.1K Followers

NVIDIA AI PC @NVIDIA_AI_PC
Be honest — how many local models do you have downloaded right now? 👀
501 replies · 21 reposts · 714 likes · 86.1K views

StorageReview.com @storagereview
QNAP just announced the QAI-h1290FX, an edge AI storage server built specifically for organizations that want to run LLMs and generative AI workloads on their own hardware.

The pitch is straightforward: private AI without the cloud. Run RAG pipelines, LLMs, and generative AI applications locally, keeping sensitive data off public cloud platforms entirely. The system targets enterprises where data privacy, latency, and governance requirements make cloud AI a non-starter.

This fits a pattern we are seeing across the industry right now. The conversation has shifted from "can we run AI" to "can we run AI without giving up control of our data." Purpose-built edge AI hardware with local storage is becoming its own product category fast. @QNAP_nas #AI #edgeAI #LLM #storage
2 replies · 0 reposts · 7 likes · 864 views

Bokiko @bokiko
To RTX 6000 or not?
[attached image]
25 replies · 1 repost · 135 likes · 9K views

StorageReview.com @storagereview
@planedrop We'd call this one more chaos. Thin line between that and disaster, though, we'll concede that.
1 reply · 0 reposts · 1 like · 505 views

StorageReview.com @storagereview
@AMD We’re thrilled to get more access to AMD-accelerated platforms. Just pulled in a Dell 9680 with 8-way MI300X. Our power bill is sobbing, but the results should be amazing for proving out a ton of inference workloads.
0 replies · 1 repost · 2 likes · 146 views

AMD @AMD
"Competitive inference performance across the workload profiles." The latest MI350X + ROCm 7.2 shows strong performance, broader framework support, and the momentum building around AMD hardware across the AI ecosystem. Full review via @StorageReview: storagereview.com/review/supermi…
8 replies · 20 reposts · 171 likes · 18K views

StorageReview.com @storagereview
The Dell XE7740 is designed to handle any accelerators that customers want to use. We're re-deploying ours right now for a new set of inference work with 4x NVIDIA RTX Pro 6000 GPUs. @DellTech @NVIDIAAI
4 replies · 0 reposts · 16 likes · 1.4K views

StorageReview.com @storagereview
@teromee We're hearing a lot that customers would rather be able to run two distinct workloads on two servers than have all the cards in one. Maybe for some it's more of a scheduling issue.
1 reply · 0 reposts · 0 likes · 13 views

teromee @teromee
@storagereview I go with 2, but I am about redundancy rather than a single point of failure. But if we're talking other stuff that breaks because of the differences, I am going with a single server w/ 8 GPUs.
1 reply · 0 reposts · 2 likes · 19 views

StorageReview.com @storagereview
Help settle a debate. Budget isn't a constraint. What would you rather have? Two GPU servers with 4 RTX Pro 6000s each or one GPU server with 8 RTX Pro 6000 GPUs?
7 replies · 1 repost · 7 likes · 1.5K views

StorageReview.com @storagereview
@planedrop No one can hear you cry in the data center. And man, the hypervisors have fallen so far off for enterprise inference.
1 reply · 0 reposts · 2 likes · 54 views

Ethan Word @planedrop
@storagereview I think I'd say 2, but it really depends on use case and other constraints. Like, are we going to pretend the 8 in a single chassis won't be way louder? What about vGPU slicing with a hypervisor? Then it doesn't really matter unless you need more CPU. etc...
1 reply · 0 reposts · 0 likes · 76 views