John-Nicholas Furst

1.2K posts

@JohnNFurst

Sr. HW Eng Manager at @Akamai | Driving server tech & datacenter efficiency | Passionate about CPUs & cloud innovations | Engaging with the tech community.

Cambridge, MA · Joined May 2009
647 Following · 247 Followers
John-Nicholas Furst @JohnNFurst ·
The @ASUS Dual GeForce RTX 4060 Ti SSD has quite a unique twist. Combining a GPU with an M.2 SSD slot is smart, especially for small form factor builds. It's interesting how they've managed the PCIe lanes for both GPU and SSD. Definitely a creative solution for optimizing space and performance in compact systems.
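The lane split is worth a quick back-of-envelope check. A sketch in Python, assuming (not stated in the post) that the RTX 4060 Ti only wires up a PCIe x8 link and that a PCIe 4.0 lane carries roughly 1.97 GB/s after encoding overhead:

```python
# Rough look at how a GPU and an M.2 SSD can share one x16 slot.
# Assumptions (not from the post): the RTX 4060 Ti uses only x8 lanes,
# and a PCIe 4.0 lane delivers ~1.97 GB/s (16 GT/s, 128b/130b encoding).
SLOT_LANES = 16
GPU_LANES = 8           # RTX 4060 Ti's native link width
SSD_LANES = 4           # typical M.2 NVMe link
GBPS_PER_LANE = 1.97    # PCIe 4.0 usable bandwidth per lane

spare = SLOT_LANES - GPU_LANES
assert SSD_LANES <= spare, "not enough spare lanes for the SSD"

print(f"Spare lanes after the GPU: x{spare}")
print(f"SSD ceiling on x{SSD_LANES}: {SSD_LANES * GBPS_PER_LANE:.1f} GB/s")
```

Under those assumptions the SSD gets a full x4 link (about 7.9 GB/s of headroom) without taking anything away from the GPU.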
John-Nicholas Furst @JohnNFurst ·
The @westerndigital 24 TB hard drives are cool. It's incredible how HDDs are still evolving and expanding in capacity. Great for anyone needing massive storage.
John-Nicholas Furst @JohnNFurst ·
NTT and Amazon stepping into the satellite internet game in Japan is a big move. 🛰️ They're teaming up on Project Kuiper to compete against Starlink. With Japan's high-speed fiber network already in place, it'll be interesting to see how satellite internet fits into the mix, especially for remote areas and as a backup for terrestrial networks. #SpaceRace
John-Nicholas Furst @JohnNFurst ·
Interesting twist in the GPU market: @nvidia's 3090s are being repurposed for AI work in Chinese data centers. Stripped of their gaming coolers and refitted with more data-center-friendly designs, the cards are a response to the recent GPU export restrictions on China. It shows the adaptability of the tech, but also hints at potential new pressures on GPU availability and pricing.
John-Nicholas Furst @JohnNFurst ·
@Google's new geothermal project with Fervo in Nevada sounds cool. Exploring different renewable energy sources to power data centers is definitely the way forward.
John-Nicholas Furst @JohnNFurst ·
Just read about ARPA-E's investment in 15 new projects aimed at enhancing the energy grid's reliability and efficiency. It's interesting to see a focus on incorporating more renewable energy sources through innovative semiconductor technologies. Seems like a significant step towards a sustainable and resilient power infrastructure. arpa-e.energy.gov/document/ultra…
John-Nicholas Furst @JohnNFurst ·
Reading about Neuchips RecAccel N3000 from SC23. Their innovative approach to AI inference with smaller, memory-rich chips is quite a game-changer for data centers and workstations. Looking forward to seeing how these advancements shape the AI landscape.
John-Nicholas Furst @JohnNFurst ·
QUIC support in OpenSSL 3.2 is a game-changer, offering multi-stream capabilities for improved network protocol efficiency. While currently client-side, future releases aim to expand this. Exciting times ahead for secure communications! #QUICProtocol #SSL
John-Nicholas Furst @JohnNFurst ·
Big update in the world of cryptography: OpenSSL 3.2 is out! 🚀 A standout feature is the support for client-side QUIC, enhancing transport layer security. This major update paves the way for more robust and efficient web communications. #OpenSSL #Cybersecurity
John-Nicholas Furst @JohnNFurst ·
LA's new region, along with Seattle, marks our continued commitment to the West Coast's tech ecosystem. With @AMD EPYC CPUs and diverse instance types, we're meeting the growing demand for media-focused workloads. #WestCoastTech #DataCenterExpansion
John-Nicholas Furst @JohnNFurst ·
Miami's region strengthens our connection to LATAM, capitalizing on its tech and venture capital boom. Perfectly positioned to serve this dynamic market, we're bringing even more of Akamai's power to a burgeoning business hub. #TechGrowth #LATAMExpansion
John-Nicholas Furst @JohnNFurst ·
Exciting news from @Akamai! We've expanded our global capacity with two new core compute regions in Los Angeles and Miami, now available for all customers. This significant growth boosts our compute capacity by 4X, enhancing our presence in key tech hubs. #Akamai #CloudComputing
John-Nicholas Furst @JohnNFurst ·
Alongside H200, @NVIDIA's announcement of the HGX H200 platform and Quad GH200 board showcases their commitment to diverse, high-performance computing solutions. Excited to see how these developments shape the future of AI and HPC.
John-Nicholas Furst @JohnNFurst ·
The H200's switch to HBM3E memory not only brings higher memory capacity (141GB) but also a 43% bandwidth increase. This is crucial for memory-intensive tasks, especially in AI.
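Those two figures can be sanity-checked together. A quick sketch, assuming (not stated in the tweet) an H100 SXM baseline of 80 GB of HBM3 at roughly 3.35 TB/s:

```python
# Back-of-envelope check on the H200 memory claims above.
# Assumed baseline (not in the tweet): H100 SXM, 80 GB HBM3, ~3.35 TB/s.
h100_gb, h100_tbs = 80, 3.35
h200_gb = 141                # capacity from the tweet
h200_tbs = h100_tbs * 1.43   # +43% bandwidth per the tweet

print(f"Capacity: {h200_gb} GB ({h200_gb / h100_gb:.2f}x the H100)")
print(f"Implied bandwidth: {h200_tbs:.2f} TB/s")
```

The implied ~4.8 TB/s lines up with NVIDIA's published H200 figure, so the 43% claim is internally consistent with the capacity jump from 80 GB to 141 GB.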
John-Nicholas Furst @JohnNFurst ·
@NVIDIA's latest H200 accelerator with HBM3E memory is set to revolutionize server GPU performance. Offering a significant bandwidth boost and larger workload capacity, it's a leap forward for AI and HPC applications.
John-Nicholas Furst @JohnNFurst ·
@MicronTech, your new 128 GB DDR5 RDIMMs are a game-changer for servers, especially with the rise of LLMs and higher CPU core counts. The shift away from 3DS TSV to 32 Gb monolithic DDR5 dies not only promises higher energy efficiency but also heralds faster AI training. Kudos on achieving such impressive speed and density milestones with your 1β technology. #DDR5 #ServerInnovation
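The density math behind that module is straightforward. A Python sketch, simplified in that it counts only the data path and ignores the extra dies an ECC RDIMM carries for check bits:

```python
# How many 32 Gb monolithic DDR5 dies a 128 GB module implies.
# Simplification: data path only; a real ECC RDIMM adds dies for ECC.
die_gbit = 32
die_gbyte = die_gbit // 8            # 4 GB per monolithic die
module_gbyte = 128

data_dies = module_gbyte // die_gbyte
print(f"{die_gbyte} GB per die -> {data_dies} dies for 128 GB of data")
```

Reaching 4 GB in a single monolithic die is what lets the module skip the stacked 3DS TSV packaging the tweet mentions moving away from.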
John-Nicholas Furst @JohnNFurst ·
The @AMD Ryzen Threadripper 7000 series revitalizes the HEDT market with impressive core counts and Zen 4 architecture. Balancing the Pro and non-Pro variants shows keen insight into diverse user needs. A significant move for power users seeking more than mainstream CPUs offer. #Threadripper7000 #HEDTRevival
John-Nicholas Furst @JohnNFurst ·
@RajaXg, your insights on the evolving AI hardware landscape are invaluable. It's fascinating to see the diversity in viable options for AI models, moving beyond just NVIDIA. The progress made by Apple, AMD, and Intel, particularly with Intel's Ponte Vecchio and Gaudi, highlights the rapid advancements in this space. Your findings on various models and the current limitations underscore the dynamic nature of AI hardware development. Excited to see how this diversity will continue to enrich AI capabilities and applications.
Raja Koduri @RajaXg ·
Very encouraging to see the steady increase of viable hardware options that can handle various AI models. At the beginning of the year, there was only one practical option: NVIDIA. Now we see at least 3 vendors providing reasonable options: Apple, AMD and Intel. We have been profiling several options and I will share some of our findings here.

The good stuff
- Apple Macs were a pleasant surprise in how easy it is to get various models running
- AMD also made impressive progress with PyTorch, and a lot more models run now than even 4-5 months ago on MI2XX and Radeon
- We tried both Intel Arc and Ponte Vecchio and they were able to execute everything we have thrown at them
- Intel Gaudi has very impressive performance on the models that work on that architecture. It's our current best option for LLM inference on select models
- Ponte Vecchio surprised us with its performance on our custom face swap model, beating everyone including the mighty H100. We suspect that our model may be fitting largely in the Rambo cache

The wishlist
- For training and inference of large models that don't fit in memory, NVIDIA is still the only practical option. Wishing that there are more options in 2024 here
- While compatibility is getting better, a ton of performance is still left on the table on Apple, AMD and Intel. Wishing that software will keep getting better and increase their HW utilization. There is still room on compatibility as well, particularly with supporting various encodings and model parameter sizes on AMD
- Intel Gaudi looks very promising performance-wise; wishing that more models seamlessly work out of the box without Intel intervention
- Wishing that both AMD and Intel release new gaming GPUs with more memory capacity and bandwidth
- Wishing that Intel releases a PVC kicker with more memory capacity and bandwidth. Currently it's the best option we have to bring our artists' face swap training workflow from 3 days down to a few hours. It scales linearly from 1 GPU to 16 GPUs
- Wishing Intel support for PyTorch is as frictionless as AMD and NVIDIA. Maybe Intel should consider supporting PyTorch ROCm or upstreaming oneAPI support under the CUDA device

Really grateful to all vendors for providing access to hardware and developer support. Looking forward to continuing to fill our data center with an interesting mix of architectures.