Joe Fernandes
@joefern1

VP & GM - Red Hat AI Business Unit

Boston Area · Joined December 2007
454 Following · 2.9K Followers · 4K posts
Joe Fernandes retweeted
Red Hat @RedHat
How do you solve AI's biggest performance hurdles? On Technically Speaking, @kernelcdub & Nick Hill dive into vLLM, exploring how techniques like PagedAttention solve memory bottlenecks & accelerate inference: red.ht/4lDjJ5P.
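The PagedAttention idea mentioned above can be sketched in a few lines. This is a hypothetical toy model of the concept, not the vLLM implementation or API: the KV cache is split into fixed-size physical blocks, and each sequence keeps a "block table" mapping logical positions to physical blocks, so memory is allocated on demand rather than reserved up front for the maximum sequence length.

```python
# Toy sketch of block-based KV-cache paging (hypothetical names; not vLLM code).

BLOCK_SIZE = 16  # tokens stored per physical KV-cache block

class BlockAllocator:
    """Tracks which physical blocks in the KV cache are free."""
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))

    def alloc(self):
        return self.free.pop()

    def free_block(self, block_id):
        self.free.append(block_id)

class Sequence:
    """One request's generation state, holding a block table."""
    def __init__(self, allocator):
        self.allocator = allocator
        self.block_table = []   # logical block index -> physical block id
        self.num_tokens = 0

    def append_token(self):
        # Allocate a new physical block only when the current one fills up,
        # instead of pre-reserving space for the longest possible sequence.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.allocator.alloc())
        self.num_tokens += 1

allocator = BlockAllocator(num_blocks=64)
seq = Sequence(allocator)
for _ in range(40):              # 40 tokens -> ceil(40 / 16) = 3 blocks
    seq.append_token()
print(len(seq.block_table))      # 3
```

Because blocks are fixed-size and non-contiguous, short sequences waste at most one partial block, which is the memory saving the episode discusses.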
Joe Fernandes retweeted
Red Hat AI @RedHat_AI
.@_llm_d_ is a Kubernetes-native distributed LLM inference framework designed for fast, scalable serving across hardware accelerators with strong performance per dollar. Learn how it works (and see a live demo) at this week's @vllm_project office hours. Join details below 👇
Joe Fernandes retweeted
Red Hat AI @RedHat_AI
The Llama 4 herd is here! It brings a lot of goodies, like an MoE architecture and native multimodality, enabling developers to build personalized multimodal experiences. With Day 0 support in vLLM, you can deploy Llama 4 with @vllm_project now! Let's dig into it. (a thread)
Joe Fernandes retweeted
Eldar Kurtić @_EldarKurtic
Quantization in the era of reasoning models: how does quantization impact the reasoning capabilities of DeepSeek-R1 models across the distilled Llama and Qwen families? 👇 Check the thread for two surprising findings from evaluations of these models!
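The general technique being evaluated above can be illustrated with a minimal sketch of symmetric per-tensor int8 weight quantization. This is hypothetical illustrative code, not the authors' evaluation harness: weights are mapped to 8-bit integers with a single scale factor, and the round-trip error is what studies like this measure downstream as accuracy impact.

```python
# Minimal sketch of symmetric per-tensor int8 quantization (illustrative only).
import random

def quantize_int8(weights):
    # One scale maps the largest-magnitude weight to the int8 extreme (127).
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(1024)]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding to the nearest integer bounds the per-weight error by half a step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Each weight shrinks from 32 bits to 8 (a 4x reduction before overhead), and the per-weight error is bounded by half a quantization step; whether that error degrades multi-step reasoning is exactly the empirical question the thread investigates.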
Joe Fernandes retweeted
Isha Puri @ishapuri101
[1/x] Can we scale small, open LMs to o1 level? Using classical probabilistic inference methods, YES! Joint @MIT_CSAIL / @RedHat AI Innovation Team work introduces a particle filtering approach to scaling inference w/o any training! Check out …abilistic-inference-scaling.github.io
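The particle-filtering idea can be sketched on a toy problem. This is hypothetical code, not the paper's implementation, and the `extend` and `reward` functions are stand-ins for sampling a token from an LM and scoring a partial generation with a reward model: keep N candidates ("particles"), weight each partial candidate by its reward, and resample so inference compute concentrates on promising candidates, with no training involved.

```python
# Toy particle-filtering loop for inference-time scaling (stand-in functions).
import random

TARGET = 42  # toy goal standing in for "a correct final answer"

def extend(particle):
    # Stand-in for sampling one more token: a random increment.
    return particle + random.choice([1, 2, 3])

def reward(particle):
    # Stand-in process reward: partial states closer to the target score higher.
    return 1.0 / (1.0 + abs(TARGET - particle))

def particle_filter(num_particles=32, steps=20, seed=0):
    random.seed(seed)
    particles = [0] * num_particles
    for _ in range(steps):
        particles = [extend(p) for p in particles]            # propagate
        weights = [reward(p) for p in particles]              # weight
        particles = random.choices(particles,                 # resample:
                                   weights=weights,           # high-reward
                                   k=num_particles)           # states multiply
    return max(particles, key=reward)

best = particle_filter()
```

Scaling `num_particles` is the inference-time compute knob: more particles explore more of the generation space per query, which is how the approach trades extra inference compute for quality without touching the model weights.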
Joe Fernandes retweeted
Red Hat AI @RedHat_AI
.@IBM's Granite models are known for efficiency and performance. The Neural Magic team at @RedHat optimized them even further in Granite 3.1!
🪶 3.3x smaller
⚡ Up to 2.8x faster
🎯 99% accuracy retention
Here's how these improvements unlock efficient open-source AI deployments 🧵
Joe Fernandes retweeted
Mike Barrett @gadfly_io
Great talk by Henrique Romao Oliveira of @BancoOriginal and Gabriel Sampaio of Red Hat about their work on Pix. @openshiftcommon
Joe Fernandes retweeted
Mike Barrett @gadfly_io
Good points at @openshiftcommon from John Maciocci and Suresh Ganesan of @CitizensBank: once you have a platform in place, you need to keep driving skill sets and education to keep it successful.
Joe Fernandes @joefern1
"Taming the pets: Managing a heterogeneous and sovereign multi-cloud container platform". Long-time @openshift customer Swiss Federal Railways talks about managing a large fleet of #Kubernetes clusters across AWS, Azure, and T-Systems Open Telekom Cloud. @openshiftcommon @RedHatSummit
Joe Fernandes @joefern1
"Safely Navigating Storm Clouds" with @openshift running containers and VMs across 100+ weather forecast office locations at the National Oceanic and Atmospheric Administration (NOAA). @openshiftcommon @RedHat