
Our newest AI accelerator, Maia 200, is now online in Azure. Designed for industry-leading inference efficiency, it delivers 30% better performance per dollar than current systems. With more than 10 PFLOPS of FP4 throughput, roughly 5 PFLOPS of FP8, and 216 GB of HBM3e delivering 7 TB/s of memory bandwidth, it's optimized for large-scale AI workloads. It joins our broader portfolio of CPUs, GPUs, and custom accelerators, giving customers more options to run advanced AI workloads faster and more cost-effectively on Azure.
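
For a rough sense of how those headline numbers fit together, the ratio of peak compute to memory bandwidth tells you the arithmetic intensity at which a kernel stops being memory-bound, which is the regime most large-model inference lives in. The sketch below is a back-of-envelope roofline calculation using only the figures quoted above, treating "10+ PFLOPS" as 10 PFLOPS; the exact crossover for any real workload will depend on the model, batch size, and software stack.

```python
# Illustrative roofline arithmetic from the quoted Maia 200 headline specs.
# Peak figures only; real utilization will be lower.

PEAK_FP4_FLOPS = 10e15   # 10+ PFLOPS FP4 throughput (taken as 10 PFLOPS)
PEAK_FP8_FLOPS = 5e15    # ~5 PFLOPS FP8 throughput
HBM_BANDWIDTH = 7e12     # 7 TB/s HBM3e memory bandwidth
HBM_CAPACITY = 216e9     # 216 GB HBM3e capacity

# Arithmetic intensity (FLOPs per byte moved) where peak compute and peak
# bandwidth balance: kernels below this ridge point are memory-bound,
# kernels above it are compute-bound.
ridge_fp4 = PEAK_FP4_FLOPS / HBM_BANDWIDTH
ridge_fp8 = PEAK_FP8_FLOPS / HBM_BANDWIDTH
print(f"FP4 ridge point: ~{ridge_fp4:,.0f} FLOPs/byte")  # ~1,429 FLOPs/byte
print(f"FP8 ridge point: ~{ridge_fp8:,.0f} FLOPs/byte")  # ~714 FLOPs/byte

# Time to stream the full 216 GB of HBM once at peak bandwidth -- a lower
# bound on a single memory-bound pass over a model that fills the device.
print(f"Full-HBM read time: ~{HBM_CAPACITY / HBM_BANDWIDTH * 1e3:.0f} ms")  # ~31 ms
```

These ridge points are why low-precision formats like FP4 and FP8 matter for inference efficiency: shrinking each weight and activation cuts the bytes moved per operation, pulling more of the workload toward the compute-bound side of the roofline.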



















