PyTorch 2.11 is now available, featuring 2,723 commits from 432 contributors since PyTorch 2.10. This release prioritizes performance scaling for distributed training and next-generation hardware architectures.
Highlights include a FlashAttention-4 backend for FlexAttention on Hopper and Blackwell GPUs, differentiable collectives for distributed training, and performance optimizations for Intel GPUs via XPU Graph. This release also expands operator coverage for Apple Silicon (MPS) and adds RNN/LSTM GPU export support.
🖇️ Read the PyTorch 2.11 release blog and release notes: pytorch.org/blog/pytorch-2…
#PyTorch #OpenSource #AIInfrastructure
