

Abhinav Bhatele

@bhatele
Prof @UMDCS and @umiacs, Lead Parallel Software & Systems Group @hpc_group. @IITKanpur & @UofIllinois Alum. Ex-@Livermore_Lab. Views are my own.



On Mar 25, we set a personal record with 2 PhD defenses in a day! Congrats to @onurcankur & @jhdavis_josh for their excellent work: Onur's title: "Longitudinal Data Analytics of HPC Systems & Applications" Josh's title: "LLM-driven Development of Performant & Portable GPU Codes"

We have arrived in St. Louis for @Supercomputing 2025. Learn more about our group's research through the various talks, panels and tutorials below. We will also be at the @UofMaryland booth (3123) at various times. #SC25 #UMDCS #HPC #AI #MLSys #AI4Science

A large number of PhD students in my group have graduated or will be graduating by Spring, so I am recruiting several PhD students for the next admission cycle (Fall 2026). If you want to work with us, apply by Dec 5 and drop me a short email. Please repost/share widely. #HPC #AI



We are on a roll: our second successful dissertation defense in a week (March 28)! Congratulations to @siddharth_3773 on becoming the second PhD graduate from PSSG!! Dissertation title: "Optimizing Communication in Parallel Deep Learning on Exascale-class Machines" #HPC #AI #HPC4AI


We proudly present our first newly minted PhD, Dr. Daniel Nichols (@DanielNichols10), who successfully defended his dissertation today. Congrats & best wishes for a bright future ahead!! Dissertation Title: "On Learning Behaviors of Parallel Code and Systems Across Modalities" #HPC #AI





Can we get away w/ reducing attention keys to a lower-dimensional space to optimize compute during inference? @prajwal1210 & @siddharth_3773 investigated using PCA on key vectors & found that the rank of attention keys is much lower than the full dimensionality. #NeurIPS2024
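The measurement behind this tweet can be sketched in a few lines: run PCA (via SVD of the centered matrix) over a set of key vectors and count how many principal components are needed to explain most of the variance. This is a minimal illustration with synthetic low-rank keys plus noise, not the authors' actual setup; the data, dimensions, and 99% threshold are all assumptions for demonstration.

```python
import numpy as np

# Synthetic "key" vectors with low-rank structure plus small noise,
# mimicking the observation that attention keys have low effective rank.
rng = np.random.default_rng(0)
n_tokens, d_model, true_rank = 1024, 128, 16
K = rng.normal(size=(n_tokens, true_rank)) @ rng.normal(size=(true_rank, d_model))
K += 0.01 * rng.normal(size=(n_tokens, d_model))

# PCA: center the matrix, take singular values; each squared singular
# value is the variance captured by one principal component.
Kc = K - K.mean(axis=0)
s = np.linalg.svd(Kc, compute_uv=False)
var_ratio = (s**2) / (s**2).sum()

# Number of components needed to explain 99% of the variance.
k99 = int(np.searchsorted(np.cumsum(var_ratio), 0.99) + 1)
print(f"effective rank (99% variance): {k99} of {d_model} dims")
```

If keys really live in a low-dimensional subspace, `k99` comes out far below `d_model`, which is what motivates projecting keys down before computing attention scores at inference time.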


Introducing 🧮Abacus Embeddings, a simple tweak to positional embeddings that enables LLMs to do addition, multiplication, sorting, and more. Our Abacus Embeddings trained only on 20-digit addition generalise near perfectly to 100+ digits. 1/n
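The core idea can be sketched simply: give each digit a positional embedding indexed by its place value (offset from the least significant digit), so digits of the same significance align across operands regardless of number length. This is a hypothetical, simplified sketch, not the paper's implementation; the table size, embedding width, and `digit_place_embeddings` helper are illustrative assumptions (in practice the embeddings are learned).

```python
import numpy as np

rng = np.random.default_rng(1)
max_places, d_embed = 128, 32
# One embedding per place value (ones, tens, hundreds, ...); random here,
# but learned during training in the real method.
place_table = rng.normal(size=(max_places, d_embed))

def digit_place_embeddings(number: str) -> np.ndarray:
    """Return one embedding per digit, indexed by place value.

    The rightmost digit gets place 0, the next gets place 1, etc., so the
    ones digit of "127" shares its embedding with the ones digit of "7".
    """
    places = range(len(number) - 1, -1, -1)  # leftmost digit = highest place
    return np.stack([place_table[p] for p in places])

e_short = digit_place_embeddings("7")    # ones place only
e_long = digit_place_embeddings("127")   # hundreds, tens, ones
```

Because position is tied to significance rather than absolute sequence index, a model trained on short additions sees the same positional signal when handed much longer numbers, which is the intuition behind the 20-digit-to-100+-digit generalization claim.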





