Our paper "On the Alignment of Large Language Models with Global Human Opinion" has been accepted to the AAAI 2026 AIA track!
Great thanks to co-authors @MasahiroKaneko_ and @knccch.
arxiv.org/abs/2509.01418
A recent study (shared below) reminded me that what makes a PhD student happy is not publications or stipends, but good supervision.
Over the years, I have supervised 53 PhD students, and I can say with conviction that there is no single supervisory style that works for all. Each student is different, and every PhD journey has its ups and downs.
With some students, I spent a disproportionate amount of time helping them gain confidence. With stronger ones, my role was to guide them until they found the right problem and became self-driven.
Empathy matters. Unless a student gets genuinely interested in what they are doing, great work rarely happens.
A good supervisor must also have good relationships with colleagues. I never restricted my students from seeking advice from other faculty. In fact, I always encouraged it and built a network they could freely approach. I also followed an open-door policy - any student could walk into my office anytime for advice.
I never had a lab of my own in any institution I worked at. Instead, I built common labs, shared by all, to promote collaboration and collective ownership. I have always believed that the best research environments are those where people and facilities are shared, not owned.
I have always taken personal responsibility for ensuring that my PhD students find their first job after graduation. Since most of them worked on industry-supported projects, this was never a problem.
Nearly 90% of my PhD students wrote their first peer-reviewed paper with me.
Unlike in the US, where professors often get students with prior research experience, in India we work with raw talent and help shape students into capable researchers. Faculty in Indian institutions (at least in the institutions where I have worked so far - IIT Bombay, IIT Delhi, and BITS Pilani) often put in far more time and effort per student than many of our counterparts abroad.
The real reward comes from seeing students transform into confident researchers, innovators, and teachers. That makes everything worthwhile.
BREAKING: Apple's new study suggests AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini may not actually reason at all.
They may just be pattern-matching really well.
Here's what Apple discovered:
(hint: we're not as close to AGI as the hype suggests)
Top researchers from Meta, Yale, Stanford, Google DeepMind, and Microsoft laid out much of what we know about agents in a 264-page paper [book].
Here are some of their key findings:
Foundations of LLMs
This amazing new LLM book just dropped on arXiv.
200+ pages!
It covers areas such as pre-training, prompting, and alignment methods.
It looks like a great intro to LLMs for devs and researchers.
🎉Great news! Our paper "Speech Translation with Speech Foundation Models and Large Language Models: What is There and What is Missing?" received the Outstanding Paper and Area Chair's Awards!!! 👏
👇arxiv.org/pdf/2402.12025 #NLProc #ACL2024NLP
If you are not yet in the ARR reviewer/AC pool and would be interested in serving as an area chair for ARR 2024 June, please fill out this form by June 20: forms.office.com/pages/response…
If you haven't been invited to review for ARR 2024 June but are interested in helping us, please fill out this form by June 19: forms.office.com/pages/response…