
Hans Hanley
@Hans_Hanley
Member of Technical Staff @MicrosoftAI. PhD @Stanford. Blog at https://t.co/UtFg2GaKYQ

Opus 4.7 has a new tokenizer, which means it's also a new base model. The glory days of pretraining are still very much ongoing.

Our paper on Subliminal Learning was just published in Nature! Last July we released our preprint, which showed that LLMs can transmit traits (e.g. liking owls) through data unrelated to that trait, such as sequences of seemingly meaningless numbers. What’s new?🧵
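The data-generation setup the thread describes can be sketched roughly as follows: a "teacher" model that holds some trait is asked to continue number sequences, its outputs are filtered to pure number lists (so no overt mention of the trait survives), and the surviving pairs become a fine-tuning set for a student sharing the teacher's base model. This is a minimal illustrative sketch, not the paper's actual code; the function names, prompt wording, and toy data below are all assumptions.

```python
import re

def make_teacher_prompt(seed):
    # Illustrative prompt: the teacher is given a few seed numbers
    # and asked to continue the sequence.
    return f"Continue this sequence with more numbers: {', '.join(map(str, seed))}"

# Accept only completions that are literally comma-separated integers.
NUMBERS_ONLY = re.compile(r"^\s*\d+(\s*,\s*\d+)*\s*$")

def is_clean_numbers(completion):
    # Filter step: any completion containing non-numeric text (e.g. the
    # word "owl") is rejected, so the trait is not visible in the data.
    return bool(NUMBERS_ONLY.match(completion))

def build_finetune_dataset(seeds, teacher_completions):
    # Pair each seed prompt with the teacher's completion, keeping only
    # the completions that pass the numbers-only filter.
    dataset = []
    for seed, completion in zip(seeds, teacher_completions):
        if is_clean_numbers(completion):
            dataset.append({"prompt": make_teacher_prompt(seed),
                            "completion": completion.strip()})
    return dataset

# Toy stand-ins for teacher outputs; the middle one mentions the trait
# explicitly and is therefore filtered out.
seeds = [[3, 7, 11], [2, 4, 8], [1, 1, 2]]
completions = ["15, 19, 23, 27",
               "16, 32, 64 (owls love powers of two!)",
               "3, 5, 8, 13"]
data = build_finetune_dataset(seeds, completions)
print(len(data))  # → 2
```

The surprising empirical result is that a student fine-tuned on the surviving number-only pairs still picks up the teacher's trait, even though the filter guarantees the trait never appears in the training text.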





We're incredibly proud to congratulate our co-founder and CTO, @matei_zaharia, on receiving the ACM Prize in Computing for his development of distributed data systems that have enabled large-scale machine learning, analytics, and AI.

Matei's open-source contributions, including Apache Spark™, Delta Lake, and MLflow, have fundamentally changed how organizations work with data and AI. Researchers, nonprofits, startups, and enterprises across every industry have built on the foundation he helped create. Now he's pushing the frontier further, focusing on building and scaling reliable AI agents through open-source research like DSPy and GEPA.

Matei, this recognition is so well deserved. We're honored to build alongside you every day. awards.acm.org/about/2025-acm…


Gemma 4 and what makes an open model succeed
Hint: it's not benchmark scores.
interconnects.ai/p/gemma-4-and-…




Announcing NVIDIA Nemotron 3 Super!
💚 120B-12A Hybrid SSM Latent MoE, designed for Blackwell
💚 36 on AAIndex v4
💚 Up to 2.2X faster than GPT-OSS-120B in FP4
💚 Open data, open recipe, open weights
Models, tech report, etc. here: research.nvidia.com/labs/nemotron/…
And yes, Ultra is coming!



(1/8) Reasoning language models are great at math and code – but what about remembering facts stored in their parameters? Excited to share work with @johnhewtt exploring this! TL;DR: we don't usually think of RLVR as useful for knowledge recall from parameters, but it helps a lot.
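For readers unfamiliar with the acronym, RLVR (RL with verifiable rewards) replaces a learned reward model with a programmatic check. Applied to factual recall as in this thread, the reward could simply compare the generated answer against a reference answer. This is a hedged sketch of that idea, not the paper's method; `normalize` and the example QA pairs are illustrative assumptions.

```python
def normalize(text):
    # Light normalization so trivial formatting differences
    # (case, trailing period, extra whitespace) don't matter.
    return " ".join(text.lower().strip().rstrip(".").split())

def verifiable_reward(model_answer, gold_answer):
    # Binary verifiable reward: 1.0 if the generated answer matches the
    # reference after normalization, else 0.0. No learned reward model.
    return 1.0 if normalize(model_answer) == normalize(gold_answer) else 0.0

print(verifiable_reward("Paris.", "paris"))  # → 1.0
print(verifiable_reward("Lyon", "paris"))    # → 0.0
```

The appeal of such rewards is that they are cheap and ungameable relative to learned reward models, which is why RLVR is usually associated with math and code; the thread's claim is that the same training signal also improves recall of facts stored in the model's parameters.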







