Xiaocong Yang

@xy51_uiuc
PhD student @illinoisCS. Founder of AI Interpretability @ Illinois. Alumni @Tsinghua_uni

We are recruiting PhD research interns for 2026. We focus on generative AI areas such as long-horizon reinforcement learning, LLMs with tool/skill/episodic memory, LLM-as-a-Judge (LLMaaJ) with non-verifiable rewards, and multimodal agents, among others. The ideal candidates are current PhD students with first-author publications in top NLP/ML venues such as ACL, NAACL, EMNLP, NeurIPS, ICLR, and ICML. If you are interested in this opportunity, please email me (jookyk at amazon.com) with your CV. linkedin.com/jobs/view/4336…

BREAKING 🚨 Anthropic just unveiled "Cowork," a major feature that turns Claude into a fully autonomous virtual assistant for everyone. It brings the deep agentic capabilities previously reserved for coders to general users, allowing Claude to perform complex tasks directly on your computer. The tool was built after Anthropic noticed developers using "Claude Code" for everyday admin tasks. Cowork now lets anyone grant Claude access to folders to manage files, do research, and complete multi-step workflows independently, acting as a digital employee that "does" instead of just chatting. Cowork is available today as a research preview for Claude Max subscribers on the macOS app 😡. Anthropic is releasing new coding/desktop agents much faster than all of its competitors. This launch exposes a massive gap in the current AI landscape: in 2026, Google still lacks a consumer browser agent, and xAI has yet to release a native CLI or agentic interface. While OpenAI has "Operator" and Google has the developer-focused "Antigravity," Anthropic is now the only major lab providing a true "do-it-for-me" experience for general users.

The ELLIS Institute is proud to announce that @coeff_giving is supporting our Principal Investigator @maksym_andr with a grant of $1,000,000 to fund his research on AI safety. Find out more on our website: institute-tue.ellis.eu/en/news/pi-mak…

New Anthropic Research: next generation Constitutional Classifiers to protect against jailbreaks. We used novel methods, including practical application of our interpretability work, to make jailbreak protection more effective—and less costly—than ever. anthropic.com/research/next-…

New year's read 📔 -- "Physics of AI Requires Mindset Shifts." I argue that "Physics of AI" research is hard due to the current publishing culture. But there is a simple solution -- curiosity-driven open research. kindxiaoming.github.io/blog/2025/phys…

For his debut TDS article, @xy51_uiuc explains how neural and symbolic models compress the world in fundamentally different ways, and how Sparse Autoencoders (SAEs) offer a bridge to connect them. towardsdatascience.com/neuro-symbolic…
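To make the SAE idea concrete, here is a minimal sketch (not from the article) of a sparse autoencoder forward pass: a ReLU encoder maps a dense activation vector into a wider, mostly-zero feature vector, and a linear decoder reconstructs the input from those features. All dimensions, weights, and the L1 penalty coefficient below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for illustration): the SAE widens the
# model's activation space into an overcomplete feature basis.
d_model, d_hidden = 16, 64

# Randomly initialized SAE weights (a real SAE would be trained).
W_enc = rng.normal(0, 0.1, (d_model, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(0, 0.1, (d_hidden, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode an activation vector into sparse features, then reconstruct it."""
    f = np.maximum(0.0, x @ W_enc + b_enc)  # ReLU zeroes out most features
    x_hat = f @ W_dec + b_dec               # linear decoder reconstructs the input
    return f, x_hat

x = rng.normal(size=d_model)        # stand-in for a hidden activation
features, recon = sae_forward(x)

# Training would minimize reconstruction error plus an L1 sparsity penalty;
# the few features left active are the interpretable, symbol-like units.
loss = np.mean((x - recon) ** 2) + 1e-3 * np.abs(features).sum()
```

The sparse feature vector is where the "bridge" lives: each active entry is a direction in activation space that can be inspected and named, giving neural representations a discrete, symbol-like handle.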
