Kelvin (Keqiang) Yan

312 posts


@KeqiangY

Postdoc @Princeton CS. DE Shaw Research Doctoral Fellow. AI and LLMs for Science. Materials, Proteins, Molecules. Ex @MSFTResearch @TAMU @PKU1898.

Joined September 2020
1.2K Following · 2.2K Followers
Tian Xie
Tian Xie@xie_tian·
Personal update: I left Microsoft a couple of weeks ago. I am starting my new role as a founding member of technical staff at Project Prometheus, where I will be helping to build a new team.

It has been an incredible four years at Microsoft Research AI for Science. We built MatterGen and MatterSim as a flywheel to accelerate materials discovery, synthesized these materials, and tested their performance in the real world. It’s been an honor to work with everyone to make all of this a reality.

We are hiring across London, San Francisco, and Zurich at Project Prometheus. Please reach out if you are interested in joining us.
39 replies · 23 reposts · 666 likes · 108.8K views
Kexin Huang
Kexin Huang@KexinHuang5·
Today we’re launching Phylo, a research lab studying agentic biology, backed by a $13.5M seed round co-led by @a16z and @MenloVentures / Anthology Fund @AnthropicAI.

We’re also introducing a research preview of Biomni Lab, the first Integrated Biology Environment (IBE), where we’re imagining a new way biologists work. Biomni Lab uses agents to orchestrate hundreds of biological databases, software tools, molecular AI models, expert workflows, and even external research services in one workspace, supporting research end-to-end from question to experiment to result. Agents handle the mechanics, while you define the question, then review, steer, and decide. Scientists end up spending more time on science: asking questions, understanding mechanisms, and eliminating diseases.

Phylo (@phylo_bio) is a spin-out of @ProjectBiomni, where we will maintain the open-source community and push open-science research.

I’m grateful to continue building with my co-founders @YuanhaoQ @jure @lecong and the dream founding team @serena2z @TianweiShe @huangzixin20151 @gm2123 @margaretwhua @malayhgandhi. We’re also fortunate to be advised by leading scientists @zhangf, Carolyn Bertozzi, and @fabian_theis, and supported by an amazing group of investors including @JorgeCondeBio @zakdoric Matt Kraning @ZettaVentures @dreidco @conviction @saranormous @svangel @valkyrie_vc and others.

Biomni Lab is available for free today: biomni.phylo.bio
Learn more in our launch post: phylo.bio/blog/company-f…
We are also hosting launch events - join us in South San Francisco: luma.com/n8k8qb0n Virtual: luma.com/l5ryjaij
We’re also hiring! phylo.bio/careers
112 replies · 242 reposts · 1.7K likes · 435.4K views
Yuanqi Du
Yuanqi Du@YuanqiD·
I’m sad about the OpenReview bug; it’s destroying a review and credit system built over many years, and I’m afraid it will continue affecting us for years to come. It does reveal the many parties of interest in this process. On the bright side, is this a good time to test a new system?
4 replies · 1 repost · 28 likes · 4.9K views
Kelvin (Keqiang) Yan
Kelvin (Keqiang) Yan@KeqiangY·
I will be at #NeurIPS2025 next week from Wednesday, and I look forward to meeting old and new friends! Just let me know if you want to talk about AI for Science, AI for Materials Discovery, LLMs for Science, or agentic systems~😆
1 reply · 1 repost · 10 likes · 1.1K views
Asuka🎀Redpanda
Asuka🎀Redpanda@VoidAsuka·
I announce that, starting today, I am entering into a deep collaboration with OpenAI, with full integration of ChatGPT. If there’s anything you don’t know, you can ask me; I’ll ask ChatGPT on your behalf.
6 replies · 0 reposts · 24 likes · 4.5K views
Xiang Fu
Xiang Fu@xiangfu_ml·
I joined @periodiclabs in May. We’re building AI scientists + autonomous labs to create breakthroughs you can hold. Longer take: xiangfu.co/science More on Periodic: periodic.com
William Fedus@LiamFedus

Today, @ekindogus and I are excited to introduce @periodiclabs. Our goal is to create an AI scientist.

Science works by conjecturing how the world might be, running experiments, and learning from the results. Intelligence is necessary, but not sufficient. New knowledge is created when ideas are found to be consistent with reality. And so, at Periodic, we are building AI scientists and the autonomous laboratories for them to operate.

Until now, scientific AI advances have come from models trained on the internet. But despite its vastness, it’s still finite (estimates are ~10T text tokens, where one English word may be 1-2 tokens), and in recent years the best frontier AI models have fully exhausted it. Researchers seek better use of this data, but as any scientist knows: though re-reading a textbook may give new insights, they eventually need to try their idea to see if it holds.

Autonomous labs are central to our strategy. They provide huge amounts of high-quality data (each experiment can produce GBs of data!) that exists nowhere else. They generate valuable negative results, which are seldom published. But most importantly, they give our AI scientists the tools to act.

We’re starting in the physical sciences. Technological progress is limited by our ability to design the physical world. We’re starting here because experiments have high signal-to-noise and are (relatively) fast, physical simulations effectively model many systems, and, more broadly, physics is a verifiable environment. AI has progressed fastest in domains with data and verifiable results, for example, in math and code. Here, nature is the RL environment.

One of our goals is to discover superconductors that work at higher temperatures than today's materials. Significant advances could help us create next-generation transportation and build power grids with minimal losses. But this is just one example: if we can automate materials design, we have the potential to accelerate Moore’s Law, space travel, and nuclear fusion.

We’re also working to deploy our solutions with industry. As an example, we're helping a semiconductor manufacturer that is facing issues with heat dissipation on their chips. We’re training custom agents for their engineers and researchers to make sense of their experimental data in order to iterate faster.

Our founding team co-created ChatGPT, DeepMind’s GNoME, OpenAI’s Operator (now Agent), the neural attention mechanism, and MatterGen; we have scaled autonomous physics labs and contributed to some of the most important materials discoveries of the last decade. We’ve come together to scale up and reimagine how science is done.

We’re fortunate to be backed by investors who share our vision, including @a16z, who led our $300M round, as well as @Felicis, DST Global, NVentures (NVIDIA’s venture capital arm), @Accel, and individuals including @JeffBezos, @eladgil, @ericschmidt, and @JeffDean. Their support will help us grow our team, scale our labs, and develop the first generation of AI scientists.

18 replies · 22 reposts · 347 likes · 53.1K views
Rishabh Anand
Rishabh Anand@rishabh16_·
@KeqiangY @iclr_conf Yah! I really like this format of choosing papers to review :D Beats having to *rank* all the papers
1 reply · 0 reposts · 1 like · 309 views
Kelvin (Keqiang) Yan
Kelvin (Keqiang) Yan@KeqiangY·
Received the hard copy of our book on AI for Science from my dear PhD supervisor @ShuiwangJi ✨ Check it out in Foundations and Trends in Machine Learning if interested!
0 replies · 1 repost · 17 likes · 1.9K views
Tian Xie
Tian Xie@xie_tian·
I am honored to join this year’s MIT Technology Review Innovators Under 35 list. It is amazing to see the impact of generative AI on materials design being recognized in this prestigious list. Credit to the whole MatterGen team for making this happen.
Microsoft Research@MSFTResearch

We’re thrilled to share that Tian Xie, Principal Research Manager at Microsoft Research AI for Science, has been named to MIT Technology Review’s 2025 Innovators Under 35! Tian recently led the development of MatterGen, our generative AI model for materials discovery. msft.it/6010s9dse

10 replies · 2 reposts · 107 likes · 12.3K views
Ada Fang
Ada Fang@AdaFang_·
I’ve joined @GoogleDeepMind as a Student Researcher working on AI and genomics in London 🧬🇬🇧
229 replies · 257 reposts · 8.1K likes · 655K views
Prof. Anima Anandkumar
Prof. Anima Anandkumar@AnimaAnandkumar·
Very pleased to see our AI model GenSLM designing novel and versatile enzymes in a challenging setting in the @francesarnold lab. biorxiv.org/content/10.110…

Back in 2022, we built the world's first genome-scale language model (GenSLM), trained on all known bacterial and viral genomes, with @arvindr_. It encodes biological sequence patterns at the codon level, directly mirroring the translation process from DNA to protein, which also enables the model to incorporate synonymous substitutions while learning protein-level effects.

In the latest work, we demonstrated how GenSLM can create novel enzymes in the tryptophan synthase (TrpB) family. TrpB is an attractive industrial biocatalyst, and considerable effort has been devoted to engineering variants using methods such as directed evolution, for which Frances Arnold won the Nobel Prize. However, directed evolution is time-consuming since it is done in a wet lab, and it also needs a starting candidate, which may not always be optimal.

We finetuned GenSLM on natural TrpBs and developed a careful filtering pipeline to generate promising new enzymes. To our surprise, several AI-generated candidates outperformed both natural and laboratory-optimized TrpBs on native and non-canonical substrates. Further, this enhanced functional versatility does not stem from the natural enzyme, highlighting the creative potential of generative AI.

AI can generate complex enzymes which not only preserve natural structure and function but also acquire non-natural properties. This is only the beginning! #AIGuidedDesign #ProteinEngineering #EnzymeOptimization #Biocatalysts #GenerativeAI @jsunn_y @francesarnold @arvindr_
4 replies · 28 reposts · 121 likes · 14.9K views
Hongliang Xin
Hongliang Xin@XinGroup_VT·
Compute may be finite, but ideas are not. Excited to catch up with @victorxfung
2 replies · 1 repost · 23 likes · 2.1K views
Weijie Su
Weijie Su@weijie444·
Our paper was accepted to Physical Review E (originally accepted at Physical Review Research, then transferred to waive the fees). It was my first experience publishing in a physics venue, and it was fun!
Weijie Su@weijie444

New Research (w/ amazing @hangfeng_he) "A Law of Next-Token Prediction in Large Language Models" LLMs rely on NTP, but their internal mechanisms seem chaotic. It's difficult to discern how each layer processes data for NTP. Surprisingly, we discover a physics-like law on NTP:

2 replies · 4 reposts · 38 likes · 3K views
Jure Leskovec
Jure Leskovec@jure·
Reinforcement learning leads to better AI scientist agents! 🚀 By training models end-to-end with multi-turn RL, we’re seeing breakthroughs in reasoning and problem-solving for real biomedical research. Excited to introduce Biomni-R0, an agentic LLM trained with this approach. On 10 real research tasks, it nearly doubles the performance of its open-source base model and even surpasses closed-source frontier models by >10%. A scalable path to expert-level AI in biomedicine. Led by @RyanLi0802 @KexinHuang5 @ProjectBiomni, in an exciting collaboration with the SkyRL team @shiyi_c98 @NovaSkyAI. Learn more: biomni.stanford.edu/blog/biomni-r0… Open-sourcing soon!
8 replies · 65 reposts · 271 likes · 29.4K views
Kelvin (Keqiang) Yan
Kelvin (Keqiang) Yan@KeqiangY·
@ddvd233 Too funny. Could it be that it just searches keywords to count how many matches there are, and never actually opens them to read?
1 reply · 0 reposts · 2 likes · 265 views
dvd@dvd.chat
dvd@dvd.chat@ddvd233·
Claude Research just pulled up 313 sources in one go... is the trend now to compete over who has more sources? (
14 replies · 0 reposts · 31 likes · 17.4K views
Kangwook Lee
Kangwook Lee@Kangwook_Lee·
Happy to share that I got tenured last month! While every phase in life is special, this one feels a bit more meaningful, and it made me reflect on the past 15+ years in academia. I'd like to thank @UWMadison and @UWMadisonECE for tremendous support throughout the past six years, helping me grow.

I am very grateful to all the teachers I’ve met in the past 15+ years of research since undergrad. Prof. Sae-Young Chung introduced me to engineering, and in particular, information theory. Prof. Yung Yi and Prof. Song Chong introduced me to communication network theory, and from Prof. Yung Yi I learned the true passion for research. I miss him a lot. At Berkeley, I learned everything about research from my advisor Prof. Kannan Ramchandran. In particular, I learned that the most important motivation behind great research is endless curiosity and the desire to really understand how things work. From my postdoc mentor Prof. Changho Suh at KAIST, I learned the mindset of perfection, making every single paper count.

During my assistant professorship, I was lucky to have the best colleagues. I learned so much from Rob (@rdnowak) and Dimitris (@DimitrisPapail). I am still learning from Dimitris' unique sense of research taste and Rob's example of how to live as the coolest senior professor. I also learned a lot from the Optibeer folks Steve Wright, Jeff Linderoth, and my ECE colleagues Ramya (@ramyavinayak) and Grigoris (@Grigoris_c). Thank you all!

I’d like to thank my former students and postdocs too. Daewon and Jy-yong (@jysohn1108) joined my lab early on and worked on many interesting projects. Changhun and Tuan (@tuanqdinh) joined midway through his PhD and worked on interesting research projects, and in particular, Tuan initiated our lab’s first LLM research five years ago! Yuchen (@yzeng58), Ziqian (@myhakureimu), and Ying (@yingfan_bot) joined around the same time, and working with them has been the most fun and rewarding part of my job. Each took on a challenging topic and did great work. Yuchen advanced LLM fine-tuning, especially parameter-efficient methods. Ziqian resolved the mystery of LLM in-context learning. Ying explored "a model in a loop," focusing on diffusion models and looped Transformers. They all graduated earlier this year and are continuing their research at @MSFTResearch and @Google. Best wishes! 🥰

I am also grateful for co-advising Nayoung (@nayoung_nylee), Liu (@Yang_Liuu), and Joe (@shenouda_joe) with Dimitris and/or Rob. Nayoung's work on Transformer length generalization, Liu's on in-context learning, and Joe's on the mathematical theory of vector-valued neural networks are all very exciting. They are all graduating very soon, so stay tuned! (And reach out to them if you have great opportunities!)

I also had the pleasure of working with master's students Ruisu, Andrew, Jackson (@kunde_jackson), Bryce (@BryceYicongChen), and Michael (@michaelgira23), as well as many visiting students and researchers. Thank you for being such great collaborators.

I’d like to thank and introduce the new(ish) members too. Jungtaek (@jungtaek_kim) and Thomas are studying LLM reasoning. Jongwon (@jongwonjeong123) just joined, and interestingly he was an MS student in Prof. Chung’s lab at KAIST, which makes him my academic brother turned academic son. Ethan (@ethan_ewer), Lynnix, and Chungpa (visiting) are also working on cool LLM projects!

Thank you to @NSF, @amazon, @WARF_News, @FuriosaAI, @kseayg, and KFAS for generous funding. I also learned a lot from leading and working with the AI team at @Krafton_AI, particularly with Jaewoong @jaewoong_cho, so thank you for that as well.

Last and most importantly, thanks to my family! ❤️ I only listed my mentors and mentees here, not all my amazing collaborators, but thank you all for the great work together. With that, I’m excited for what’s ahead, and so far no "tenure blues." Things look the same, if not more exciting... haha!
63 replies · 6 reposts · 296 likes · 24.1K views
Kexin Huang
Kexin Huang@KexinHuang5·
🚀 Thrilled to share a preview of Biomni-R0: we trained the first RL agent end-to-end for biomedical research. ➡️ Nearly 2× stronger than its open-source base ➡️ >10% better than frontier closed-source models ➡️ A scalable path to hill-climb to expert-level performance 🔗 Technical report: biomni.stanford.edu/blog/biomni-r0… Collab between @ProjectBiomni and SkyRL @NovaSkyAI, with the amazing @RyanLi0802 @shiyi_c98 @YuanhaoQ @jure. Open-sourcing soon!
20 replies · 68 reposts · 317 likes · 70.5K views