Chi Chen

173 posts

Chi Chen

@chc273

Quantum Applications at IonQ. Views are mine.

Redmond, WA · Joined November 2020
500 Following · 566 Followers
Pinned Tweet
Chi Chen @chc273
In reflecting on 2025, I keep coming back to how quickly AI moved from “interesting” to genuinely transformative for science and engineering. Over the last few months in particular, I’ve felt a real shift: for the first time, AI agents can reliably take on meaningful research engineering tasks and, given a specific ask, produce genuinely useful results fairly autonomously.

That shift has changed how I spend my time. My workflow has flipped: I now spend ~90% on problem definition, system design/architecture, validation, and review, and perhaps less than 10% on implementation. Not long ago, it was the other way around. It’s been both exciting and humbling. The leverage is real, but the responsibility to verify and steer the work is higher than ever.

I’ve also been energized by the momentum in AI for materials science and physical AI, especially seeing so many strong teams and startups forming around real technical problems and the Genesis Mission executive order. It feels like an inflection point, one where the frontier is less about whether these tools can work and more about how we build trustworthy, repeatable workflows around them.

To close the year, I’m sharing a 148-page review paper generated zero-shot by an AI agent I built for fun. This is something I used to daydream about, and now it’s surprisingly accessible. It’s not perfect, but the trajectory is very promising. Moments like this give me a lot of hope that AI will significantly accelerate the pace of science. 2026 will be another big year for AI for Science.

Prompt: "Impacts and potentials of quantum computing for artificial intelligence"
Paper link: cchen.me/files/quantum_…
Blog: cchen.me/posts/2025-12-…

#AI4Science #AIAgent #materialsscience #physicalai #quantum
1 reply · 2 reposts · 10 likes · 1.1K views
Chi Chen @chc273
Fun weekend playaround: browser-native M3GNet in JavaScript cchen.me/fun/
✅ Predict crystal properties: band gap, formation energy, refractive index, shear/bulk modulus
✅ Structure optimization + molecular dynamics
No backend, just client-side compute. #AI4Science #AI4Materials
0 replies · 0 reposts · 5 likes · 204 views
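
There's no public code reference for the browser demo here, but as a rough server-side analogue, here is a minimal sketch of the same kind of M3GNet property prediction in Python via the matgl library. The pretrained model tag and the `load_model`/`predict_structure` calls are recalled from memory and should be treated as assumptions to verify against matgl's docs:

```python
import matgl
from pymatgen.core import Lattice, Structure

# Toy input: a conventional fcc Al cell (hypothetical example structure).
struct = Structure(
    Lattice.cubic(4.05),
    ["Al"] * 4,
    [[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]],
)

# Pretrained formation-energy model; the tag below is assumed from memory.
model = matgl.load_model("M3GNet-MP-2018.6.1-Eform")
e_form = model.predict_structure(struct)
print(f"Predicted formation energy: {float(e_form):.3f} eV/atom")
```
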
Chi Chen @chc273
@BenBlaiszik Absolutely Ben! 10x more bets and perhaps 5x more domains/contexts one can work on simultaneously
0 replies · 0 reposts · 1 like · 72 views
Ben Blaiszik @BenBlaiszik
"I now spend ~90% on problem definition, system design/architecture, validation, and review, and perhaps less than 10% on implementation." This hit the nail on the head... For me this has enabled placing 10x more bets, so my implementation time is ~constant or actually increased!
1 reply · 0 reposts · 5 likes · 440 views
Chi Chen @chc273
Congrats! Polymers are central to materials science, yet AI for materials is still crystal-first. We hope this work helps change that. Proud to support the team these last few years. Also shoutout to Piero @erunzzz and Guillem @guillemsimeon, both on X and quietly impactful.
Gregor Simm @gncsimm

MLFFs 🤝 Polymers — SimPoly works! Our team at @MSFTResearch AI for Science is proud to present SimPoly (SIM-puh-lee) — a deep learning solution for polymer simulation.

Polymeric materials are foundational to modern life—found in everything from the clothes we wear and the food we consume to high-performance materials in aerospace, electronics, and medicine. Today, we introduce a new way to simulate them.

We built a machine learning force field (MLFF) to predict macroscopic properties across a broad range of polymers—trained only on quantum-chemical data, with no experimental fitting. Specifically, we accurately compute polymer densities via large-scale MD simulations, achieving higher accuracy than classical force fields. We also capture second-order phase transitions, enabling prediction of glass transition temperatures. These two properties are fundamental to processing and application design.

Finally, we created a benchmark based on experimental data for 130 polymers plus an accompanying quantum-chemical dataset—laying the foundation for a fully in silico design pipeline for next-generation polymeric materials.

The incredible team: Jean Helie, @temporaer, Yicheng Chen, Guillem Simeon, @a_kzna, @ErnestoCheco, @erunzzz, Gabriele Tocci, @chc273, @yatao_li, @SherryLixueC, @zunwang_msr, Bichlien H. Nguyen, Jake A. Smith, and Lixin Sun.

📄 Preprint: arxiv.org/abs/2510.13696
⚙️ Data and code release: in progress ⏳

#MLFFs #Polymers #AIforScience #DeepLearning #SimPoly #ScientificML #Microsoft #MicrosoftResearch #MicrosoftQuantum

0 replies · 1 repost · 15 likes · 1.4K views
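
The glass-transition prediction mentioned above is classically extracted from a kink in the density-vs-temperature curve produced by MD. The sketch below shows that standard bilinear-fit analysis on synthetic data; all numbers are hypothetical and this is not SimPoly's actual pipeline:

```python
import numpy as np

# Synthetic density-vs-temperature data (all values hypothetical):
# the glassy branch below Tg expands less steeply than the melt above it.
rng = np.random.default_rng(0)
T = np.linspace(100.0, 600.0, 26)            # temperature grid, K
tg_true, rho_tg = 400.0, 1.20                # assumed Tg (K) and density there (g/cm^3)
rho = np.where(
    T < tg_true,
    rho_tg - 2e-4 * (T - tg_true),           # glass: shallow slope
    rho_tg - 6e-4 * (T - tg_true),           # melt: steeper slope
) + rng.normal(0.0, 1e-3, T.size)            # measurement noise

def fit_tg(T, rho):
    """Bilinear fit: try every split point, fit a line to each side,
    keep the split with the smallest total squared residual, and return
    the intersection of the two lines as the Tg estimate."""
    best_sse, best_lines = np.inf, None
    for i in range(3, len(T) - 3):           # at least 3 points per branch
        c1, r1 = np.polyfit(T[:i], rho[:i], 1, full=True)[:2]
        c2, r2 = np.polyfit(T[i:], rho[i:], 1, full=True)[:2]
        sse = r1.sum() + r2.sum()
        if sse < best_sse:
            best_sse, best_lines = sse, (c1, c2)
    (a1, b1), (a2, b2) = best_lines
    return (b2 - b1) / (a1 - a2)             # T where the two branches cross

print(f"Estimated Tg: {fit_tg(T, rho):.0f} K (true value used: {tg_true:.0f} K)")
```
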
Chi Chen @chc273
Pleasantly surprised to receive an actual medal for our work on #Niobium. Nb never ceases to fascinate me as a versatile element for advanced materials. This project explored disordered rock-salt Nb oxides as a negative electrode for Li-ion batteries. Original paper: nature.com/articles/s4156…
[media attached]
0 replies · 0 reposts · 11 likes · 617 views
Ekin Dogus Cubuk @ekindogus
I am excited to announce what @LiamFedus and I have been working on: @periodiclabs, a world class team of experimentalists, theorists, and LLM experts. Scientific discovery is inherently an out-of-domain task. Experimental iteration is required for significant advances, regardless of the form of intelligence that is modeling the world. We are building experimental labs that will unlock the next frontier for LLM reasoning. Deeply grateful to our advisory board, Prof. Carolyn Bertozzi, Prof. Mercouri Kanatzidis, Prof. Steven Kivelson, Prof. Zhi-Xun Shen, and Prof. Chris Wolverton, for their guidance and support.
William Fedus @LiamFedus

Today, @ekindogus and I are excited to introduce @periodiclabs. Our goal is to create an AI scientist.

Science works by conjecturing how the world might be, running experiments, and learning from the results. Intelligence is necessary, but not sufficient. New knowledge is created when ideas are found to be consistent with reality. And so, at Periodic, we are building AI scientists and the autonomous laboratories for them to operate.

Until now, scientific AI advances have come from models trained on the internet. But despite its vastness — it’s still finite (estimates are ~10T text tokens where one English word may be 1-2 tokens). And in recent years the best frontier AI models have fully exhausted it. Researchers seek better use of this data, but as any scientist knows: though re-reading a textbook may give new insights, they eventually need to try their idea to see if it holds.

Autonomous labs are central to our strategy. They provide huge amounts of high-quality data (each experiment can produce GBs of data!) that exists nowhere else. They generate valuable negative results which are seldom published. But most importantly, they give our AI scientists the tools to act.

We’re starting in the physical sciences. Technological progress is limited by our ability to design the physical world. We’re starting here because experiments have high signal-to-noise and are (relatively) fast, physical simulations effectively model many systems, but more broadly, physics is a verifiable environment. AI has progressed fastest in domains with data and verifiable results - for example, in math and code. Here, nature is the RL environment.

One of our goals is to discover superconductors that work at higher temperatures than today's materials. Significant advances could help us create next-generation transportation and build power grids with minimal losses. But this is just one example — if we can automate materials design, we have the potential to accelerate Moore’s Law, space travel, and nuclear fusion.

We’re also working to deploy our solutions with industry. As an example, we're helping a semiconductor manufacturer that is facing issues with heat dissipation on their chips. We’re training custom agents for their engineers and researchers to make sense of their experimental data in order to iterate faster.

Our founding team co-created ChatGPT, DeepMind’s GNoME, OpenAI’s Operator (now Agent), the neural attention mechanism, MatterGen; have scaled autonomous physics labs; and have contributed to some of the most important materials discoveries of the last decade. We’ve come together to scale up and reimagine how science is done.

We’re fortunate to be backed by investors who share our vision, including @a16z who led our $300M round, as well as @Felicis, DST Global, NVentures (NVIDIA’s venture capital arm), @Accel and individuals including @JeffBezos, @eladgil, @ericschmidt, and @JeffDean. Their support will help us grow our team, scale our labs, and develop the first generation of AI scientists.

35 replies · 27 reposts · 313 likes · 73.7K views
Ben Blaiszik @BenBlaiszik
@chc273 Not yet (that I know of), seems like a great sub project though!
1 reply · 0 reposts · 2 likes · 169 views
Chi Chen @chc273
A lot of data to play with. Anyone working on downsampling to create various sizes?
Ben Blaiszik @BenBlaiszik

🔥 Today we announce the Meta OMol25 Electronic Structures Dataset - 500 TB of molecular data in collaboration with @mshuaibii and team at @AIatMeta.

We envision a future where researchers can rapidly design molecules and peptides to treat diseases, discover catalysts to revolutionize synthesis and manufacturing, identify the next electrolyte to store and transport energy to protect the grid, and more. But these breakthrough discoveries require data. Data to train next-generation AI models and interatomic potentials. Data to push the boundaries of what's computationally possible in molecular chemistry and lead the world in AI for science. Data that captures the full complexity of chemical systems, from small organic molecules to massive biomolecular complexes.

The OMol25 Electronic Structures dataset includes the raw DFT outputs, electronic densities, wavefunctions, and molecular orbital information for over 4 million high-accuracy quantum chemical calculations. We see this as a transformative opportunity to develop higher quality partial charges, partial spins, and advanced electronic features to unlock the next generation of physics-informed ML models.

The Materials Data Facility is proud to make these data available via the Eagle cluster at ALCF through a high-performance Globus endpoint. Given the dataset's unprecedented scale, we're first releasing all output data for a 4M random OMol25 split, with the full multi-petabyte dataset following based on community engagement.

For this first release, the data are quite raw, and as-created by the Meta team. There's a significant opportunity for the community to build tools that simplify access to these data, allow data query and browsing, create databases of calculated properties and descriptors, and much more. We intend to work on these topics with all of you. We can't wait to see what you can do with these data!

Access Details: github.com/facebookresear…

Eagle was pioneered as the Petrel project, a new way to provide researchers access to high-quality, high-volume data by Ian Foster, Rachana Ananthakrishnan, Kyle Chard, Michael Papka, Rick Stevens, and others. Globus.org provides core platform capabilities (auth, data transfer, workflow automation, and compute) to over 600k researchers. Thanks to support from NIST and James Warren for making the MDF vision of vast troves of open data to fuel discovery possible.

@mshuaibii, @zackulissi, @argonne, @argonne_lcf

3 replies · 0 reposts · 9 likes · 806 views
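
On the downsampling question, one common recipe is nested random subsets: shuffle once, then take prefixes, so every smaller split is contained in the larger ones and scaling curves stay comparable. A hedged sketch follows; the ID format and manifest handling are invented for illustration and OMol25's real layout will differ:

```python
import random

def nested_subsets(ids, sizes, seed=42):
    """Shuffle once with a fixed seed, then take prefixes: each smaller
    split is a strict subset of every larger one, which keeps
    training-size scaling studies directly comparable."""
    rng = random.Random(seed)
    shuffled = list(ids)
    rng.shuffle(shuffled)
    return {n: shuffled[:n] for n in sorted(sizes)}

# Hypothetical usage: record IDs would come from the dataset manifest;
# the naming scheme below is made up for this sketch.
all_ids = [f"omol25-{i:07d}" for i in range(4_000_000)]
splits = nested_subsets(all_ids, sizes=[10_000, 100_000, 1_000_000])
print({n: len(s) for n, s in splits.items()})
```
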
Chi Chen @chc273
What @xprize does is truly transformational. Honored to have joined the Deep Tech Brain Trust since May, contributing to AI & materials science innovation in service of XPRIZE’s mission to inspire and empower humanity toward an abundant, equitable future. Exciting times ahead!
XPRIZE @xprize

We’re thrilled to share the release of our 2025 XPRIZE Impact Report. 🚀 From carbon removal to space exploration, our prizes prove philanthropy can deliver 60x ROI in global impact. Learn how we're building the #BusinessOfBreakthroughs: xprize.org/2025-impact-re…

0 replies · 0 reposts · 2 likes · 548 views
Olexandr Isayev 🇺🇦🇺🇸
Latest @ChemRxiv preprint! Most MLIPs do not distinguish between different spin states, making them unsuitable for open-shell reactive chemistry. We present AIMNet2-NSE (Neural Spin-charge Equilibration), a #machinelearning potential that incorporates spin-charge equilibration for accurate treatment of molecules and reactions with arbitrary charge and spin multiplicities. #compchem chemrxiv.org/engage/chemrxi…
[media attached]
3 replies · 16 reposts · 62 likes · 4.3K views
Chi Chen @chc273
Same prompt on one H100 NVL runs at 550 tok/s input and 65 tok/s output. Not bad. The model does seem to hallucinate more than closed top models, but still great progress. Will be exciting to see how the science community leverages the model
Chi Chen @chc273

Mind-blowing that anyone can now run o3-level models locally within minutes. Testing gpt-oss:120b on 2x A100 SXM4: ~370 tok/s prompt processing, ~49 tok/s generation. Still more expensive than API calls, but the accessibility is game-changing

0 replies · 0 reposts · 7 likes · 571 views
Chi Chen @chc273
Mind-blowing that anyone can now run o3-level models locally within minutes. Testing gpt-oss:120b on 2x A100 SXM4: ~370 tok/s prompt processing, ~49 tok/s generation. Still more expensive than API calls, but the accessibility is game-changing
[media attached]
1 reply · 0 reposts · 6 likes · 932 views
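
For anyone wanting to reproduce these throughput numbers, here is a minimal sketch assuming the model is served locally through Ollama, whose /api/generate response reports token counts and durations in nanoseconds; the endpoint, model tag, and prompt below are illustrative:

```python
import requests

# Ask a local Ollama server for a single non-streamed completion.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gpt-oss:120b",            # model tag as pulled locally
        "prompt": "Summarize M3GNet in two sentences.",
        "stream": False,
    },
    timeout=600,
).json()

# Ollama reports prompt_eval_* for prompt processing and eval_* for
# generation; durations are nanoseconds, so divide by 1e9 for seconds.
prompt_tps = resp["prompt_eval_count"] / (resp["prompt_eval_duration"] / 1e9)
gen_tps = resp["eval_count"] / (resp["eval_duration"] / 1e9)
print(f"prompt processing: {prompt_tps:.0f} tok/s, generation: {gen_tps:.0f} tok/s")
```
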