Zhiyuan Liu@NUS

34 posts


@acharkq

Postdoc at NUS | AI for Science | Multimodal & Generative Models (Diffusion & AR) | PhD from NUS

Singapore · Joined July 2017
239 Following · 115 Followers
Zhiyuan Liu@NUS retweeted
Rob Tang 🦞@XiangruTang·
🦞 Excited to announce Claw4S Conference!!! A new kind of AI4Science conference where you submit skills, not papers. Instead of static PDFs, you submit a SKILL.md, a runnable workflow that any AI agent can execute, reproduce, and build on. Deadline: Apr 5, 2026 Prize pool: $50,200!!! 👉 claw.stanford.edu With @lecong and @Charles_Y_Wu
14 replies · 51 reposts · 221 likes · 29.7K views
Zhiyuan Liu@NUS retweeted
Tencent HY@TencentHunyuan·
One static model does not fit all😭 We just dropped our latest work: Functional Neural Memory. Instead of static models, we generate custom "parameters" for every single input.
✅Prompt your model anytime
✅Instant personalization
✅Better instruction following
✅Flexible & dynamic memory (w/o memory bank✌️)
(🧵1/6)
11 replies · 141 reposts · 331 likes · 67.2K views
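Generating custom parameters for every input, rather than serving one static model, is essentially the hypernetwork idea: a small generator network emits the weights of a task layer conditioned on the input. A minimal NumPy sketch under toy assumptions (all names and dimensions here are illustrative, not from the actual Tencent release):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 8, 4  # toy sizes, purely illustrative

# A tiny "generator" that maps an input embedding to the flattened
# weights of a task layer, so every input gets its own parameters.
W_gen = rng.normal(scale=0.1, size=(d_in, d_in * d_out))
b_gen = np.zeros(d_in * d_out)

def dynamic_layer(x):
    """Generate a per-input weight matrix from x, then apply it to x."""
    W_custom = (x @ W_gen + b_gen).reshape(d_in, d_out)
    return x @ W_custom

x1 = rng.normal(size=d_in)
x2 = rng.normal(size=d_in)
y1, y2 = dynamic_layer(x1), dynamic_layer(x2)
```

Because `W_custom` is a function of the input, two different inputs pass through two different effective models, which is what allows instant per-input personalization without a separate memory bank.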
Mengye Ren@mengyer·
If I have a brilliant research idea, I should a) Try it out with the help of AI in a few days OR b) Spend a month to write a grant, possibly get rejected in 6 months or get to be the luckiest top 10% to get funded in a year just to recruit one student to start from ground zero.
12 replies · 5 reposts · 134 likes · 16.6K views
Zhiyuan Liu@NUS@acharkq·
@JohnJumperSci A bit late to this post, but the mission of using LLMs for scientific discovery resonates deeply with my own research focus. Your team's work is pioneering. Should any similar roles open up, I would be thrilled to be considered. My portfolio is here: acharkq.github.io
0 replies · 0 reposts · 0 likes · 47 views
Liangming Pan@PanLiangming·
Life update: I've joined the School of Computer Science at Peking University @PKU1898 as an Assistant Professor! I'm looking for Ph.D./intern/visiting researchers for my new research group. If you are interested in NLP and LLM, check my research at liangmingpan.bio
[image attached]
16 replies · 32 reposts · 311 likes · 24.9K views
Zhiyuan Liu@NUS retweeted
Rob Tang 🦞@XiangruTang·
🧬✨Excited to introduce CellForge: Agentic Design of Virtual Cell Models - the first fully autonomous AI system for single-cell perturbation modeling!
🌟 This is what the future of computational biology looks like - AI scientists designing AI models!
🚀 What makes it special:
- Zero human intervention: From raw data → optimized models → executable code
- Multi-agent collaboration: 5 specialized AI experts working together
- Cross-modal: Works with scRNA-seq, scATAC-seq, CITE-seq
🔬 CellForge doesn't just pick existing models - it designs novel architectures through collaborative AI reasoning, then writes, tests & refines production-ready code automatically.
📊 Validated on 6 datasets (gene knockouts, drugs, cytokines) - consistently outperforms task-specific SOTA methods like scGPT & GEARS.
Paper: arxiv.org/pdf/2508.02276
Code: github.com/gersteinlab/Ce…
2 replies · 11 reposts · 214 likes · 6.1K views
Zhiyuan Liu@NUS retweeted
Yaorui SHI@shiyaorui·
🚀 Can LMs plan experimental procedures of chemical reactions? #ACL2024 Findings! We present 🧪ReactXT🧪, an LM that can output step-by-step actions to execute chemical reactions, retrosynthesis and molecule captioning. 🏠Paper: huggingface.co/papers/2405.14… 👇(1/6)
1 reply · 1 repost · 2 likes · 199 views
Zhiyuan Liu@NUS@acharkq·
We have also achieved improved performance on the molecule-text retrieval and molecule captioning tasks.
[images attached]
0 replies · 0 reposts · 2 likes · 85 views
Zhiyuan Liu@NUS@acharkq·
By tuning and evaluating on 3D-MoIT, we demonstrate that 3D-MoLM can predict quantum chemistry properties of molecules, such as HOMO, LUMO, and the HOMO-LUMO gap. Specifically, 3D-MoLM shows performance comparable to that of its 3D molecule encoder.
[image attached]
1 reply · 0 reposts · 0 likes · 103 views
Zhiyuan Liu@NUS@acharkq·
Many thanks to @_akhaliq for sharing our ICLR 2024 work! We have a live demo at: 5392def3bf3be3f70d.gradio.live Code is at: github.com/lsh0520/3D-MoLM
📢 We present 3D-MoLM, a multi-modal Language Model that integrates the power of Llama-2 and UniMol for 3D molecule understanding.
AK@_akhaliq

Towards 3D Molecule-Text Interpretation in Language Models Language Models (LMs) have greatly influenced diverse domains. However, their inherent limitation in comprehending 3D molecular structures has considerably constrained their potential in the biomolecular domain. To

1 reply · 1 repost · 6 likes · 1.7K views
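A common way to bridge a frozen 3D molecule encoder like UniMol and an LM like Llama-2 is a query-based cross-attention projector (in the style of BLIP-2's Q-Former): a fixed set of learnable queries attends over the encoder's variable-length atom tokens and emits a fixed number of soft-prompt vectors for the LM. A minimal NumPy sketch, with toy dimensions and names that are illustrative only, not the actual 3D-MoLM code:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def query_pool(mol_tokens, queries, W_q, W_k, W_v):
    # Learnable queries cross-attend over the 3D encoder's atom tokens,
    # compressing a variable-length molecule into a fixed number of
    # "soft prompt" vectors the LM can consume.
    Q, K, V = queries @ W_q, mol_tokens @ W_k, mol_tokens @ W_v
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    return attn @ V

d = 16                     # shared hidden size (toy)
n_atoms, n_query = 30, 8   # molecule length varies; query count is fixed
mol_tokens = rng.normal(size=(n_atoms, d))  # stand-in for UniMol outputs
queries = rng.normal(size=(n_query, d))
W_q, W_k, W_v = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
prompts = query_pool(mol_tokens, queries, W_q, W_k, W_v)  # (n_query, d)
```

The fixed output shape is the point: no matter how many atoms the molecule has, the LM always receives the same number of prompt embeddings.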
Eduardo Slonski@EduardoSlonski·
1) We use a lot of data. You’re forgetting the huge amount of video, audio and sensory data we receive all the time. Not to mention the encoded “instructions” from DNA. We’re not trained from scratch and our output is much more general than that of LLMs 2) I agree with you about new architectures
4 replies · 6 reposts · 138 likes · 63.3K views
Jim Fan@DrJimFan·
It’s pretty obvious that synthetic data will provide the next trillion high-quality training tokens. I bet most serious LLM groups know this. The key question is how to SUSTAIN the quality and avoid plateauing too soon. The Bitter Lesson by @RichardSSutton continues to guide AI development: there’re only 2 paradigms that scale indefinitely with compute: Learning & Search. It’s true in 2019 at the time of writing, true today, and I bet will hold true till the day we solve AGI. incompleteideas.net/IncIdeas/Bitte…
139 replies · 288 reposts · 2.5K likes · 1.6M views
Stella Biderman@BlancheMinerva·
TIL: @GoogleAI's 1.6T parameter mixture-of-experts encoder-decoder model is available under an Apache 2.0 license! Trained on public data too.
11 replies · 60 reposts · 485 likes · 136.8K views
Fuzhao Xue (Frio)@XueFz·
Super thrilled to announce that I've been awarded the 2023 Google PhD Fellowship! Enormous gratitude to my wonderful mentors/advisors who championed my application: @m__dehghani, @YangYou1991, @AixinSG, and to all my incredible collaborators. A heartfelt thanks to @GoogleAI and @Google for their generous support. Excited for this journey ahead! 🚀 #GooglePhDFellowship
Google AI@GoogleAI

In 2009, Google created the PhD Fellowship Program to recognize and support outstanding graduate students pursuing exceptional research in computer science and related fields. Today, we congratulate the recipients of the 2023 Google PhD Fellowship! goo.gle/3PYfLXl

20 replies · 14 reposts · 255 likes · 65.4K views
Zhiyuan Liu@NUS@acharkq·
@jw2yang4ai Very interesting! We should reserve the term ‘visual prompt’ specifically for inserting texts/marks into images 🤔🤔
0 replies · 0 reposts · 2 likes · 78 views
Zhiyuan Liu@NUS@acharkq·
@XueFz I get it now. The purpose is to have the strongest 7B LM without budget constraints, since lower inference cost can offset training cost. It then makes more sense to distill from a 70B LM. Considering GPT turbo is 20B, I wonder if they did the same distillation from a larger model?
0 replies · 0 reposts · 1 like · 88 views
Fuzhao Xue (Frio)@XueFz·
Why are there so few open distilled LLMs so far? Any difficulties? Or no benefit observed? I mean using LLaMA-70B to pretrain a 7B model or so, not the SFT-style distillation.
11 replies · 1 repost · 44 likes · 12.8K views
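The distillation being discussed here, matching the teacher's full next-token distribution during pretraining rather than SFT on teacher-generated text, boils down to a KL term between teacher and student logits at each position. A minimal NumPy sketch; the temperature and T² scaling follow the classic knowledge-distillation recipe, and all sizes are toy:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) per token at temperature T, averaged over
    # positions; the T^2 factor keeps gradient scale comparable across T.
    p = softmax(teacher_logits / T)
    q = softmax(student_logits / T)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return kl.mean() * T * T

rng = np.random.default_rng(0)
teacher = rng.normal(size=(5, 100))  # 5 positions, vocab of 100 (toy)
student = rng.normal(size=(5, 100))
loss = distill_loss(student, teacher)
```

The expense raised downthread comes from needing the teacher's forward pass (or cached logits) for every pretraining token, which is why logit-level distillation at 70B scale is so much costlier than SFT on sampled outputs.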
Alignment Lab AI@alignment_lab·
It would be very expensive to get that much distillation data from LLaMA 70B, and even more to pretrain on it! I have absolutely been doing everything in my power to make it work anyway, though, with @ontocord. If you're interested, maybe you can revitalize the group! Open source, though!
2 replies · 0 reposts · 8 likes · 651 views