Yanlai Yang
@YanlaiYang
PhD student @nyuniversity @agentic_ai_lab
New York, NY · Joined August 2021
98 Following · 114 Followers
21 posts
Yanlai Yang retweeted
AK @_akhaliq
MA-EgoQA: Question Answering over Egocentric Videos from Multiple Embodied Agents. Paper: huggingface.co/papers/2603.09…
4 replies · 6 reposts · 16 likes · 4.7K views
Yanlai Yang retweeted
agentic learning ai lab @agentic_ai_lab
Babies learn to perceive the world and develop object and motion recognition in the early stages of life. Can a network bootstrap this understanding just by watching video? Check out the new blog post featuring our latest research on the Midway Network.
Quoted: NYU Center for Data Science @NYUDataScience
Research from CDS Asst Prof @mengyer and Courant PhD student @choang333 shows how the Midway Network learns object recognition and motion jointly from raw video, using motion latents and a gating unit to model real dynamics. nyudatascience.medium.com/watching-the-w…

1 reply · 1 repost · 7 likes · 570 views
Yanlai Yang retweeted
agentic learning ai lab @agentic_ai_lab
Lab website: agenticlearning.ai. Link to donate: bit.ly/48PnznN. 100% of your support goes toward student funding and research resources. As a 501(c)(3) institution, we can accept tax-deductible donations. Help us keep science open!
0 replies · 2 reposts · 7 likes · 738 views
Yanlai Yang retweeted
agentic learning ai lab @agentic_ai_lab
At Agentic Learning AI Lab, the best part of our research is working with the incredible students. They are the ones writing the code, running the experiments, and solving the hard problems in AI today. As we approach the end of the year, we are fundraising to ensure these students have the resources they need to thrive in 2026. Unlike big industry labs, we rely on grants and gifts to keep our research independent and open. If you’d like to support the next generation of AI researchers in our lab at NYU, please visit the link below!
1 reply · 9 reposts · 27 likes · 10.6K views
Yanlai Yang retweeted
agentic learning ai lab @agentic_ai_lab
Lab gathering at #NeurIPS2025. Proud of this year’s work and excited about the ideas we’re building toward next!
8 replies · 7 reposts · 92 likes · 32.8K views
Yanlai Yang @YanlaiYang
@StephenLCasper Meanwhile some foreign students in the US have trouble traveling to these conferences outside US due to the same visa policies …
0 replies · 0 reposts · 0 likes · 186 views
Cas (Stephen Casper) @StephenLCasper
#ICLR2026 in Rio. #ICML2026 in Seoul. #NeurIPS2026 in Sydney. Congratulations to the very brilliant immigration and visa policies of the Trump administration for ushering in the next era of American dominance in AI.
9 replies · 10 reposts · 288 likes · 34.1K views
Yanlai Yang retweeted
Jack Lu @Jacklu_me
Wondering how to get the most out of LLM test-time verification? New study: “When Does Verification Pay Off? A Closer Look at LLMs as Solution Verifiers.” 🔍 37 models, 9 datasets 🔥 Self vs intra-family vs cross-family verification. Result: verify across families! 🧵👇
1 reply · 6 reposts · 17 likes · 9.9K views
Yanlai Yang retweeted
Jack Lu @Jacklu_me
ICL is powerful, but only if LLMs actually understand their contexts. Let’s optimize the KV-cache itself for few-shot adaptation! Introducing Context Tuning: 📎 Initialize prefixes from examples ⚙️ Optimize them via gradient descent 🚀 Unlock strong, efficient adaptation 🧵👇
1 reply · 7 reposts · 27 likes · 9.5K views
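The prefix-optimization idea in the tweet above can be illustrated with a toy sketch. This is a hypothetical stand-in, not the paper's implementation: a frozen linear "model" reads a learnable prefix alongside its input, and only the prefix is updated by gradient descent on a handful of examples, analogous to optimizing a KV-cache prefix on few-shot demonstrations. All names and dimensions here are made up for illustration.

```python
import numpy as np

# Hypothetical toy stand-in for prefix tuning: model weights stay frozen;
# only a learnable "prefix" vector is optimized by gradient descent on a
# few (x, y) examples, analogous to tuning a KV-cache prefix.
rng = np.random.default_rng(0)

D_PREFIX, D_INPUT = 4, 3
w = rng.normal(size=D_PREFIX + D_INPUT)    # frozen model weights
X = rng.normal(size=(8, D_INPUT))          # few-shot inputs
y = X @ np.array([1.0, -2.0, 0.5]) + 3.0   # few-shot targets

prefix = np.zeros(D_PREFIX)                # learnable prefix

def mse(p):
    preds = X @ w[D_PREFIX:] + p @ w[:D_PREFIX]
    return float(np.mean((preds - y) ** 2))

mse_before = mse(prefix)
lr = 0.05
for _ in range(500):
    residual = X @ w[D_PREFIX:] + prefix @ w[:D_PREFIX] - y
    prefix -= lr * 2.0 * residual.mean() * w[:D_PREFIX]  # d(MSE)/d(prefix)
mse_after = mse(prefix)

print(f"MSE before: {mse_before:.3f}, after: {mse_after:.3f}")
```

The design mirrors the tweet's recipe: initialize the prefix (here, to zeros rather than from examples), then improve it with gradient descent while the model itself is untouched.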
Yanlai Yang @YanlaiYang
Excited to present my work at CoLLAs 2025 @CoLLAs_Conf! In our paper arxiv.org/abs/2501.12254, we tackle the challenge of self-supervised learning from scratch with continuous, unlabeled egocentric video streams, where we propose to use temporal segmentation and a two-tier memory.
1 reply · 5 reposts · 34 likes · 3.7K views
Yanlai Yang @YanlaiYang
@CSProfKGD I don’t think I’m able to submit the reviews (no tasks showed up on OpenReview, though I can see the assigned papers). Is anyone else facing the same problem?
1 reply · 0 reposts · 2 likes · 412 views
Yanlai Yang @YanlaiYang
Just finished my first in-person NeurIPS journey. It was great to meet many friends, old and new. Happy to see that my work was well received in the poster session!
1 reply · 1 repost · 70 likes · 7K views
Yanlai Yang @YanlaiYang
I’ll be presenting the poster for this work at #NeurIPS2024 tomorrow from 11 am to 2 pm at West 5609. Everyone is welcome to stop by, and I’m happy to chat!
Quoted: Mengye Ren @mengyer
🔍 New LLM Research 🔍 Conventional wisdom says that deep neural networks suffer from catastrophic forgetting as we train them on a sequence of data points with distribution shifts. But conventions are meant to be challenged! In our recent paper led by @YanlaiYang, we discovered a curious behavior in overparameterized networks, especially LLMs: as we train the network on a cyclic sequence of documents, it starts to anticipate the next document and reverses the forgetting trend! ⤴️
▶️ After 3-4 cycles, the network reverses over 90% of the forgetting right before seeing the original document again.
▶️ The amount of anticipation emerges with the size of the network. LLMs <= 160M show no anticipation.
▶️ We showed that you can reproduce such an effect in a toy network!
Check out more details in our arXiv preprint on anticipatory recovery: “Reawakening knowledge: Anticipatory recovery from catastrophic interference via structured training.” 🚀 arxiv.org/abs/2403.09613 🚀 #LLM #AI #Research

0 replies · 2 reposts · 12 likes · 1.2K views
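The cyclic-training setup described in the quoted thread can be outlined with a toy measurement loop. This is only a hypothetical sketch of the protocol, not the paper's models, and a linear toy like this will not by itself reproduce anticipatory recovery: documents become fixed target vectors, each "training step" nudges a shared parameter toward the current document, and each document's loss is logged just before it is revisited so forgetting and recovery can be tracked across cycles.

```python
import numpy as np

# Hypothetical sketch of the cyclic-training measurement protocol:
# train on documents in a fixed cyclic order and record each document's
# loss immediately before revisiting it, cycle after cycle.
rng = np.random.default_rng(1)
docs = rng.normal(size=(5, 8))     # 5 "documents" as target vectors
theta = np.zeros(8)                # shared model parameter
history = []                       # (cycle, doc_id, pre-visit loss)

for cycle in range(4):             # repeat the same document order
    for i, doc in enumerate(docs):
        pre_loss = float(np.mean((theta - doc) ** 2))
        history.append((cycle, i, pre_loss))
        theta = 0.5 * theta + 0.5 * doc   # one "training step" on doc i

# Forgetting vs. recovery would show up in how each document's
# pre-visit loss evolves across cycles.
```

The fixed ordering matters: the anticipation effect in the thread is tied to the structured, cyclic schedule, which is exactly what this loop preserves.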
Zhuang Liu @liuzhuang1234
Excited to share that I will be joining Princeton Computer Science @PrincetonCS as an Assistant Professor in September 2025! I'm looking for students to join me. If you are interested in working with me on VLMs, LLMs, deep learning (vision/LLM) architectures, data, training, efficiency, or understanding, please apply!
131 replies · 107 reposts · 1.5K likes · 173.1K views
Yanlai Yang retweeted
Ying Wang @yingwww_
A gloomy day in New York couldn't dampen the fun with new friends and new research at NYC CV Day 🥳 Excited to share our updated LifelongMemory framework that leverages LLMs for long-form video understanding, which achieves SOTA on EgoSchema! lifelongmemory.github.io
1 reply · 3 reposts · 23 likes · 2.8K views
Hongyu Li @Hongyu_Lii
Masked Autoencoder explained 😂
1 reply · 0 reposts · 8 likes · 214 views
Yi Ma @YiMaTweets
Today, I am deeply shocked and saddened by the news that my long-time dear friend, Professor Xiaoou Tang of CUHK, advisor to Dr. Kaiming He and my predecessor at MSRA, passed away peacefully at home yesterday. I just had dinner with him last Wednesday in Shanghai! RIP... 🙏🙏🙏
57 replies · 52 reposts · 680 likes · 215.5K views
Yanlai Yang retweeted
Mengye Ren @mengyer
Introducing LifelongMemory, an LLM-based personalized AI for egocentric video natural language query (NLQ). This amazing work is led by Ying Wang @yingwww_
1 reply · 8 reposts · 61 likes · 11.8K views
Mengye Ren @mengyer
The weather app is finally available on iPad.
1 reply · 0 reposts · 2 likes · 1.1K views
Yanlai Yang retweeted
Sergey Levine @svlevine
How should we pretrain for robotic RL? Turns out the same offline RL methods that learn the skills serve as excellent pretraining. Our latest experiments show that offline RL learns better representations w/ real robots: sites.google.com/view/ptr-robot… arxiv.org/abs/2210.05178 Thread>
1 reply · 16 reposts · 118 likes · 0 views