Jennifer J. Sun

241 posts

@JenJSun

AI for Scientists, assistant professor @CornellCIS, part-time @GoogleDeepMind

New York · Joined August 2020

411 Following · 1.4K Followers
Jennifer J. Sun reposted
Yoav Artzi @yoavartzi
.@Cornell is recruiting for multiple postdoctoral positions in AI as part of two programs: Empire AI Fellows and Foundational AI Fellows. Positions are available in NYC and Ithaca. Deadline for full consideration is Nov 20, 2025! academicjobsonline.org/ajo/jobs/30971
Yoav Artzi tweet media
2 replies · 38 reposts · 125 likes · 58.3K views
Yisong Yue @yisongyue
Fun facts about @JenJSun 1) She is Canadian, but not the cool kind that speaks French (shots fired?) 2) She likes collecting Polaroid memories (tinyurl.com/3ze6lD) 3) She has spent hours watching mice sniffing and biting each other #AI4Science
LM4SCI @ COLM2025 @lm4sci

📅 TOMORROW is LM4Sci #COLM2025! ⚡🔬 Are you excited? Today's spotlight: Jennifer Sun (Cornell, Google DeepMind) @JenJSun on Accelerating Knowledge & Discovery in Scientific Workflows 🧵

2 replies · 2 reposts · 22 likes · 7.3K views
Jennifer J. Sun reposted
Yana Hasson @yanahasson
Thrilled to share our latest work on SciVid, to appear at #ICCV2025! 🎉 SciVid offers cross-domain evaluation of video models in scientific applications, including medical CV, animal behavior, & weather forecasting 🧪🌍📽️🪰🐭🫀🌦️ #AI4Science #FoundationModel #CV4Science [1/5]🧵
Yana Hasson tweet media
1 reply · 9 reposts · 30 likes · 2.8K views
Jennifer J. Sun reposted
Yoav Artzi @yoavartzi
Check out LMLM, our take on what the so-called "cognitive core" (as far as branding goes, this one is not bad) can look like, how it behaves, and how you train for it. arxiv.org/abs/2505.15962
Andrej Karpathy @karpathy

The race for LLM "cognitive core" - a few billion param model that maximally sacrifices encyclopedic knowledge for capability. It lives always-on and by default on every computer as the kernel of LLM personal computing. Its features are slowly crystallizing:
- Natively multimodal text/vision/audio at both input and output.
- Matryoshka-style architecture allowing a dial of capability up and down at test time.
- Reasoning, also with a dial (system 2).
- Aggressively tool-using.
- On-device finetuning LoRA slots for test-time training, personalization and customization.
- Delegates and double checks just the right parts with the oracles in the cloud if internet is available.
It doesn't know that William the Conqueror's reign ended on September 9, 1087, but it vaguely recognizes the name and can look up the date. It can't recite the SHA-256 of the empty string as e3b0c442..., but it can calculate it quickly should you really want it. What LLM personal computing lacks in broad world knowledge and top-tier problem-solving capability, it will make up for in super-low interaction latency (especially as multimodal matures), direct/private access to data and state, offline continuity, and sovereignty ("not your weights, not your brain") - i.e., many of the same reasons we like, use and buy personal computers instead of having thin clients access a cloud via remote desktop or so.

2 replies · 7 reposts · 34 likes · 7.1K views
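The SHA-256 claim in the quoted tweet is easy to check; a minimal Python one-liner using only the standard library:

```python
import hashlib

# SHA-256 of the empty byte string - the value the tweet abbreviates as e3b0c442...
digest = hashlib.sha256(b"").hexdigest()
print(digest)
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```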
Jennifer J. Sun reposted
Atharva Sehgal @atharva_sehgal
I’m presenting Escher (trishullab.github.io/escher-web) at #CVPR2025 Saturday morning (Poster Session #3; #236). Escher builds a visual concept library with a vision‑language critic (no human labels needed). Swing by if you’d like to chat about program synthesis & multimodal reasoning!
Atharva Sehgal tweet media
2 replies · 5 reposts · 18 likes · 2K views
Jennifer J. Sun reposted
Ting Liu @_tingliu
After over 15 months, we are excited to finally release VideoPrism! The model comes in two sizes, Base and Large, and the video encoders are available today at github.com/google-deepmin…. We are also working on adding more support to the repository, so please stay tuned.
Google AI @GoogleAI

Introducing VideoPrism, a single model for general-purpose video understanding that can handle a wide range of tasks, including classification, localization, retrieval, captioning and question answering. Learn how it works at goo.gle/49ltEXW

0 replies · 1 repost · 8 likes · 942 views
Jennifer J. Sun reposted
Linxi Zhao @linxizhao4
🚀Excited to share our latest work: LLMs entangle language and knowledge, making it hard to verify or update facts. We introduce LMLM 🐑🧠 - a new class of models that externalize factual knowledge into a database and learn during pretraining when and how to retrieve facts instead of memorizing them.

🧠Why LMLM?
• Learning to look up facts is easier than memorization
• Externalizing knowledge improves factual precision
• Enables instant machine unlearning by design

LMLM opens new directions for how future language models can manage and access knowledge.
📄 [ArXiv] arxiv.org/pdf/2505.15962
🌐 [Project Page] linxi-zhao.github.io/LMLM-site/
💻 [Code] github.com/kilian-group/L…
🎤 [Talk] simons.berkeley.edu/talks/kilian-w…

Huge thanks to my amazing collaborators: @linxizhao4 @sofianzalouk Christian Belardi Justin Lovelace @JinPZhou
And to our incredible advisors @KilianQW, @yoavartzi, and @JenJSun for their generous support and insight.
1 reply · 13 reposts · 43 likes · 6K views
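The retrieve-instead-of-memorize idea above can be sketched with a toy example. This is purely illustrative and not the LMLM implementation: the names `FactStore`, `lookup`, and the `{subject|relation}` placeholder syntax are all hypothetical. The point is that facts live in an editable external store, so generation fills them in by lookup, and deleting a row is instant unlearning:

```python
import re

class FactStore:
    """Hypothetical external database of (subject, relation) -> value triples."""
    def __init__(self):
        self.facts = {}

    def add(self, subject, relation, value):
        self.facts[(subject, relation)] = value

    def lookup(self, subject, relation):
        # Unknown facts surface as a sentinel rather than a hallucinated value.
        return self.facts.get((subject, relation), "[unknown]")

    def forget(self, subject, relation):
        # "Instant machine unlearning by design": deleting the row removes the fact.
        self.facts.pop((subject, relation), None)

def generate(template, store):
    """Fill {subject|relation} placeholders from the store instead of from weights."""
    return re.sub(
        r"\{([^|}]+)\|([^}]+)\}",
        lambda m: store.lookup(m.group(1), m.group(2)),
        template,
    )

store = FactStore()
store.add("Cornell", "location", "Ithaca, NY")
print(generate("Cornell is located in {Cornell|location}.", store))
# Cornell is located in Ithaca, NY.
store.forget("Cornell", "location")
print(generate("Cornell is located in {Cornell|location}.", store))
# Cornell is located in [unknown].
```

In the actual paper the model learns during pretraining when and how to emit such lookups; here the placeholder syntax just stands in for that learned behavior.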
Jennifer J. Sun reposted
Rogério Guimarães @rogerioagjr
We're excited to share our latest work! We achieve SOTA results in segmentation, detection, and depth estimation, in both single- and cross-domain settings, by exploiting image-aligned text prompts in a pretrained diffusion backbone repurposed for vision tasks. See vision.caltech.edu/tadp/ 🧵👇
Rogério Guimarães tweet media
4 replies · 30 reposts · 179 likes · 35.9K views
Rito @AllesistKode
@yisongyue @JenJSun Hi @JenJSun, can I read your thesis somewhere online? And congrats on your PhD! Best wishes for your next steps.
1 reply · 0 reposts · 1 like · 291 views
Jennifer J. Sun reposted
Ann Kennedy @Antihebbiann
Won't you be my neighbor? Northwestern Neuroscience in downtown Chicago is running a broad faculty search: nature.com/naturecareers/… Come join a large and growing neuroscience community!
Ann Kennedy tweet media
0 replies · 45 reposts · 83 likes · 23.9K views
Jennifer J. Sun @JenJSun
Huge thanks to additional co-authors: Andrew Ulmer who helped develop our benchmark, @__dipam__ for developing the eval framework, and MABe22 Challenge winners Ed Hayes, Heng Jia, Sebastian Oleszko, Zach Partridge, Milan Peelman, Chao Sun, Param Uttarwar, and Eric Werner!😊
0 replies · 0 reposts · 3 likes · 667 views
Jennifer J. Sun @JenJSun
We are presenting our MABe22 dataset at ICML! Our dataset studies representation learning of video and trajectory data - the representations are evaluated on a large set of downstream tasks. MABe22 organisms include mice, flies, and beetles! Paper: arxiv.org/pdf/2207.10553…
Jennifer J. Sun tweet media
3 replies · 13 reposts · 54 likes · 20.3K views
Jennifer J. Sun reposted
Amil Dravid @_AmilDravid
Presenting BKinD-3D tomorrow (Wed., June 21) with @JenJSun and Lili Karashchuk in the morning session at poster #74. Feel free to stop by!
Jennifer J. Sun @JenJSun

All animals behave in 3D - we discover 3D poses directly from multi-view videos without requiring annotations. Essentially videos -> 3D keypoints + connections We will be @CVPR on June 21! BKinD-3D Paper: arxiv.org/abs/2212.07401 Co-first-authors Lili Karashchuk & @_AmilDravid

2 replies · 2 reposts · 22 likes · 2.6K views