J.Nathan Yan

270 posts

@NathanYan2012

Research Scientist @GoogleDeepMind. Ph.D. from @CornellCIS/@cornell_tech. I bake my own opinions.

New York · Joined July 2014
1.6K Following · 852 Followers
J.Nathan Yan
J.Nathan Yan@NathanYan2012·
@jxmnop the blog has a different avatar, thought it was a scam
0
0
0
174
dr. jack morris
dr. jack morris@jxmnop·
new blog: How to scale RL to 10^26 FLOPs

everyone is trying to figure out the right way to scale reasoning with RL

ilya compared the Internet to fossil fuel: it may be the only useful data we have. and it's expendable

perhaps we should learn to reason from The Internet (not just math and code)
18
53
553
89.2K
J.Nathan Yan retweeted
Yucheng Lu
Yucheng Lu@_yucheng_lu·
🔥Thrilled to share that I’ll be joining the Computer Science Department at NYU Shanghai as an Assistant Professor starting Fall 2025! @nyushanghai 🎯 I’ll be recruiting PhD students across the entire NYU network—including @nyushanghai, @nyutandon, and @NYU_Courant—to build efficient ML systems (algorithms, models, kernels, and more). I’ll also be hosting multiple RAs and interns (remote friendly). If you're interested, DMs are open! ✉️
8
4
119
24K
J.Nathan Yan retweeted
dr. jack morris
dr. jack morris@jxmnop·
"Embeddings are underrated" (2024). just a really excellent piece of technical writing.
23
97
1.5K
144K
J.Nathan Yan retweeted
Jiatao Gu
Jiatao Gu@thoma_gu·
I will be attending #ICLR2025 in person during Apr 24–28, and presenting our research: DART: Denoising Autoregressive Transformer. 📌 Fri 25 Apr, 3:00–5:30 p.m. (UTC+8). This is my first time visiting Singapore, and I am looking forward to chatting with old and new friends!
Jiatao Gu@thoma_gu

🚀Excited to introduce our recent work @ AppleMLR -- DART: Denoising AutoRegressive Transformer for Scalable Text-to-Image Generation! A transformer-based model that unifies Autoregressive and Diffusion with a non-Markovian diffusion framework: 🔗 arxiv.org/abs/2410.08159 (1/n)

2
11
78
16.6K
J.Nathan Yan retweeted
Karan Dalal
Karan Dalal@karansdalal·
Today, we're releasing a new paper – One-Minute Video Generation with Test-Time Training. We add TTT layers to a pre-trained Transformer and fine-tune it to generate one-minute Tom and Jerry cartoons with strong temporal consistency. Every video below is produced directly by the model in a single shot, without editing, stitching, or post-processing. Every story is newly created. Demos: test-time-training.github.io/video-dit/ Paper: test-time-training.github.io/video-dit/asse…
179
909
5.4K
1.4M
Justin T Chiu
Justin T Chiu@justintchiu·
@jxmnop 1. AR can be expressed as masked diffusion. MDLM with a position-dependent, L2R schedule = AR with recomputation of unmasked representations 2. proper use of more flops *does* improve acc, assuming fixed # params: thinking tok, iterative repair. probs not in lim param->inf
2
0
5
628
dr. jack morris
dr. jack morris@jxmnop·
is there a first-principles explanation for why text diffusion models should outperform autoregressive models in the limit (parameters, compute, and data)? maybe something like: proper use of arbitrary flops should eventually make all generations better. is that even true?
45
10
284
43.2K
J.Nathan Yan retweeted
Sasha Rush
Sasha Rush@srush_nlp·
Some personal news: I recently joined Cursor. Cursor is a small, ambitious team, and they’ve created my favorite AI systems. We’re now building frontier RL models at scale in real-world coding environments. Excited for how good coding is going to be.
140
83
2.9K
335.7K
J.Nathan Yan retweeted
Hamish Ivison
Hamish Ivison@hamishivi·
How well do data-selection methods work for instruction-tuning at scale? Turns out, when you look at large, varied data pools, lots of recent methods lag behind simple baselines, and a simple embedding-based method (RDS) does best! More below ⬇️ (1/8)
4
66
325
86.2K
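The thread above doesn't spell out how RDS works, so the following is only a generic sketch of embedding-based data selection under assumed details: score each candidate training example by cosine similarity between its embedding and the centroid of a small target set, then keep the top-k. The function and variable names are illustrative, not from the paper.

```python
import numpy as np

def select_top_k(candidate_embs, target_embs, k):
    """Rank candidate examples by cosine similarity to the target-set centroid.

    candidate_embs: (n, d) array of candidate embeddings
    target_embs:    (m, d) array of embeddings for the target distribution
    Returns the indices of the k highest-scoring candidates.
    """
    centroid = target_embs.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    normed = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    scores = normed @ centroid          # cosine similarity per candidate
    return np.argsort(-scores)[:k]      # top-k by score
```

A selection method this simple is exactly the kind of "embedding-based baseline" the tweet says is hard to beat at scale.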
J.Nathan Yan retweeted
Songlin Yang
Songlin Yang@SonglinYang4·
I've uploaded the latest slides & beamer source code to github.com/sustcsonglin/l…. Hopefully this repository will help train an LLM that generates Beamer slides better than I do :)
Sasha Rush@srush_nlp

Linear Attention and Beyond: Interactive Tutorial with Songlin Yang (@SonglinYang4 MIT/Flash Linear Attention) I didn’t follow some of the recent results, so I zoomed Songlin and she explained it all to me for two hours 😂 youtu.be/d0HJvGSWw8A

4
24
199
20.2K
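The tutorial linked above covers the core linear-attention identity: with a positive feature map φ, softmax-free attention can be computed as φ(Q)(φ(K)ᵀV) normalized by φ(Q)(φ(K)ᵀ1), which costs O(n·d²) instead of O(n²·d). A toy NumPy sketch (the elu+1 feature map is one common choice; everything here is illustrative, not from the slides):

```python
import numpy as np

def phi(x):
    # elu(x) + 1, which is strictly positive everywhere
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Kernelized linear attention: out_i = phi(q_i) S / (phi(q_i) z_sum)."""
    q, k = phi(Q), phi(K)
    kv = k.T @ V                # (d, d_v) summary of all key-value pairs
    z = q @ k.sum(axis=0)       # per-query normalizer, shape (n,)
    return (q @ kv) / z[:, None]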
J.Nathan Yan retweeted
Songlin Yang
Songlin Yang@SonglinYang4·
Introducing the first open-source implementation of native sparse attention: github.com/fla-org/native…. Give it a spin and cook your NSA model! 🐳🐳🐳
10
118
756
72.4K
J.Nathan Yan retweeted
Sasha Rush
Sasha Rush@srush_nlp·
Got talked into giving a DeepSeek talk this afternoon simons.berkeley.edu/workshops/llms… Not sure I have anything new to say here! But good excuse for me to read all the blogs.
10
49
464
37.3K
J.Nathan Yan retweeted
dr. jack morris
dr. jack morris@jxmnop·
spent the last month building my own framework to train a diffusion model from scratch. it was hard

almost like i just learned to cast an ancient spell that requires lots of mysterious steps and ingredients. for a long time i was trying, and nothing happened. but when it worked it felt like magic

i've learned a lot so wanted to share a bit 🧵

- i'm doing *conditional* diffusion, trying to produce outputs x that depend on some inputs y. my biggest blocker was that the architectural biases matter here – you can NOT put the conditioning directly into the input, or the model will just learn to map y to x instead of using y to denoise the noisy input x. (the loss will go down but sampling will not work)
- thus the diffusion world has a zoo of "conditional" architectures that can be a little challenging to adapt for your problem. but you have to use one or else things just won't work
- apparently, architecture still matters in vision (sad). initialization, residuals, and extra normalization can make all the difference
- learning a small "probe" alongside your diffusion model is hugely valuable. you can just cut the gradients to the probe so that it doesn't affect training. this way you will know when you beat the baseline. (i'm not sure if this is common practice but it was invaluable for me)
- you need to incorporate sampling into training every so often. otherwise you will never figure out why your model doesn't work
- the normalization is super important. your input data needs to have ~mean 0 and std 1. otherwise learning might not work, or will be super slow
- in diffusion a lot of things can have the same shape but be different "types" in the sense that they're incompatible in some way. easy to make these bugs and the code will still run. you can often find them by checking that the norms, stds, and means are approximately correct
- complex systems that you write from scratch will inevitably have tons of bugs. you can start with trying to learn the identity function (in diffusion just set the noise to zeros). if you can't do this, something is broken. in my case this helped me realize one of my losses had a sign flipped
- in my opinion the loss after 1000 steps or so is usually a reliable signal for debugging architectural changes
- diffusion people look down on DDPM as old and outdated but turns out it's still "good enough for government work" and worked fine for me eventually
- wouldn't recommend the diffusers library. not sure it's really being developed anymore. heard the openai impl is much better
- in general building systems from scratch is a slow and frustrating way to do research and i would recommend most people just start with a good codebase and tweak it to fit your problem. but if you build everything yourself you will learn a lot and feel a deep sense of satisfaction when it all starts working :)
33
37
554
35.2K
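Two of the debugging tips in the thread above (check that inputs are ~mean-0/std-1, and test the identity function by setting the noise to zeros) can be sketched in a few lines of NumPy. This is a minimal illustration, not the author's code; `abar_t` stands for the DDPM cumulative noise-schedule term ᾱ_t.

```python
import numpy as np

# 1) Normalization check: diffusion training assumes roughly mean-0 / std-1 inputs.
def check_normalized(x, atol=0.1):
    return abs(x.mean()) < atol and abs(x.std() - 1.0) < atol

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(1024, 16))
assert check_normalized(data)

# 2) Identity-function test: in the DDPM forward process
#        x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,
#    setting eps to zeros makes x_t just a scaled copy of x0, so a pipeline
#    that can't recover x0 in this setting has a bug somewhere.
abar_t = 0.9
eps = np.zeros_like(data)
x_t = np.sqrt(abar_t) * data + np.sqrt(1.0 - abar_t) * eps
recovered = x_t / np.sqrt(abar_t)
assert np.allclose(recovered, data)
```

As the thread notes, a sign-flipped loss or a shape-compatible-but-wrong tensor will pass silently through training; cheap invariant checks like these catch them early.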
J.Nathan Yan retweeted
Umar Jamil
Umar Jamil@hkproj·
In this video, I'll be deriving and coding Flash Attention from scratch. No prior knowledge of CUDA or Triton is required. Link to the video: youtu.be/zy8ChVd_oTM

All the code will be written in Python with Triton. I'll also explain the CUDA programming model from zero. I'll explore the following topics:
* Review of Multi-Head Attention
* Safe Softmax
* Online Softmax (with proof!)
* Introduction to GPUs and the CUDA programming model
* Tensor layouts: row-major layout, stride, reshape, transpose
* Block Matrix Multiplication
* Introduction to Triton
* Forward pass of Flash Attention in Triton
* How Autograd works
* What are derivatives, gradients, and Jacobians
* Jacobian of the Matrix Multiplication operation
* Jacobian of the Softmax operation
* Backward pass of Flash Attention in Triton
* Triton tricks: software pipelining

If you find this video useful, consider subscribing to my channel and sharing the video within your network of friends and colleagues. #flashattention #triton #cuda #tutorial #python #attention #transformers #deeplearning
46
285
2.3K
421.1K
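One of the topics listed above, online softmax, is the key trick that lets Flash Attention avoid materializing full attention rows: the normalization is computed in a single streaming pass, rescaling the running sum whenever a new maximum appears. A minimal pure-Python sketch of that idea:

```python
import math

def online_softmax(xs):
    """Numerically stable softmax computed in one streaming pass.

    Maintains a running maximum m and a running denominator d expressed
    relative to m; when a larger value arrives, the old denominator is
    rescaled by exp(m - m_new) before the new term is added.
    """
    m = float("-inf")  # running maximum
    d = 0.0            # running denominator, scaled by exp(-m)
    for x in xs:
        m_new = max(m, x)
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    return [math.exp(x - m) / d for x in xs]
```

The result matches the usual two-pass "safe softmax" (subtract the max, exponentiate, normalize), but only one pass over the inputs is needed to build the denominator.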
J.Nathan Yan retweeted
Denny Zhou
Denny Zhou@denny_zhou·
The most beautiful thing about LLM reasoning is that the thought process is generated in an autoregressive way, whether by a well-finetuned model or a carefully designed prompt, rather than by relying on search (e.g. MCTS) over the generation space.
29
57
644
90.5K
J.Nathan Yan retweeted
Zhuang Liu
Zhuang Liu@liuzhuang1234·
How far is an LLM from not only understanding but also generating visually? Not very far! Introducing MetaMorph, a multimodal understanding and generation model. In MetaMorph, understanding and generation benefit each other. Only a modest amount of generation data is needed to elicit visual generation from an LLM when it is trained jointly with visual understanding.
25
133
718
253.3K
J.Nathan Yan
J.Nathan Yan@NathanYan2012·
Experience Gemini 2.0 Flash Thinking—the fast and transparent reasoning model that reveals its thought process in real-time! This breakthrough brings us one step closer to deeper, more reliable AI understanding. Try it now!
Jeff Dean@JeffDean

Introducing Gemini 2.0 Flash Thinking, an experimental model that explicitly shows its thoughts. Built on 2.0 Flash’s speed and performance, this model is trained to use thoughts to strengthen its reasoning. And we see promising results when we increase inference time computation!

0
0
4
483