Daksh Aggarwal

12 posts

@dakshces

Math PhD student @BrownUniversity

Joined February 2024
65 Following · 24 Followers
Daksh Aggarwal retweeted
Nate Gillman @GillmanLab
Ever wish you could tell a video model what to achieve, rather than just how to move? Introducing our CVPR 2026 paper, Goal Force! Instead of simulating a direct push, our model plans the entire causal chain (the "how") to achieve your specified goal (the "what"). 🧵(1/n)
3 replies · 15 reposts · 64 likes · 9.1K views
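
One way to picture the "what vs. how" contrast in code (a minimal sketch of my own; neither class is the paper's API): a force-style prompt specifies the push itself, while a goal-style prompt specifies only the desired outcome and leaves the whole causal chain for the model to plan.

```python
from dataclasses import dataclass

# Hypothetical control specs illustrating the interface contrast only;
# neither class comes from the Goal Force codebase.
@dataclass
class ForcePrompt:        # the "how": a direct push to simulate
    x: float              # point of application, normalized [0, 1] coords
    y: float
    angle: float          # direction of the push, radians
    magnitude: float      # strength of the push

@dataclass
class GoalPrompt:         # the "what": only the end state is specified
    target_x: float       # where the object should end up
    target_y: float

# video = model.generate(image, control=GoalPrompt(0.8, 0.3))  # hypothetical call
```
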
Daksh Aggarwal retweeted
Chen Sun @jesu9
Can you inject causal control into models trained solely on video data? How about eliciting their understanding of physical forces when animating a photo? Nate's latest project shows that both are promising directions to explore! I tried it on my favorite photo, and you should too! github.com/brown-palm/for…
Quoting Nate Gillman @GillmanLab

Ever wish you could turn your video generator into a controllable physics simulator? We're thrilled to introduce Force Prompting! Animate any image with physical forces and get fine-grained control, without needing any physics simulator or 3D assets at inference. 🧵(1/n)

2 replies · 5 reposts · 51 likes · 6K views
Daksh Aggarwal retweeted
Nate Gillman @GillmanLab
Ever wish you could turn your video generator into a controllable physics simulator? We're thrilled to introduce Force Prompting! Animate any image with physical forces and get fine-grained control, without needing any physics simulator or 3D assets at inference. 🧵(1/n)
8 replies · 66 reposts · 318 likes · 44.3K views
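
To make "animate any image with physical forces" concrete, here is a minimal sketch (my own illustration, not the repo's code; `force_to_map` and the `model.generate` call are assumptions) of one plausible way a point-force prompt could become a dense conditioning signal for a video model: a per-pixel 2-channel map holding the force vector, localized around the point of application.

```python
import numpy as np

# Hypothetical encoding: splat a point force (position, direction,
# magnitude) into a (2, h, w) conditioning map with a Gaussian falloff.
def force_to_map(h, w, x, y, angle, magnitude, sigma=0.05):
    ys, xs = np.mgrid[0:h, 0:w]
    ys, xs = ys / h, xs / w                       # normalized pixel coords
    weight = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    fx = magnitude * np.cos(angle) * weight       # x-component channel
    fy = magnitude * np.sin(angle) * weight       # y-component channel
    return np.stack([fx, fy], axis=0)             # (2, h, w)

# e.g. a rightward poke at the center of the image:
cond = force_to_map(64, 64, x=0.5, y=0.5, angle=0.0, magnitude=0.7)
# video = model.generate(image, control_map=cond)  # hypothetical call
```

The point of such an encoding is that no physics simulator or 3D assets are needed at inference: the force is just an extra input channel the model learned to interpret during training.
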
Daksh Aggarwal retweeted
François Chollet @fchollet
People who treat prediction markets as some kind of magical oracle are hopelessly lost. Even the valuation of very high-liquidity public stocks is anything but efficient. Now, election bets? Forget about it.
19 replies · 7 reposts · 206 likes · 20.2K views
Daksh Aggarwal retweeted
François Chollet @fchollet
Remember that prediction markets are very low-liquidity -- with a couple M$ you can clear the entire order book. They do *not* encode any kind of "efficient" prediction of future outcomes given all available data. They encode the uninformed opinion of a handful of rich gamblers.
Quoting The Kobeissi Letter @KobeissiLetter

BREAKING: Prediction markets now show Kamala Harris with a 4 percentage point higher likelihood of winning the 2024 election over Donald Trump. Just days ago, Donald Trump held a 30 percentage point lead on Harris, according to @Kalshi. Wow.

28 replies · 51 reposts · 655 likes · 72.4K views
Daksh Aggarwal retweeted
Chen Sun @jesu9
.@GillmanLab and the team have done it again! 🎉 Fourier Head brings theory-driven enhancements to LLMs when modeling continuous tokens! The best part? It actually works and is easy to implement!
Quoting Nate Gillman @GillmanLab

LLMs are powerful sequence modeling tools! They not only can generate language, but also actions for playing video games, or numerical values for forecasting time series. Can we help LLMs better model these continuous "tokens"? Our answer: Fourier series! Let me explain… 🧵(1/n)

0 replies · 2 reposts · 18 likes · 3.1K views
Daksh Aggarwal retweeted
Nate Gillman @GillmanLab
LLMs are powerful sequence modeling tools! They not only can generate language, but also actions for playing video games, or numerical values for forecasting time series. Can we help LLMs better model these continuous "tokens"? Our answer: Fourier series! Let me explain… 🧵(1/n)
25 replies · 210 reposts · 1.6K likes · 206.7K views
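
The idea, as I read the thread: instead of a plain linear layer producing independent logits over discretized bins, predict Fourier coefficients of a density on [-1, 1] and read logits off as log-density at the bin centers, so nearby bins share smooth global structure. Below is a minimal sketch under that reading; the paper additionally guarantees a valid nonnegative density via an autocorrelation parameterization and regularizes the coefficients, which I replace with a crude clamp (my simplification).

```python
import torch
import torch.nn as nn

class FourierHead(nn.Module):
    """Sketch of a Fourier-series classification head over num_bins bins
    covering [-1, 1]. The clamp is a stand-in for the paper's nonnegative
    density parameterization (assumption)."""

    def __init__(self, dim_in: int, num_bins: int, num_freqs: int = 12):
        super().__init__()
        self.proj = nn.Linear(dim_in, 2 * num_freqs)  # re/im parts of c_1..c_N
        k = torch.arange(1, num_freqs + 1).float()
        edges = torch.linspace(-1.0, 1.0, num_bins + 1)
        centers = (edges[:-1] + edges[1:]) / 2                   # bin midpoints
        self.register_buffer("cos_b", torch.cos(torch.pi * centers[:, None] * k))
        self.register_buffer("sin_b", torch.sin(torch.pi * centers[:, None] * k))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        re, im = self.proj(x).chunk(2, dim=-1)                   # (B, num_freqs) each
        # truncated Fourier series for the density, evaluated at bin centers
        dens = 0.5 + re @ self.cos_b.T + im @ self.sin_b.T       # (B, num_bins)
        return dens.clamp_min(1e-6).log()                        # logits for cross-entropy
```

Used like an ordinary head, e.g. `logits = FourierHead(768, 256)(hidden)` feeding straight into cross-entropy, which is what makes it easy to drop into existing decision or time-series transformers.
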
Daksh Aggarwal retweeted
Chen Sun @jesu9
Video generative models (e.g. #Sora) are touted as "world simulators". Can we leverage their outputs to create more powerful generative models? Our paper arxiv.org/abs/2402.07087 demonstrates mathematically that a naive solution would lead to model collapse, but a self-correction operation would fix that!
Quoting Nate Gillman @GillmanLab

Excited to share our latest preprint: “Self-Correcting Self-Consuming Loops for Generative Model Training”. It's a step towards generative AI models that can learn from the universe of data they generate!! 🤖(1/n)

1 reply · 14 reposts · 77 likes · 14.2K views
Daksh Aggarwal retweeted
Nate Gillman @GillmanLab
Excited to share our latest preprint: “Self-Correcting Self-Consuming Loops for Generative Model Training”. It's a step towards generative AI models that can learn from the universe of data they generate!! 🤖(1/n)
4 replies · 20 reposts · 109 likes · 34.8K views
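
A structural sketch of the loop these two tweets describe (the function names are placeholders of mine, not the paper's API): the model repeatedly trains on a mix of real data and its own generations, and the claim is that routing synthetic samples through an expert correction map, such as a physics simulator fixing up generated human-motion sequences, keeps the loop stable, whereas the naive loop collapses.

```python
from typing import Callable, List, Optional

def self_consuming_loop(
    train: Callable,              # fits/updates the generative model on a dataset
    sample: Callable,             # draws n synthetic samples from the model
    correct: Optional[Callable],  # expert map from a sample to a corrected sample
    real_data: List,
    generations: int = 10,
    synth_per_gen: int = 100,
):
    model, data = None, list(real_data)
    for _ in range(generations):
        model = train(model, data)
        synth = sample(model, synth_per_gen)
        if correct is not None:
            synth = [correct(s) for s in synth]  # the stabilizing step
        # next round trains on the model's own (corrected) outputs;
        # with correct=None this is the naive loop shown to collapse
        data = list(real_data) + synth
    return model
```
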