Abhinav Kumar
@abhinav1kumar

189 posts

3D / 4D Computer Vision: 3D / 4D Generation, Monocular 3D Detection @Adobe

San Jose, CA · Joined October 2010
1.7K Following · 355 Followers
Pinned Tweet
Abhinav Kumar @abhinav1kumar
@Arcanous98 As a side note, we could only match Zip-NeRF fidelity by combining MCMC GS (4M) with a Queried-Convolution Neural Network (Tab. 2). It is great to see this method outperforming Zip-NeRF.
0 replies · 0 reposts · 1 like · 45 views
Abhinav Kumar @abhinav1kumar
@Arcanous98 That’s very impressive work! However, I’d offer a different perspective on neural fields modeling high-fidelity data. We experimented with adding neural fields over 3DGS (Tab. 4 of arxiv.org/pdf/2512.12898) and found that they sometimes struggle to fit even the training data. (1/2)
1 reply · 0 reposts · 2 likes · 131 views
Jorge Condor @Arcanous98
Introducing Neural Harmonic Textures: our new method for real-time novel view synthesis that outperforms all 3DGS and NeRF derivatives, including (finally) ZipNeRF, in quality across all benchmarks. The code is released (Apache 2.0): (research.nvidia.com/labs/sil/proje…) 🧵
13 replies · 105 reposts · 606 likes · 41.8K views
International Conference on 3D Vision
The #3DV2026 Keynote and Award Talk recordings are officially live! 🎥🍿 Revisit all the fantastic presentations from our insightful speakers and keep the 3D vision inspiration going! See the links below⬇️
1 reply · 17 reposts · 118 likes · 10.9K views
MrNeRF @janusch_patas
Towards High-Fidelity Gaussian Splatting with Queried-Convolution Neural Networks

Contributions:
• We propose QNNs, an architecture that convolves both low-fidelity signals and queries to improve the learning of details.
• We theoretically investigate the predictive power of QNNs compared to CNNs (Theorem 1) and the number of Gaussians needed for increasing fidelity (Corollary 1).
• We empirically show that a GS-based baseline with a QNN surpasses Zip-NeRF [8] on the 3D NVS task (Sec. 5.1).
• Our experiments also show that QNNs enhance performance on 1D regression, 2D regression, and 2D SR tasks (Sec. 5.4).
1 reply · 6 reposts · 37 likes · 3.3K views
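To make the first bullet concrete: a minimal PyTorch sketch of what "convolving both low-fidelity signals and queries" might look like, assuming the QNN fuses a low-fidelity feature map with a per-pixel query map by channel concatenation before convolving. The class name, shapes, and concat-then-conv fusion are illustrative assumptions, not the paper's actual architecture.

# Hypothetical sketch of a queried-convolution block. The concat-then-conv
# fusion is a guess from the tweet's one-line description, not the paper's code.
import torch
import torch.nn as nn

class QueriedConv2d(nn.Module):
    def __init__(self, feat_ch: int, query_ch: int, out_ch: int):
        super().__init__()
        # Convolve the low-fidelity signal and the query jointly.
        self.conv = nn.Conv2d(feat_ch + query_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.GELU()

    def forward(self, low_fidelity: torch.Tensor, queries: torch.Tensor) -> torch.Tensor:
        # low_fidelity: (B, feat_ch, H, W) rendered low-detail signal
        # queries:      (B, query_ch, H, W) per-pixel queries (e.g. view directions)
        x = torch.cat([low_fidelity, queries], dim=1)
        return self.act(self.conv(x))

# Toy usage: refine a 3-channel render with a 3-channel query map.
block = QueriedConv2d(feat_ch=3, query_ch=3, out_ch=3)
out = block(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))  # -> (1, 3, 64, 64)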
Abhinav Kumar @abhinav1kumar
@jasveer10 Because money matters. Your offer of ₹28L ($30K) is lower. How about 2x the current offer, to ₹56L ($60K)? If the candidate is good, it is always worth offering slightly more than what the market offers him.
0 replies · 0 reposts · 1 like · 22 views
Jasveer Singh @jasveer10
Interviewed a backend developer. Guy was at 21 LPA. We offered 28 LPA, roughly a 33 percent hike. He agreed and confirmed joining. Yesterday he emailed saying he got a 32 LPA offer elsewhere and now wants 36 LPA from us. Nonsense. Why agree in the first place? If you are still shopping offers, just say it upfront. We stopped interviewing other candidates and waited through the notice period for the joining date. Now, two days before joining, he came back with a new price tag.
3K replies · 147 reposts · 3.6K likes · 4.9M views
Peyman Milanfar @docmilanfar
Dear Colleagues, I have an unusual request: we're looking for your thoughts on where we should submit this work. Please share your (serious!) suggestions in the replies, or by DM. P.S. To stay clear of trouble, we won't reveal publicly where it will be submitted.
Peyman Milanfar @docmilanfar

Diffusion models need exact noise level schedules to work. But recently some models have been shown to work without explicit noise conditioning. How? We show that these time-invariant fields implicitly implement a Riemannian gradient flow on some energy landscape. 1/3

10 replies · 4 reposts · 70 likes · 29.4K views
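For context on the quoted claim: a Riemannian gradient flow is ordinary gradient flow reshaped by a position-dependent metric. In standard notation (the paper's specific metric G and energy E are not given in the tweet, so only the generic form is shown),

% Generic Riemannian gradient flow on an energy landscape E(x);
% G(x) is a position-dependent metric, and G = I recovers plain gradient flow.
\[
\dot{x}_t \;=\; -\,G(x_t)^{-1}\,\nabla E(x_t).
\]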
Abhinav Kumar @abhinav1kumar
@iamsashasax @giffmana
4. Classifier-Free Guidance: This technique comes from a Bayesian-inspired fusion of likelihood and prior.
5. ELBO in VAEs: The Evidence Lower Bound (ELBO) in VAEs (diffusion is considered a hierarchical VAE) is a direct result of Bayesian variational inference. [3/N]
0 replies · 0 reposts · 0 likes · 30 views
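Both points correspond to standard formulas; as a recap (standard notation, not taken from the thread): classifier-free guidance mixes a conditional and an unconditional noise prediction, which at the distribution level amounts to sampling from the Bayesian-style fusion \(p(x)\,p(c \mid x)^{w}\) up to normalization, and the ELBO is the variational bound on the marginal likelihood.

% Left: classifier-free guidance with guidance scale w.
% Right: the ELBO used to train VAEs (and, per tweet, hierarchical-VAE views of diffusion).
\[
\tilde{\epsilon}_\theta(x_t, c) \;=\; \epsilon_\theta(x_t, \varnothing) + w\,\bigl(\epsilon_\theta(x_t, c) - \epsilon_\theta(x_t, \varnothing)\bigr),
\qquad
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p_\theta(x \mid z)\bigr] - \mathrm{KL}\bigl(q_\phi(z \mid x)\,\|\,p(z)\bigr).
\]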
Abhinav Kumar @abhinav1kumar
@iamsashasax @giffmana
2. Score Function Justification: The score function's role is specifically to bypass the intractable partition function, a Bayesian concern.
3. Langevin Dynamics Origin: Langevin dynamics, used for posterior estimation in diffusion, originates in Bayesian machine learning. [2/N]
1 reply · 0 reposts · 0 likes · 34 views
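The partition-function point fits in one line: for an energy-based density the intractable normalizer drops out of the score, and Langevin dynamics then samples using only that score (standard formulas, not from the thread).

% Left: the normalizer Z does not depend on x, so it vanishes from the score.
% Right: one step of (unadjusted) Langevin dynamics with step size epsilon.
\[
p_\theta(x) = \frac{e^{-E_\theta(x)}}{Z_\theta}
\;\Rightarrow\;
\nabla_x \log p_\theta(x) = -\nabla_x E_\theta(x),
\qquad
x_{k+1} = x_k + \frac{\epsilon}{2}\,\nabla_x \log p(x_k) + \sqrt{\epsilon}\,z_k,
\quad z_k \sim \mathcal{N}(0, I).
\]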
Lucas Beyer (bl16) @giffmana
rofl the Bayesians are doing it again! (Might still be an interesting paper)
18 replies · 7 reposts · 279 likes · 37.8K views
Abhinav Kumar @abhinav1kumar
@iamsashasax @giffmana Diffusion models / VAEs are a great application of the Bayesian framework, with Langevin dynamics as the key tool.
1 reply · 0 reposts · 1 like · 67 views
Sasha Sax @iamsashasax
@giffmana The Bayesian framework is SO pretty, but I've never seen it result in something groundbreaking in deep learning. It's appealing and "not even wrong", which makes it a honeypot trap for early grad students
2 replies · 1 repost · 7 likes · 1.1K views
Emma Zang 臧熙璐 @DrEmmaZang
Unpopular Academic Advice #3: With the holidays approaching, here’s a piece of academic wisdom that doesn’t get emphasized enough: build real relationships with your department’s staff.

Back in my Duke days (before the era of Interfolio), our amazing staff member Lisa manually submitted recommendation letters for all 80+ of my job applications (I was applying across multiple disciplines… another story for another time!). She never made a single mistake. That kind of behind-the-scenes labor is invisible but absolutely essential.

One of my mentors, Professor Kenneth Land, taught me to never take staff for granted. Ken is not only a brilliant scholar but also one of the warmest, most grounded people I’ve met in academia. Every year, he took the staff out for lunch to show appreciation. I’ve followed his lead and now take our staff out for coffee from time to time. It’s a small gesture, but one that genuinely matters.

As we close the semester, a gentle reminder: academia runs on people whose names may not appear on our papers, but whose support makes those papers possible. A sincere thank you goes a long way.
4 replies · 13 reposts · 181 likes · 15.1K views
Abhinav Kumar reposted
Jitendra MALIK @JitendraMalikCV
(5/5) As it is becoming abundantly clear that the current hyper-scaling paradigm for LLMs is not going to lead to AGI / superintelligence, perhaps it is time to return to a “cooperate” framework? Publish research results openly, and build on what others have published.
4 replies · 1 repost · 105 likes · 15.8K views
JMB 🧙‍♂️ @jmbollenbacher
@giffmana @randall_balestr L2 isn't arbitrary though. L2 is the natural choice for linear operators. And when you constrain the embedding space to be near-Gaussian and you choose activation functions appropriately, NNs are just big stacks of locally linear operators. There are good reasons to pick L2.
1 reply · 0 reposts · 4 likes · 493 views
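Two textbook facts make "L2 is natural for linear operators" concrete (my gloss, not from the thread): among the \(\ell_p\) norms only \(\ell_2\) is invariant under every orthogonal linear map, and least squares is maximum likelihood under Gaussian noise.

% Left: rotational invariance, which no other l_p norm has.
% Right: least-squares fitting of a linear map A equals Gaussian MLE.
\[
\|Qx\|_2 = \|x\|_2 \;\;\text{for all orthogonal } Q,
\qquad
\arg\min_A \sum_i \|y_i - A x_i\|_2^2 \;=\; \arg\max_A \prod_i \mathcal{N}\bigl(y_i \mid A x_i,\, \sigma^2 I\bigr).
\]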
Lucas Beyer (bl16) @giffmana
Am I the only one who thinks the LeJEPA proof that the optimal embedding space is an isotropic Gaussian is pretty tautological? Had they chosen L1 instead of L2 for kNN, and LASSO instead of OLS for the probe, they would have proven that Laplace is the optimal embedding-space shape. To me it sounds like proving that if you want to plug something in in Switzerland, using the Swiss plug type is optimal. But I'm neither a mathematician nor an electrician lol. Don't get me wrong, maybe the method is good, I haven't tried it; I just don't think this proof says as much, or as broadly, as people talking about it make it sound.
Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞) @teortaxesTex

LeCun, LeJEPA, Le Swan Song
I don't think this is another joke; he might have proven something very general here, something that will influence our thinking about objectives well outside the JEPA franchise

40 replies · 24 reposts · 713 likes · 382.5K views
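The plug analogy has a precise counterpart in the standard loss/noise-model pairing (a textbook correspondence, not from the LeJEPA paper): squared L2 distance is the Gaussian negative log-likelihood and L1 distance is the Laplace one, so an optimality argument built on L2 tools (kNN, OLS probes) implicitly selects the Gaussian, while L1 tools would select the Laplace.

% Negative log-likelihoods, up to constants C, C' independent of x:
% squared L2 pairs with the Gaussian, L1 with the Laplace.
\[
-\log p_{\mathcal{N}(\mu,\,\sigma^2 I)}(x) = \frac{\|x-\mu\|_2^2}{2\sigma^2} + C,
\qquad
-\log p_{\mathrm{Laplace}(\mu,\,b)}(x) = \frac{\|x-\mu\|_1}{b} + C'.
\]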
Soumith Chintala @soumithchintala
Leaving Meta and PyTorch

I'm stepping down from PyTorch and leaving Meta on November 17th.

tl;dr: Didn't want to be doing PyTorch forever; it seemed like the perfect time to transition right after I got back from a long leave, with the project having built itself around me.

Eleven years at Meta. Nearly all my professional life. Making many friends for life. Almost eight years leading PyTorch, taking it from nothing to 90%+ adoption in AI. Walking away from this was one of the hardest things I've ever done. But I'm leaving with a full heart.

PyTorch handles exascale training now. It powers foundation models that are redefining intelligence. It's in production at virtually every major AI company. It's taught in classrooms from MIT to rural India. The tools I dreamed about making accessible? They are. The barrier to entry I wanted to lower? It's almost gone.

To be clear, there's so much more to do. As long as AI evolves at a breakneck pace, PyTorch will continue to play catch-up. Obsessing over the yet-to-come sometimes makes us forget how much we've already done.

To everyone who built this with me, who believed research should be joyful, that tools should be elegant, that open source changes everything: thank you. This wasn't my journey. It was ours.

What's next for me? Something small. Something new. Something I don't fully understand yet. Something uncomfortable. I could have moved to something else inside Meta. But I needed to know what's out there. I needed to do something small again. I couldn't live with the counterfactual regret of never trying something outside Meta.

It's very hard to leave. I probably have one of the AI industry's most leveraged seats; I lead the software layer that powers the entire AI industry. Every major AI company and hardware vendor is on speed dial. This kind of power is really hard to give up. But curiosity ultimately won out in my head.

Keep making AI delicious and accessible. I'll be watching. Probably filing issues. Definitely staying involved.

Is PyTorch going to be okay? I don't want to be doing PyTorch forever. I don't want to be like Guido or Linus, bound to a single thing for decades. Last November, coinciding with the birth of my daughter, I started planning my exit with Aparna. My goal was to leave PyTorch in a good and stable place.

By this August, during the second half of my parental leave, I knew: Edward, Suo, Alban, Greg, John, Joe and Jana were ready. The team faced hard people, product, technical and organizational problems and didn't feel the need to lean back on me to solve these for them (unlike in the past). The product story they crafted for the PyTorch Conference was coherent, really coherent. The things I'd flagged red were turning healthy. The project didn't need me anymore.

Unlike 2020–2022 (when I stepped down to go do robotics and came back when Lin, Dima and Dwarak left), I have strong confidence that this time PyTorch is truly resilient. The most aligned culture carriers of PyTorch (Greg, Alban, Ed, Jason and Joe) are at the decision table now, and people with strong value alignment (Suo, John and Jana) have joined them at the table. And there's a long list of equally value-aligned people willing to sit at the table should any of these people leave.

There are many little things that make up my confidence in the people: John worked on Julia and open source for a very long time (in fact we hacked a Torch.jl in 2015), Suo has been the strongest systems builder and strategic partner I've had for the past two years, and Jana worked on resilient core systems for a very long time; the long technical and organizational discussions I've had with her over the past few months give me confidence. And the product lineup and execution in 2025 should be sufficient evidence for any remaining doubt.

I'm confident that this band of PyTorchers is going to do exceptionally well. PyTorch might change in flavor because I no longer impose my own taste from the top, but I'm confident that the values are going to stay intact and the product is going to be awesome.

My time at Meta

The early years of FAIR were absolutely magical. I was part of a small family of absolutely brilliant people building state-of-the-art AI out in the open. From working on GANs with Emily Denton, Rob Fergus, Leon Bottou, Martin Arjovsky and the (now legendary) Alec Radford, to building StarCraft bots with Gabriel Synnaeve, to building the first FAIR cluster with Howard Mansell, to working on object detection with Adam Lerer and Piotr Dollar, to building PyTorch. It was more fun than I can describe in words. 2015 and 2016 were probably the most productive and professionally enjoyable years of my life. I'll probably romanticize this period of my life forever.

When I joined FAIR, I had massive impostor syndrome, and the first 3 months were very, very difficult. I can't credit Andrew Tulloch enough for being the most thoughtful, kind and welcoming mentor, without whom I wouldn't have made it. I'm so damn bullish for Meta just from the fact that he's back.

---

My time on PyTorch was special. I loved every part of building it: designing it, managing it, being the PM, TL, comms lead, doc engineer, release engineer, squashing bugs, growth hacking, turning it into a coherent product with hundreds of people, transitioning it to industry stakeholdership – the whole nine yards.

To the core PyTorch team at Meta: the engineers, researchers, open-source maintainers, docs writers, CI infrastructure folks, hardware partners, the community builders. To the hundreds more inside and outside Meta: thank you. You turned a library into a movement.

There are too many people to credit and thank, but I can't not mention Adam Paszke, Sam Gross, Greg Chanan, Joe Spisak, Alban Desmaison, Edward Yang, Richard Zou, Tongzhou Wang, Francisco Massa, Luca Antiga, Andreas Köpf, Zach DeVito, Zeming Lin, Adam Lerer, Howard Mansell and Natalia Gimelshein. And Schrep. They made the launch happen. And so many more people became centrally important later: Lu Fang, Xiaodong Wang, Junjie Bai, Nikita Shulga, Horace He, Mark Saroufim, Jason Ansel, Dmytro Dzhulgakov, Yangqing Jia, Geeta Chauhan, Will Constable, Briah Hirsh, Jane Xu, Mario Lezcano, Piotr Balecki, Yinghai Lu, Less Wright, Andrew Tulloch, Bruce Lin, Woo Kim, Helen Suk, Chris Gottbrath, Peng Wu, Joe Isaacson, Eli Uriegas, Tristan Rice, Yanan Cao, Elias Ellison, Animesh Jain, Peter Noordhuis, Tianyu Liu, Yifu Wang, Lin Qiao and hundreds more. It's criminal of me to not take the space to list out everyone else I should be mentioning here. PyTorch is nothing without its people ❤️.

The most joyful moments of building PyTorch were meeting users eager to share their happiness, love and feedback. I remember a grad student coming to me at NeurIPS 2017; in a slurring, emotional voice he said he'd been trying to make progress on his research for 3 years, but within 3 months of using PyTorch he made so much progress that he was ready to graduate. That moment made it tangible that what we do matters, a lot, to a lot of people, even if you don't constantly hear from them.

I do miss the intimacy of the PyTorch community, with a 300-person conference that felt like an extended family gathering, but I feel that's a small price to pay considering the scale of impact PyTorch is truly having today: yes, the Conference is now 3,000 people where market-moving deals get brokered, but it's helping orders of magnitude more people do their best AI work. I miss the intimacy, but I'm proud of that growth.

---

To Mark Zuckerberg and Mike Schroepfer, who believed that open-sourcing is fundamentally important and a sound business strategy. This is so hard to understand for most people within the course of business, but we've run lock-step on this strategy without ever having to discuss it. Without you two, neither FAIR nor PyTorch would've happened. And those mean so much to me.

To Yann LeCun and Rob Fergus, for building the magical early FAIR that I so revere.

To Aparna Ramani, a leader I find so rare at Meta: able to hold a really high bar for the org, technically brilliant with the span to discuss deep infra systems and industry strategy within the same conversation, and an absolute execution machine! I've learned so much from you.

To Santosh, Kaushik, Delia, Oldham and Ben, for being so welcoming to Infra. For someone coming over from FAIR with a wildly different culture, you all made me feel at home and part of the family. Thank you for that.

To all my managers who've championed me through the PSC video game – Serkan, Howard, Jerome, Abhijit, Yoram, Joelle, Aparna and Damien – I owe you a lifetime of drinks.

---

Signing off for now.

– Soumith
490 replies · 569 reposts · 10.8K likes · 2.5M views
Abhinav Kumar @abhinav1kumar
For people filling in Sec. 1.1 (Compulsory) of the form, run the following commands:

# CPU info
egrep "model name" /proc/cpuinfo | head -1
egrep -c ^processor /proc/cpuinfo

# GPU info
nvidia-smi

# Memory info
htop

# Disk type: 0 is SSD / 1 is HDD
lsblk -d -o name,rota
#CVPR2026 @CVPR

The 🆕 #CVPR2026 “Compute Reporting Form - Author Guidelines” is now available. cvpr.thecvf.com/Conferences/20…

0 replies · 2 reposts · 1 like · 432 views
Abhinav Kumar @abhinav1kumar
@CVPR Do we have to report the full GPU usage (accumulated across all experiments) or the GPU usage of the main experiment itself?
1 reply · 0 reposts · 1 like · 606 views
#CVPR2026 @CVPR
#AI research has an invisible cost: compute.
Starting with #CVPR2026, authors will report their compute usage. Aggregated data will help the community understand who can participate, what is sustainable, and how resources are used, promoting more transparent & equitable research.
#CVPR2026 @CVPR

The 🆕 #CVPR2026 “Compute Reporting Form - Author Guidelines” is now available. cvpr.thecvf.com/Conferences/20…

13 replies · 26 reposts · 169 likes · 111.1K views