Siddhartha Gairola
@sidgairo18
2.5K posts
🏔️📍🇩🇪 @ELLISforEurope 🇪🇺 PhD Student @cvml_mpiinf at MPI-INF & IST-A 物の哀れ ✨

Saarbrücken, Germany · Joined July 2017
659 Following · 1.4K Followers
Pinned Tweet
Siddhartha Gairola @sidgairo18
🚀New preprint: DAVE — Distribution-aware Attribution via ViT Gradient DEcomposition. 1/11 🔍 What’s new: We fix a persistent issue in ViT explainability: unstable, artifact-heavy pixel attributions. DAVE yields fine-grained pixel-level maps without patch-grid saliency.
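The tweet doesn't spell out how DAVE's gradient decomposition works, so for context here is a minimal vanilla gradient-times-input baseline in PyTorch + timm, the kind of pixel-level ViT attribution the preprint says it improves on. The model name and random input are illustrative; this is a sketch of the standard baseline, not DAVE itself.

```python
# Vanilla gradient-x-input saliency for a ViT (a common baseline, NOT DAVE).
# Assumes torch and timm are installed; pretrained=True downloads weights.
import torch
import timm

model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a real image
logits = model(x)
score = logits[0, logits[0].argmax()]  # predicted-class logit
score.backward()

# Aggregate gradient x input over channels: one attribution per pixel.
# Baselines like this are often noisy and show the patch-grid artifacts
# the tweet refers to; DAVE targets exactly that instability.
saliency = (x.grad * x.detach()).abs().sum(dim=1).squeeze(0)  # (224, 224)
print(saliency.shape, saliency.max().item())
```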
Siddhartha Gairola @sidgairo18
This seems confusing to me: the Author Response Deadline is March 30th, and then the Author-Reviewer Discussion ends April 7th? What is the difference between the two? Are authors still allowed to comment / respond to the reviewers during the period between March 31st and April 7th? @icmlconf
Siddhartha Gairola retweeted
Philippe-Antoine Hoyeck
My favourite academia joke of all time. "The moral of this story is: The topic of your dissertation isn't really important; what's important is who your advisor is." —Harvey Cox, When Jesus Came to Harvard
Siddhartha Gairola retweeted
Brady Long @thisguyknowsai
🚨 BREAKING: Meta researchers showed a model 2 million hours of video. No labels. No physics textbook. No supervision at all. It learned gravity. Object permanence. Inertia. And it just beat Gemini 1.5 Pro and GPT-4 level models at physics understanding. Here's what just happened:
Siddhartha Gairola retweeted
Grant Sanderson @3blue1brown
This video was a complete joy to make. Here's a short preview, but next time you're looking to sit down for 45 minutes of math and art, take a look at the full version on YouTube.
Siddhartha Gairola retweeted
Akshay 🚀 @akshay_pachaar
How to set up your Claude Code project? TL;DR: Most developers skip the setup and just start prompting. That's the mistake.

A proper Claude Code project lives inside a .claude/ folder. Start with CLAUDE.md as Claude's instruction manual, and split it into a rules/ folder as it grows. Add commands/ for repeatable workflows, skills/ for context-triggered automation, and agents/ for isolated subagents. Lock down permissions in settings.json.

There are two .claude/ folders: one committed with your repo, and one global at ~/.claude/ for personal preferences and auto-memory across projects. The .claude/ folder is infrastructure; treat it like one.

The article below is a complete guide to CLAUDE.md, custom commands, skills, agents, and permissions, and how to set them up properly (see the scaffold sketch below).
Akshay 🚀 @akshay_pachaar
x.com/i/article/2034…
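A hedged sketch of that layout: the tiny Python script below scaffolds the folders and files the tweet names. All file contents are placeholders, and the settings.json keys shown are an assumption about the permissions format rather than a definitive reference.

```python
# Hypothetical scaffold for the .claude/ layout described in the tweet.
# File contents are placeholders; consult the Claude Code docs for the
# real settings.json schema.
from pathlib import Path

LAYOUT = {
    "CLAUDE.md": "# Project instructions for Claude Code\n",
    "rules/style.md": "# Coding style rules\n",
    "commands/review.md": "# /review: run a structured code review\n",
    "skills/changelog/SKILL.md": "# Skill: keep CHANGELOG.md up to date\n",
    "agents/test-writer.md": "# Subagent: writes unit tests in isolation\n",
    "settings.json": '{\n  "permissions": {\n    "allow": [],\n    "deny": []\n  }\n}\n',
}

root = Path(".claude")
for rel, content in LAYOUT.items():
    path = root / rel
    path.parent.mkdir(parents=True, exist_ok=True)  # create nested folders
    path.write_text(content)
    print("wrote", path)
```

Running it once in a repo gives you the committed project-level .claude/; the same structure at ~/.claude/ would hold the personal, cross-project preferences the tweet mentions.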
Siddhartha Gairola retweeted
Jeremy Nguyen ✍🏼 🚢 @JeremyNguyenPhD
Claude Code for Academics: "A gentle introduction in how to use Claude Code for Academics." Presentation slides and GitHub repo from Alessandro Spina; link in reply.
Siddhartha Gairola retweeted
Natasha Jaques @natashajaques
The paper I’ve been most obsessed with lately is finally out: nbcnews.com/tech/tech-news…! Check out this beautiful plot: it shows how much LLMs distort human writing when making edits, compared to how humans would revise the same content.

We take a dataset of human-written essays from 2021, before the release of ChatGPT. We compare how people revise draft v1 -> v2 given expert feedback with how an LLM revises the same v1 given the same feedback. This enables a counterfactual comparison: how much does the LLM alter the essay compared to what the human was originally intending to write? We find LLMs consistently induce massive distortions, even changing the actual meaning and conclusions argued for.
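For intuition, here is a toy stand-in for that counterfactual comparison, using the stdlib's difflib as a crude surface-level similarity. The paper's actual distortion metric is presumably more semantic, and the example texts below are invented.

```python
# Toy proxy for the comparison described above: how far does each
# revision of v1 drift from what the human actually wrote in v2?
import difflib

def similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1] via difflib (surface only)."""
    return difflib.SequenceMatcher(None, a, b).ratio()

human_v1 = "The policy failed because funding was cut early."
human_v2 = "The policy failed largely because its funding was cut in year one."
llm_v2 = "The policy succeeded despite early funding cuts."  # meaning flipped

# Distortion = departure of a revision from the human's own v2.
print("human v2 vs itself:", similarity(human_v2, human_v2))
print("LLM v2 vs human v2:", similarity(llm_v2, human_v2))
```

A surface metric like this would score the LLM's flipped conclusion as only moderately different, which is precisely why a meaning-aware comparison of the kind the paper plots is needed.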
Songyou Peng @songyoupeng
Huge thanks for the acknowledgment! It is really an honor to receive an honorable mention twice at @3DVconf, the first a best paper honorable mention 2 years ago :) Also big congrats to my dear colleague and friend @Mi_Niemeyer for the award, so well deserved!
International Conference on 3D Vision @3DVconf
3DV Outstanding Doctoral Dissertation Award Honorable Mention goes to Songyou Peng! @songyoupeng Thesis title: "Neural Scene Representations for 3D Reconstruction and Scene Understanding" #3DV2026
Siddhartha Gairola retweeted
Athenaeum Book Club @athenaeumbc
C. S. Lewis’s advice to a young schoolgirl on how to become a better writer:
Siddhartha Gairola retweeted
Daily Stoic @dailystoic
One of the most powerful passages in Marcus Aurelius' "Meditations":
Siddhartha Gairola @sidgairo18
Yeah, I would assume (hope) that is the case, and the ~similar acceptance rate across venues is a reasonable indicator of that. However, as a reviewer I find this quite unsettling, and as an author I don't really know what the scores mean sometimes, especially in the borderline cases.
Damien Teney @DamienTeney
@sidgairo18 Aren't the absolute scores irrelevant? What matters is the global relative ranking of papers (which we get because every reviewer/AC sees many papers). And the calibration across conferences happens by enforcing (implicitly or explicitly) a ~similar acceptance rate(?). 🤔
Siddhartha Gairola @sidgairo18
Food for thought 🤔 I've been thinking about this long and hard, having reviewed for popular ML/CV conferences (ICML, ICLR, NeurIPS, CVPR, ICCV, ECCV). With the community submitting papers across all of these, it only makes sense to have a uniform reviewer form, guidelines, rules, and format across these conferences.

Personally, I have a real hard time recalibrating my scale from 1-10 (ICLR) to 1-6 for CVPR; then comes ICML, which also uses 1-6 but where 3 and 4 mean weak reject/accept instead of borderline reject/accept (as at CVPR). This only gets trickier when you add ICCV, ECCV, and NeurIPS into the mix. Then add the NLP and robotics conferences, and the entire system becomes more and more confusing, with uncalibrated reviewer scores that may or may not truly reflect the reviewer's intentions.

Happy to hear the thoughts of others. cc: @icmlconf @CVPR @NeurIPSConf @iclr_conf @ICCVConference @eccvconf
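A small sketch of why this is hard: naively normalizing scores to a common range erases exactly the label semantics that differ between venues. The scale definitions below are simplified from the tweet, not official rubrics.

```python
# The calibration problem in miniature: identical normalized scores can
# carry different recommendation labels at different venues.
VENUE_SCALES = {
    "ICLR": {"min": 1, "max": 10, "borderline": (5, 6)},
    "CVPR": {"min": 1, "max": 6, "borderline": (3, 4)},  # 3/4 = borderline reject/accept
    "ICML": {"min": 1, "max": 6, "borderline": (3, 4)},  # but 3/4 = weak reject/accept
}

def normalize(venue: str, score: int) -> float:
    """Map a raw score to [0, 1] within its venue's range."""
    s = VENUE_SCALES[venue]
    return (score - s["min"]) / (s["max"] - s["min"])

# A CVPR 4 and an ICML 4 normalize identically even though one means
# "borderline accept" and the other "weak accept": the label semantics
# are lost, which is the calibration problem the tweet describes.
for venue in ("ICLR", "CVPR", "ICML"):
    hi = VENUE_SCALES[venue]["borderline"][1]
    print(venue, hi, "->", round(normalize(venue, hi), 3))
```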
Anirbit @anirbit_maths
@sidgairo18 This is exactly the mess that TMLR solved. There are no scores in round 1 of reviews; there is only a yes/no decision after rebuttals. There are enough reasons why every system needs to converge to this.
Siddhartha Gairola @sidgairo18
Platonic Solids illustration from Mysterium Cosmographicum by Johannes Kepler. How did they make such beautifully articulate figures way back in ~1596/1597?
Siddhartha Gairola @sidgairo18
@shashankska Sure, but I was referring more to the reviewer form and scores being uniform/calibrated across venues.