Rama Vedantam
@rama_vedantam
AI Researcher | x-FAIR | https://t.co/GKsSzIxSjf
New York · Joined December 2009
101 Following · 905 Followers
716 posts
Rama Vedantam @rama_vedantam ·
For machines that are supposed to be getting more intelligent than any human alive, eating the whole web for breakfast, chatbot fatigue seems, quite paradoxically, starkly real. (3/3)
Rama Vedantam @rama_vedantam ·
This is a huge contrast to people. The smartest people I have met get more and more interesting the more you talk to them, peeling layer after layer of insight, subtlety and personality. (2/3)
Rama Vedantam @rama_vedantam ·
I'm pretty sure this has been said before, but IMHO the biggest problem with LLMs is that you can get them to say anything you want.
Rama Vedantam @rama_vedantam ·
Bringing this to AI today… Do #LLMs, #CoT (implicit/explicit), and #AI reasoning approaches actively address these challenges? Or do they, in some ways, mirror these same defects? [6/6] 🤔📈
Rama Vedantam @rama_vedantam ·
🛋️ Couch Philosophy: Reason has been seduced by a long history of fruitless and quarrelsome philosophical speculation. Where should AI reasoning draw the line between useful abstraction and endless debate? [5/6]
Rama Vedantam @rama_vedantam ·
Deconstructing the mysteries of class-conditional data augmentation 📚 in ImageNet models. #NeurIPS2023 Meta-lesson: big data and big models can interact with each other in surprising and interesting ways! Led by @polkirichenko, collab with @andrewgwils @nyuniversity @AIatMeta
Polina Kirichenko @polkirichenko

Come to our poster at NeurIPS! neurips.cc/virtual/2023/p… W/ amazing co-authors @randall_balestr Mark Ibrahim @D_Bouchacourt @rama_vedantam @mamhamed @andrewgwils And check out Randall's thread too! x.com/randall_balest… 9/9
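The idea of class-conditional augmentation can be sketched in a few lines (a hypothetical toy policy of my own, with illustrative class names, not the paper's actual setup): choose the augmentation applied to each example based on its class label.

```python
# Toy sketch of class-conditional augmentation (illustrative only):
# the augmentation applied to an example depends on its class label.

def flip(x):
    """Stand-in for a horizontal flip."""
    return x[::-1]

def identity(x):
    """No augmentation."""
    return x

# Hypothetical policy: e.g., disable flipping for orientation-sensitive
# classes (a "six" flipped could look like a "nine").
POLICY = {"dog": flip, "six": identity}

def augment(x, label):
    # Fall back to no augmentation for classes without a policy entry.
    return POLICY.get(label, identity)(x)

print(augment([1, 2, 3], "dog"))  # [3, 2, 1]
print(augment([1, 2, 3], "six"))  # [1, 2, 3]
```

The interesting empirical question the thread points at is how such per-class choices interact with dataset and model scale.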

Rama Vedantam @rama_vedantam ·
Abstaining correctly is very important for open-ended tasks such as VQA. Check out our work at #CVPR2023 led by the amazing @cdancette @marcus_rohrbach on this important topic. Fantastic collaboration!
Corentin Dancette @cdancette

I am very excited to be at @CVPR in Vancouver this week to present the last work of my PhD, "Improving Selective VQA by Learning From Your Peers". We propose a method and benchmark to improve the reliability of VQA models. See you there! openaccess.thecvf.com/content/CVPR20…
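The abstention idea can be sketched generically (a plain confidence-threshold baseline of my own, not the peer-learning method the paper proposes): answer only when the model's confidence clears a threshold, otherwise abstain.

```python
# Generic selective-prediction sketch (illustrative baseline, not the
# paper's method): answer when max softmax confidence >= threshold,
# otherwise abstain.
import numpy as np

def selective_predict(probs, threshold=0.7):
    """probs: (n, k) softmax scores. Returns (answers, abstain_mask)."""
    conf = probs.max(axis=1)
    answers = probs.argmax(axis=1)
    abstain = conf < threshold
    return answers, abstain

probs = np.array([
    [0.90, 0.05, 0.05],  # confident -> answer class 0
    [0.40, 0.35, 0.25],  # uncertain -> abstain
])
answers, abstain = selective_predict(probs)
print(answers.tolist(), abstain.tolist())  # [0, 0] [False, True]
```

Selective-VQA benchmarks then trade off coverage (how often the model answers) against risk (error rate on the answered subset).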

Danish Pruthi @danish037 ·
Flying on a one-way ticket in a long time. See you in Bangalore ❤️
Rama Vedantam @rama_vedantam ·
@BlackHC Yup, key is to do this with the right choice of basis. That is our technical contribution.
Andreas Kirsch 🇺🇦 @BlackHC ·
@rama_vedantam Thanks for explaining! So you remove information from the embeddings that are confusing to the classifier because features that were never seen during training change the predictions in unintended ways?
Rama Vedantam @rama_vedantam ·
Our paper on "Nullspace Occupancy" as a failure mode for distributional robustness in deep learning just got accepted into #iclr2023 [1/3]. Key Idea 👇
Rama Vedantam @rama_vedantam ·
@BlackHC I don't think so. Intuitively, it's more like: IID inputs span a subspace (picture a 2D plane), and OOD inputs lie in the nullspace of that IID subspace. Hence the name "Nullspace Occupancy".
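That subspace/nullspace picture can be sketched with a toy construction (my own illustration with random features, not the paper's actual method): IID features span a low-dimensional subspace of the embedding space, while OOD inputs put mass in its orthogonal complement.

```python
# Toy illustration of the "Nullspace Occupancy" intuition (not the
# paper's method): IID features span a 2-D subspace of a 5-D embedding
# space; OOD inputs have a nonzero component outside that subspace.
import numpy as np

rng = np.random.default_rng(0)

# IID features, confined to a 2-D subspace by construction.
basis = rng.standard_normal((5, 2))
iid = rng.standard_normal((200, 2)) @ basis.T        # shape (200, 5)

# Orthonormal basis for the IID subspace (rank 2 by construction).
U, _, _ = np.linalg.svd(iid.T, full_matrices=False)
U = U[:, :2]                                          # shape (5, 2)

def nullspace_energy(x, U):
    """Norm of the component of x lying outside the IID subspace."""
    residual = x - (x @ U) @ U.T
    return np.linalg.norm(residual, axis=-1)

# OOD inputs occupy all 5 dimensions.
ood = rng.standard_normal((200, 5))

# IID points have ~zero nullspace energy; OOD points do not.
print(nullspace_energy(iid, U).max())
print(nullspace_energy(ood, U).mean())
```

Picking that orthonormal basis well is exactly the "right choice of basis" point from the earlier reply.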
Andreas Kirsch 🇺🇦 @BlackHC ·
@rama_vedantam Is this the same as feature collapse in other works? Different inputs get mapped to the same embeddings, so the model cannot differentiate them?