Arnavi Chheda-Kothary

129 posts

@arnavic

PhD student @ University of Washington CSE | Previously @ Ai2, MSFT, Columbia, UW

Joined March 2010
174 Following · 209 Followers
Arnavi Chheda-Kothary retweeted
Ai2@allen_ai·
We've released a Chrome extension for Asta—a faster way to go from finding a paper to asking questions about it while you read. 🧵
Arnavi Chheda-Kothary retweeted
Barack Obama@BarackObama·
Congratulations to the Super Bowl champion @Seahawks! This defense was special. MVP Kenneth Walker was dominant. And Sam Darnold gave us one of the best comeback stories in a long time. Enjoy the celebration.
Jiacheng Liu@liujc1998·
Belated update: I defended my PhD last month! I am tremendously grateful to my advisors, @HannaHajishirzi and @YejinChoinka. Without their incredible support, I wouldn’t have had so much fun exploring bold ideas, like taking a journey into the ocean of LLM pretraining data. 🥰🥰
Arnavi Chheda-Kothary@arnavic·
It’s been incredible for me to return to Ai2 as a student researcher on this project with @turingmusician @lucyluwang and @josephcc, getting to combine my background in accessibility and engineering to bring this prototype to life! Please feel free to reach out with feedback.
Arnavi Chheda-Kothary@arnavic·
Paper+Figure QA is early and still evolving, so expect some rough edges. We’d love your feedback as we improve it! Paper+Figure QA is only available for open access papers, so there may be some papers we do not currently support. In those cases, please try a different paper.
Arnavi Chheda-Kothary@arnavic·
Ever want to ask questions about a paper, including its figures & tables? 📊📈 Want smoother interactions w/papers on desktop & mobile? Try Paper+Figure QA, a new tool from @allen_ai that answers with the original figures, tables, and excerpts from papers: paperfigureqa.allen.ai
Arnavi Chheda-Kothary retweeted
Jonathan Bragg@turingmusician·
Agent benchmarks don't measure true *AI* advances. We built one that's hard & trustworthy. 👉 AstaBench tests agents w/ *standardized tools* on 2400+ scientific research problems 👉 SOTA results across 22 agent *classes* 👉 AgentBaselines agents suite 🆕 arxiv.org/abs/2510.21652 🧵👇
Arnavi Chheda-Kothary retweeted
Venkatesh Potluri@venkypotluri·
@umsi we developed accessibility guidelines to evaluate AI coding tools like @Replit, @julesagent and @code. Blind or visually impaired developers, #A11Y experts and academics, please participate in our study to ensure programming remains accessible! idea11y.dev/VibeCheck/
Jesse Dodge@JesseDodge·
Personal update: I'm excited to be joining @Meta! I'm deeply grateful for the opportunities I've had at @allen_ai over the past 6 years (including three paper awards in the last two years). Onward to the next chapter! 🥳
Arnavi Chheda-Kothary@arnavic·
Thank you to my collaborators, and a huge huge thank you to @BrianHCI who has stuck with me and with this work through several rejection cycles. This work happened as a part of my Master’s thesis working at the CEAL lab, and I’m grateful that we saw it over the finish line! (6/6)
Arnavi Chheda-Kothary@arnavic·
Read more about our work here — dl.acm.org/doi/10.1145/37…. For those at CHI, I’ll be presenting on Wednesday during the Spatial Interactions session! Please come find me then or anytime this week to chat further about all things screen readers, accessibility, and AI! (5/6)
Arnavi Chheda-Kothary@arnavic·
While other media like mobile devices have spatial affordances built in, desktops continue to feel linear and homogeneous when parsing content with screen readers. (3/6)
Arnavi Chheda-Kothary@arnavic·
In this work, we built and deployed a browser extension that enables spatial-input and spatial-output screen reader interactions for increased layout understanding of web pages on desktops. (2/6)
Arnavi Chheda-Kothary@arnavic·
Kicking off my very first CHI today! Great to be at #CHI2025 in beautiful Yokohama, where I’ll be presenting work on incorporating spatial interactions into desktop screen readers titled: "It Brought Me Joy": Opportunities for Spatial Browsing in Desktop Screen Readers. (1/6)
Jiacheng Liu@liujc1998·
Today we're unveiling OLMoTrace, a tool that enables everyone to understand the outputs of LLMs by connecting to their training data. We do this on unprecedented scale and in real time: finding matching text between model outputs and 4 trillion training tokens within seconds. ✨
Ai2@allen_ai

For years it’s been an open question — how much is a language model learning and synthesizing information, and how much is it just memorizing and reciting? Introducing OLMoTrace, a new feature in the Ai2 Playground that begins to shed some light. 🔦

Arnavi Chheda-Kothary@arnavic·
In this work, we present a Human-AI system designed to facilitate deeper engagement and conversations between blind or low vision (BLV) family members and their sighted children around child-created visual artwork.
Arnavi Chheda-Kothary@arnavic·
Excited to be at #IUI25 in Cagliari, Italy this week to present our paper “ArtInsight: Enabling AI-Powered Artwork Engagement for Mixed Visual-Ability Families”. You can find the full paper here: dl.acm.org/doi/full/10.11…