Sima Noorani

36 posts

@NooraniSimaa

PhD candidate @Penn

Philadelphia, PA · Joined March 2024
185 Following · 129 Followers
Pinned Tweet
Sima Noorani @NooraniSimaa
When humans and AI collaborate, what should uncertainty quantification look like? Our new paper proposes two principles---no counterfactual harm and complementarity---and gives distribution-free guarantees without assumptions on the task, AI model, or human behavior.
Sima Noorani reposted
Shayan Kiyani @ShayanKiyani1
If reliable verification is expensive, then intelligence is partly the art of spending it wisely. Reasoning with LLMs increasingly unfolds through weak (fast but noisy) and strong (reliable but costly) verification. The question is: how should we orchestrate the two?
Sima Noorani @NooraniSimaa
Empirically, we study the framework in both LLM-simulated interactions and a real human crowdsourcing study, and show that enforcing these two principles leads to predictable shifts in downstream human decision quality. This validates that these human-centric principles serve as practical levers for steering multi-round HAI collaboration dynamics.
Sima Noorani @NooraniSimaa
In real-world multi-round human–AI interactions, the human decides whether to engage, how long to continue, and how much to trust the AI, while bearing accountability for the final outcome. How should these interactions be designed to incentivize human engagement in a principled way?
Sima Noorani @NooraniSimaa
Empirically, the resulting collaborative sets improve over both human-only and AI-only sets in both marginal coverage and average set size.
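The marginal-coverage claim above is the standard split conformal guarantee. As background only (this is a toy sketch of plain split conformal classification on synthetic data, not the paper's collaborative human–AI method), one can see how a calibration set turns model scores into sets with the target coverage:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_test, n_classes, alpha = 500, 200, 3, 0.1

# Stand-in "model": a fixed random linear scorer over 2 features.
W = rng.standard_normal((2, n_classes))

def predict_proba(X):
    logits = X @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Synthetic data whose labels are drawn from the model's own predicted
# distribution, so the toy model is calibrated by construction.
def sample(n):
    X = rng.standard_normal((n, 2))
    P = predict_proba(X)
    y = np.array([rng.choice(n_classes, p=p) for p in P])
    return X, y

X_cal, y_cal = sample(n_cal)
X_test, y_test = sample(n_test)

# Split conformal: nonconformity = 1 - model probability of the true label.
scores = 1.0 - predict_proba(X_cal)[np.arange(n_cal), y_cal]
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(scores, q_level, method="higher")

# Prediction set per test point: all labels passing the threshold.
sets = (1.0 - predict_proba(X_test)) <= qhat
coverage = sets[np.arange(n_test), y_test].mean()
print(f"marginal coverage ~ {coverage:.2f} (target >= {1 - alpha})")
```

The tweet's comparison (collaborative vs. human-only vs. AI-only sets) would additionally fold human responses into the score; that part is specific to the paper and not reproduced here.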
Sima Noorani reposted
Aaron Roth @Aaroth
How should you use forecasts f:X->R^d to make decisions? It depends what properties they have. If they are fully calibrated (E[y | f(x) = p] = p), then you should be maximally aggressive and act as if they are correct --- i.e. play argmax_a E_{o ~ f(x)}[u(a,o)]. On the other hand
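The decision rule in the tweet, argmax_a E_{o~f(x)}[u(a,o)], is a one-liner once the utility matrix is written down. A minimal sketch with a hypothetical 2-action, 2-outcome utility matrix (the numbers are illustrative, not from the tweet):

```python
import numpy as np

# Utility matrix u[a, o]: utility of action a when outcome o occurs.
# Hypothetical rain example: outcome 0 = dry, outcome 1 = rain.
u = np.array([
    [1.0, -2.0],   # action 0: go out without an umbrella
    [0.2,  0.5],   # action 1: carry an umbrella
])

def best_response(forecast):
    """Best response to a forecast p over outcomes: argmax_a E_{o~p}[u(a, o)]."""
    expected_utility = u @ forecast  # shape (num_actions,)
    return int(np.argmax(expected_utility))

# If the forecast is fully calibrated, acting as if it is correct is optimal.
print(best_response(np.array([0.9, 0.1])))  # -> 0 (low rain probability)
print(best_response(np.array([0.3, 0.7])))  # -> 1 (high rain probability)
```

The tweet cuts off at "On the other hand"; the thread presumably goes on to weaker forecast guarantees, where acting as if the forecast is correct is no longer justified.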
Sima Noorani reposted
Shayan Kiyani @ShayanKiyani1
We push conformal prediction and its trade-offs beyond regression & classification — into query-based generative models. Surprisingly (or not?), missing mass & Good-Turing estimators emerge as key tools once again. Very excited about this one!
Quoting Sima Noorani @NooraniSimaa:

How can we quantify uncertainty in LLMs from only a few sampled outputs? The key lies in the classical problem of missing mass—the probability of unseen outputs. This perspective offers a principled foundation for conformal prediction in query-only settings like LLMs.

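The Good-Turing estimator referenced above has a simple closed form: the missing mass (total probability of outputs never observed) is estimated by the fraction of samples that occur exactly once. A sketch on hypothetical sampled outputs (not the paper's full conformal procedure):

```python
from collections import Counter

def good_turing_missing_mass(samples):
    """Good-Turing estimate of the missing mass: the probability of
    unseen outputs, estimated as (#singleton outputs) / (#samples)."""
    counts = Counter(samples)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(samples)

# Toy example: 8 sampled LLM outputs; C, D, E each appear exactly once.
outs = ["A", "A", "B", "B", "B", "C", "D", "E"]
print(good_turing_missing_mass(outs))  # -> 0.375 (3 singletons / 8 samples)
```

Intuitively, singletons are the outputs we "barely" saw, so their frequency is a proxy for how much probability mass still lies on outputs we never sampled at all.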