Fengyu Cai
@trumancfy
Ph.D. student @ SOS & UKP, TU Darmstadt
Darmstadt, Germany · Joined October 2019
187 Following · 34 Followers
11 posts
Fengyu Cai retweeted
Jeremy Nguyen ✍🏼 🚢 @JeremyNguyenPhD
Claude Code for Academics: "A gentle introduction to how to use Claude Code for Academics." Presentation slides and GitHub repo from Alessandro Spina; link in reply.
[image attached]
Fengyu Cai retweeted
Sharon Li @SharonYixuanLi
Almost there...
[image attached]
Fengyu Cai retweeted
Tong Chen @tomchen0
OpenAI's blog (openai.com/index/why-lang…) points out that today's language models hallucinate because training and evaluation reward guessing instead of admitting uncertainty. This raises a natural question: can we reduce hallucination without hurting utility? 🤔 On-policy RL with our Binary Retrieval-Augmented Reward (RAR) improves factuality (a 40% reduction in hallucination) while preserving the utility (win rate and accuracy) of fully trained, capable LMs like Qwen3-8B. [1/n]
[image attached]
Fengyu Cai retweeted
Xinran Zhao @xinranz3
(1/7) 🧐 Can we dynamically select and integrate the best retrievers for each query? We introduce ✨MoR✨: a zero-shot way to handle diverse queries with a weighted combination of heterogeneous retrievers – even including human information sources! We will present this paper at #EMNLP2025 today!
🗓️ Welcome to our poster session at Hall C, Wednesday, Nov 5, 16:30 – 18:00 👋 We have more details in our paper!
📄 Paper: arxiv.org/pdf/2506.15862
🧑‍💻 Code: github.com/Josh1108/Mixtu…
Please come to our poster session or reach out if you want to chat more! 🧵
[image attached]
Fengyu Cai retweeted
UKP Lab @UKPLab
🤖💡 LLMs are great at reasoning! But how confident are they in their answers? Our latest survey dives into the world of LLM confidence estimation and calibration. 📰 arxiv.org/abs/2311.08298 Learn more about our #NAACL2024 paper in this thread 🧵! (1/9) #NLProc #Survey
[image attached]
Fengyu Cai retweeted
Shikhar @ShikharMurty
Really interesting presentation by Yejin Choi on using language as a substrate for producing intuitive commonsense inferences, and treating this kind of reasoning as a generative problem. @StanfordHAI #NeuroHAI
[image attached]
Fengyu Cai retweeted
Microsoft Research @MSFTResearch
Neural network models have recently made great advances on a wide range of tasks, achieving state-of-the-art results, but these models also require large amounts of annotated data to train. Learn about new approaches to addressing the scarcity of labeled data: aka.ms/AA9sfgj
Fengyu Cai retweeted
Sebastian Weichwald @sweichwald
Slides are up for #eth2020: sweichwald.de/slides.html#et… This was a super fun experience for me coming back to @ETH_en – great to see students intrigued and asking questions much smarter than my talk 🙃 Thanks Stefan Bauer & @bschoelkopf for inviting me – causal representation learning 🔥
[image attached]