
🚀 New Survey Alert! 🚀
Excited to share our latest survey on Trustworthy Retrieval-Augmented Generation (RAG) for Large Language Models (LLMs)! 🎉
With the increasing adoption of RAG in AI applications—ranging from medical question answering to legal document analysis—ensuring trustworthiness has become a critical challenge. In this survey, we provide:
✅ A comprehensive roadmap for developing trustworthy RAG systems
✅ A structured discussion on six key pillars: Reliability, Privacy, Safety, Fairness, Explainability, and Accountability
✅ A deep dive into existing challenges, methods, and open problems
✅ A review of evaluation metrics and future research directions
This work is a collaboration with Zheyuan (Frank) Liu, Leyao Wang, Yongjia Lei, Yuying Zhao, Xueqi Cheng, Qingkai Zeng, Xin Luna Dong, Yinglong Xia, Krishnaram Kenthapadi, Ryan A. Rossi, Franck Dernoncourt, Mehrab Tanjim, Nesreen K. Ahmed, Xiaorui Liu, Wenqi Fan, Erik Blasch, Yu Wang, Meng Jiang, and Tyler Derr. Huge thanks to my co-authors and advisors for their insights!
📄 Preprint: lnkd.in/dEaWGFTa
🔍 GitHub Reading List: lnkd.in/dXavPb2i
We hope this survey serves as a valuable resource for the community! Looking forward to feedback and discussions. Let’s push forward towards more trustworthy AI!


