
Jindong Wang
@jd92wang
Assistant Professor @williamandmary, Ex Senior Researcher @MSFTResearch. Generative AI, machine learning, large language models, AI for social sciences.

🎙️ New Podcast Episode: Deep Dive: Trustworthy, Multimodal, and Personalized AI Safety

In this deep dive episode, Alec Crawford sits down with Dr. Jindong Wang @jd92wang, Assistant Professor at @williamandmary and former Microsoft researcher, to explore what trustworthy AI really means — and why it’s foundational to societal trust in advanced systems.

They discuss:
🔐 Trustworthy AI Principles — Privacy, robustness, transparency, and user-centric design as the core pillars of responsible AI.
🧠 Technical Safeguards — Differential privacy, federated learning, and adaptive risk mitigation strategies.
🌐 Multimodal & Multi-Agent Safety — The complex risks emerging from systems that combine text, image, audio, and agentic collaboration.
📊 Benchmarks & Governance — Evolving regulatory frameworks and measurable standards for AI risk management.
👥 Human Oversight & AI Literacy — Why education and human-in-the-loop governance remain essential as AI scales.

As AI becomes more deeply woven into society, performance alone isn’t enough. Building systems people can trust requires technical rigor, ethical foresight, and scalable education. If you care about the future of AI safety, governance, and responsible innovation, this episode is essential listening.

🎧 Spotify: open.spotify.com/episode/5PfkCD…
📺 YouTube: youtu.be/hGW0-j7aur8
🌐 Website: aicrisk.com/podcast/episod…

#TrustworthyAI #AISafety #ResponsibleAI #AIGovernance #DataScience #ArtificialIntelligence



Excited to share that our project has been selected by the @nvidia @NVIDIAAI Academic Grant Program! We will receive 4 RTX Pro 6000 GPUs to support our research. A huge thanks to our collaborators! #NVIDIAGrant


NEW: @NeurIPSConf, one of the world’s top academic AI conferences, accepted research papers with 100+ AI-hallucinated citations, a new report claims.

Canadian startup @GPTZeroAI analyzed more than 4,000 research papers accepted and presented at NeurIPS 2025 and says it uncovered hundreds of AI-hallucinated citations that slipped past the three or more reviewers assigned to each submission, spanning at least 53 papers in total. The hallucinations had not previously been reported.

In some cases, an AI model blended or paraphrased elements from multiple real papers, including believable-sounding titles and author lists, the company says. Others appeared to be fully made up: a nonexistent author, a fabricated paper title, a fake journal or conference, or a URL that leads nowhere. In still other cases, the model started from a real paper but made subtle changes — expanding an author’s initials into a guessed first name, dropping or adding coauthors, or paraphrasing the title. Some, however, are plainly wrong — citing “John Smith” and “Jane Doe” as authors, for example.

fortune.com/2026/01/21/neu…
