Jindong Wang

934 posts

Jindong Wang
@jd92wang

Assistant Professor @williamandmary, Ex Senior Researcher @MSFTResearch. Generative AI, machine learning, large language models, AI for social sciences.

United States · Joined September 2012
581 Following · 5.2K Followers

Pinned Tweet
Jindong Wang@jd92wang·
2025 research review: personalized safety in LLMs, multimodal cultural understanding, knowledge editing vs. unlearning, and topology optimization for multi-agent collaboration. Full write-up: jd92wang.notion.site/My-Research-in… Thank you to all my supporters. Happy to discuss.
Jindong Wang@jd92wang·
I started my PhD in 2014 on sensor-based human activity recognition. 🏓 Today, introducing *HAROOD*, accepted by #KDD2026 @kdd_news: the first comprehensive codebase and benchmark for activity recognition in out-of-distribution (OOD) scenarios. arxiv.org/abs/2512.10807
Jindong Wang@jd92wang·
I had a wonderful visit to @UVA @CS_UVA yesterday and talked to so many faculty members and friends!
Jindong Wang@jd92wang·
How do you adapt your LLMs to be class-specific teachers? Say, how do you explain what AI is to a young elementary-school student? 🎯 Introducing **Classroom AI**, published in @Nature_NPJ Artificial Intelligence, in collaboration with @jiooh99, @profjamesevans, and Steven Whang: rdcu.be/e6Dli
Jindong Wang retweeted
AIRiskInc@AIRiskInc·
Over the next 10 years, AI won’t just change how we work. It will influence how we learn, write, and think.

In this clip, Dr. Jindong Wang @jd92wang raises an important reflection: as AI-generated content grows, how do we preserve diversity of thought?

This isn’t about fear — it’s about intention. The real opportunity ahead? Building AI that strengthens human creativity, not standardizes it.

✨ Thoughtful development. Intentional use. Human-centered design. That’s how we shape the future of work. 🌍

🎥 YouTube: youtu.be/hGW0-j7aur8
🎧 Spotify: open.spotify.com/episode/5PfkCD…
🌐 Website: aicrisk.com/podcast/episod…

How do you see AI shaping creativity in the years ahead?

#AI #FutureOfWork #Leadership #Innovation #ResponsibleAI
Jindong Wang@jd92wang·
A wonderful experience talking with Alec about trustworthy AI and so many interesting things!
AIRiskInc@AIRiskInc

🎙️ New Podcast Episode: Deep Dive: Trustworthy, Multimodal, and Personalized AI Safety

In this deep dive episode, Alec Crawford sits down with Dr. Jindong Wang @jd92wang, Assistant Professor at @williamandmary and former Microsoft researcher, to explore what trustworthy AI really means — and why it’s foundational to societal trust in advanced systems.

They discuss:
🔐 Trustworthy AI Principles — Privacy, robustness, transparency, and user-centric design as the core pillars of responsible AI.
🧠 Technical Safeguards — Differential privacy, federated learning, and adaptive risk mitigation strategies.
🌐 Multimodal & Multi-Agent Safety — The complex risks emerging from systems that combine text, image, audio, and agentic collaboration.
📊 Benchmarks & Governance — Evolving regulatory frameworks and measurable standards for AI risk management.
👥 Human Oversight & AI Literacy — Why education and human-in-the-loop governance remain essential as AI scales.

As AI becomes more deeply woven into society, performance alone isn’t enough. Building systems people can trust requires technical rigor, ethical foresight, and scalable education. If you care about the future of AI safety, governance, and responsible innovation — this episode is essential listening.

🎧 Spotify: open.spotify.com/episode/5PfkCD…
📺 YouTube: youtu.be/hGW0-j7aur8
🌐 Website: aicrisk.com/podcast/episod…

#TrustworthyAI #AISafety #ResponsibleAI #AIGovernance #DataScience #ArtificialIntelligence

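One of the safeguards the episode names, differential privacy, can be sketched concretely. Below is a minimal, illustrative implementation of the classic Laplace mechanism for releasing a private mean; the function names (`laplace_noise`, `dp_mean`) are my own, not from any library or system discussed in the episode.

```python
# Minimal sketch of epsilon-differential privacy via the Laplace mechanism.
# Illustrative only; not the implementation used by any speaker.
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) by inverse-CDF from a single uniform draw."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values: list[float], epsilon: float, lower: float, upper: float) -> float:
    """Release the mean of `values` with epsilon-differential privacy.

    Each value is clipped to [lower, upper], so one person's contribution
    changes the mean by at most (upper - lower) / n (the sensitivity);
    Laplace noise with scale sensitivity / epsilon masks that change.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / len(clipped)
    true_mean = sum(clipped) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)
```

Smaller `epsilon` means more noise and stronger privacy; the clipping bounds are what make the sensitivity finite in the first place.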
Shekswess@Shekswess·
@jd92wang @MaziyarPanahi @NVIDIAAI @nvidia This is awesome!!! I'm happy that federated learning is still a big thing (my bachelor's thesis was on federated learning). What do you use, and what will you experiment with, if it's not secret?
Jindong Wang retweeted
DAIR.AI@dair_ai·
What if you could get multi-agent performance from a single model?

Multi-agent debate systems are powerful. Multiple LLMs can critique each other's reasoning, catch errors, and converge on better answers. However, the cost scales linearly with the number of agents. Five agents means 5x the compute. Twenty agents means 20x, and so on. But the intelligence gained from debate doesn't have to stay locked behind a compute wall.

This new research introduces AgentArk, a framework that distills the reasoning capabilities of multi-agent debate into a single LLM through trajectory extraction and targeted fine-tuning. This work addresses an important problem: multi-agent systems are effective but expensive at inference time. AgentArk moves that cost to training time, letting a single model carry the reasoning depth of an entire agent team.

The key idea: run multi-agent debate offline to generate high-quality reasoning traces, then train a smaller model to internalize those patterns. Five agents debate, one student learns.

AgentArk tests three distillation methods. RSFT uses supervised fine-tuning on correct trajectories. DA filters for diverse reasoning paths. PAD, their strongest method, preserves the full structure of multi-agent deliberation, capturing how agents verify intermediate steps and localize errors.

The results across 120 experiments:
> PAD achieves a 4.8% average gain over single-agent baselines, with in-domain improvements reaching up to 30%.
> On reasoning quality metrics, PAD scores highest in intermediate verification (4.07 vs 2.41 baseline) and reasoning coherence (3.96 vs 1.88 baseline).
> The distilled models also transfer: trained on math, they improve on TruthfulQA, with ROUGE-L jumping from 0.613 to 0.657.

Scaling from Qwen3-32B teachers down to Qwen3-0.6B students, the framework holds up. Even sub-billion parameter models absorb meaningful reasoning improvements from multi-agent debate.

Paper: arxiv.org/abs/2602.03955
Learn to build effective AI agents in our academy: academy.dair.ai
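The trajectory-extraction step the thread describes can be sketched in a few lines: run the debate offline, flatten each debate into one (prompt, target) pair whose target preserves the full deliberation, then fine-tune a single student on those pairs. The sketch below stubs the debate instead of calling real LLMs, and the names `run_debate` and `to_sft_record` are hypothetical, not the paper's API.

```python
# Sketch of distilling multi-agent debate into SFT data (AgentArk-style idea).
# The debate is a stub; a real system would query one LLM per agent per round.
from dataclasses import dataclass

@dataclass
class Turn:
    agent: str
    message: str

def run_debate(question: str, n_agents: int = 3, rounds: int = 2) -> list[Turn]:
    """Stand-in for an offline multi-agent debate: each agent speaks once per
    round, so the trajectory has n_agents * rounds turns."""
    turns = []
    for r in range(rounds):
        for a in range(n_agents):
            turns.append(Turn(agent=f"agent_{a}",
                              message=f"round {r}: reasoning about {question!r}"))
    return turns

def to_sft_record(question: str, turns: list[Turn], answer: str) -> dict:
    """Flatten one debate trajectory into a (prompt, target) pair. Keeping the
    whole transcript in the target is the PAD-like choice: the student sees how
    agents deliberated, not just the final answer."""
    transcript = "\n".join(f"[{t.agent}] {t.message}" for t in turns)
    return {"prompt": question, "target": f"{transcript}\nFinal answer: {answer}"}

# One distilled training example; many of these would feed a standard SFT loop.
record = to_sft_record("What is 2+2?", run_debate("What is 2+2?"), "4")
```

The compute trade the thread describes falls out directly: the expensive debate runs once at data-generation time, and inference afterwards is a single forward pass over the student.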
Jindong Wang@jd92wang·
Thrilled to share that I was selected as one of 35 out of 1,728 SPCs to receive the @RealAAAI Outstanding Senior Program Committee Award!
Jindong Wang@jd92wang·
Why trust references generated by AI chatbots over real references retrieved by @Google Scholar?
Sharon Goldman@sharongoldman

NEW: @NeurIPSConf, one of the world’s top academic AI conferences, accepted research papers with 100+ AI-hallucinated citations, a new report claims.

Canadian startup @GPTZeroAI analyzed more than 4,000 research papers accepted and presented at NeurIPS 2025 and says it uncovered hundreds of AI-hallucinated citations that slipped past the three or more reviewers assigned to each submission, spanning at least 53 papers in total. The hallucinations had not previously been reported.

In some cases, an AI model blended or paraphrased elements from multiple real papers, including believable-sounding titles and author lists, the company says. Others appeared to be fully made up: a nonexistent author, a fabricated paper title, a fake journal or conference, or a URL that leads nowhere. In other cases, the model started from a real paper but made subtle changes—expanding an author’s initials into a guessed first name, dropping or adding coauthors, or paraphrasing the title. Some, however, are plainly wrong—citing “John Smith” and “Jane Doe” as authors, for example.

fortune.com/2026/01/21/neu…
