Haon Park
@redteamhacker
Cofounder @AIM Intelligence. AI Safety & Security red-team hacker.
I am happy to launch SAAR (Safety & Alignment Research, India) as a space for Indian researchers and engineers to collaborate on serious technical research in AI safety and mechanistic interpretability. The reality is that in programs like SPAR, MATS, and other alignment-focused initiatives, Indian participation in technical research remains minimal. We have talented people doing this work, but they're scattered. We need a proper research community, and I hope to build one with SAAR. Whether you're exploring interpretability, working on red-teaming, or thinking through alignment problems, SAAR is where we can collaborate and connect. We can start with weekly paper reading and hopefully work together toward something substantial for the upcoming 2026 AI Safety Workshops. If you're based in India, actively working in AI safety or interpretability, and want to be part of building something serious here, join via the Discord link in the comments.