

LASR Labs
@LASRlabs
London AI Safety Research (LASR) Labs is an AI safety research programme focused on reducing the risk of loss of control to advanced AI.


Details can be found here: lesswrong.com/posts/zXugCu8A… LASR Labs' website (and application) at: lasrlabs.org

Announcing the Winter 2026 cohort: applications due October 10th. The LASR Labs AI safety research programme is a 3-month paid opportunity to write a paper in a small team with expert supervision. Past work has been published at workshops and conferences, and previous alumni are now at Apollo & UKAISI.

DEADLINE ANNOUNCED: Apply by April 26th to the LASR Labs AI safety research programme. This is a 3-month paid opportunity to write a paper in a small team with expert supervision. Past work has been published at workshops and conferences, and previous alumni are now at OpenAI & UKAISI.

1/7 Excited to share our recent project from LASR Labs! We investigated the utility of SAE latents in language models. #MechanisticInterpretability #SAE Here's what we discovered: 🧠🔍

