Explainable AI

1.2K posts


@XAI_Research

Moved to 🦋! Explainable/Interpretable AI researchers and enthusiasts - DM to join the XAI Slack! Twitter and Slack maintained by @NickKroeger1

Joined March 2022
772 Following · 2.2K Followers
Pinned Tweet
Explainable AI @XAI_Research
There's a new XAI Slack! Connect with XAI/IML researchers and enthusiasts from around the world. Discuss interpretability methods, get help on challenging problems, and meet experts in your field! DM to join 🥳
Explainable AI retweeted
Suraj Srinivas @Suuraj
Our Theory of Interpretable AI (tverven.github.io/tiai-seminar/) will soon celebrate its one-year anniversary! 🥳 As we step into our second year, we’d love to hear from you! What papers would you like to see discussed in our seminar in the future? 📚🔍 @tverven @ML_Theorist
Explainable AI retweeted
Archiki Prasad @ArchikiPrasad
🚨 Excited to share: "Learning to Generate Unit Tests for Automated Debugging" 🚨 which introduces ✨UTGen and UTDebug✨ for teaching LLMs to generate unit tests (UTs) and to debug code from generated tests. UTGen+UTDebug improve LLM-based code debugging by addressing 3 key questions:
1⃣ What are desirable properties of unit test generators? (A: high output accuracy and rate of uncovering errors)
2⃣ How good are models at 0-shot unit test generation? (A: they are not great) ... so how do we improve LLMs' UT generation abilities? (A: bootstrapping from code-generation data via UTGen)
3⃣ How can we use potentially noisy feedback from generated tests for debugging? (A: via test-time scaling and validation + backtracking in UTDebug)
🧵👇
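The validation + backtracking idea described in the thread can be sketched roughly as follows. This is a minimal toy illustration, not the paper's actual method: the LLM-generated code candidates and unit tests are replaced by hand-written stand-ins, and the selection rule (keep an edit only if it passes more generated tests) is my paraphrase of "validation + backtracking".

```python
def run_tests(code_fn, tests):
    """Count how many (input, expected_output) unit tests pass."""
    passed = 0
    for inp, expected in tests:
        try:
            if code_fn(inp) == expected:
                passed += 1
        except Exception:
            pass  # a crash counts as a failed test
    return passed

def debug_with_backtracking(candidates, tests):
    """Accept a candidate edit only if it passes strictly more
    generated tests than the current best; otherwise backtrack."""
    best = candidates[0]
    best_score = run_tests(best, tests)
    for cand in candidates[1:]:
        score = run_tests(cand, tests)
        if score > best_score:      # keep the edit
            best, best_score = cand, score
        # else: backtrack, i.e. discard the edit
    return best, best_score

# Toy example: a buggy abs(), a bad "fix", and a correct fix.
buggy   = lambda x: x                      # wrong for negatives
worse   = lambda x: -x                     # wrong for positives
correct = lambda x: x if x >= 0 else -x
tests = [(3, 3), (-2, 2), (0, 0)]          # toy generated unit tests

best, score = debug_with_backtracking([buggy, worse, correct], tests)
print(score)  # prints 3
```

With noisy LLM-generated tests, the same loop still works as a filter: an edit that fails previously-passing tests is rejected rather than blindly accepted.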
Explainable AI @XAI_Research
Reminder we have moved to 🦋 Stay up to date with the latest XAI research!
Explainable AI retweeted
Chirag Agarwal @_cagarwal
Exciting opportunity at the intersection of climate science and XAI to work on groundbreaking research in attributing extreme precipitation events with multimodal models. Check out the details and help spread the word! #ClimateAI #Postdoc #UVA #Hiring Job description: shorturl.at/uP5fq Tagging @XAI_Research @trustworthy_ml @uvadatascience @ClimateChangeAI to spread the word!
Antonios Mamalakis @AntoniosMamala2

Dear Climate and AI community! We are hiring 😀 a postdoc to join @UVAEnvironment at @UVA and work with @_cagarwal and myself, on using multimodal AI models and explainable AI to attribute extreme precipitation events! Fascinating stuff! Link below. Please RT! jobs.virginia.edu/us/en/job/R006…

Explainable AI retweeted
Rudy Gilman @rgilman33
The later features in DINO-v2 are more abstract and semantically meaningful than I'd expected from the training objectives. This neuron responds only to hugs. Nothing else, just hugs.
Explainable AI retweeted
Apart Research @apartresearch
This week's Apart News brings you an *exclusive* interview with interpretability insider @myra_deng of @GoodfireAI & revisits our Sparse Autoencoders Hackathon which featured a memorable talk from @GoogleDeepMind's @NeelNanda5.
Explainable AI retweeted
Giang Nguyen @giangnguyen2412
@dylanjsam Hi Dylan, it reminds me of our paper where we also train a model (model 2) on the output of another black-box model (model 1). Ultimately, we find that combining the outputs of model 2 and model 1 improves performance significantly. openreview.net/forum?id=OcFjq…
Explainable AI retweeted
Rohan Paul @rohanpaul_ai
LLMs are all circuits and patterns. Nice paper for a long weekend read: "A Primer on the Inner Workings of Transformer-based Language Models"

📌 Provides a concise intro focusing on the generative decoder-only architecture.

📌 Introduces the Transformer layer components, including the attention block (QK and OV circuits) and the feedforward network block, and explains the residual-stream perspective. It then categorizes LM interpretability approaches along two dimensions: localizing the inputs or model components responsible for a prediction (behavior localization) and decoding the information stored in learned representations to understand how it is used across network components (information decoding).

📌 For behavior localization, the paper covers input-attribution methods (gradient-based, perturbation-based, context mixing) and model-component importance techniques (logit attribution, causal interventions, circuits analysis). Causal interventions patch activations during the forward pass to estimate a component's influence, while circuits analysis aims to reverse-engineer neural networks into human-understandable algorithms by uncovering subsets of model components that interact to solve a task.

📌 Information-decoding methods aim to understand what features are represented in the network. Probing trains supervised models to predict input properties from representations, while the linear representation hypothesis states that features are encoded as linear subspaces. Sparse autoencoders (SAEs) can disentangle superimposed features by learning overcomplete feature bases. Decoding in vocabulary space projects intermediate representations and model weights through the unembedding matrix.

📌 Finally, it summarizes discovered inner behaviors in Transformers, including interpretable attention patterns (positional, subword-joiner, syntactic heads) and circuits (copying, induction, copy suppression, successor heads), neuron input/output behaviors (concept-specific, language-specific neurons), and high-level structure mirroring sensory/motor neurons. Emergent multi-component behaviors are exemplified by the IOI task circuit in GPT-2 Small. Insights on factuality and hallucinations highlight the competition between grounded and memorized recall mechanisms.
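The "decoding in vocabulary space" idea the thread mentions (often called the logit lens) can be sketched in a few lines. All shapes and values below are toy stand-ins, not any real model's weights: an intermediate residual-stream vector is projected through the unembedding matrix to read off vocabulary logits.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 8, 5                     # toy dimensions
W_U = rng.normal(size=(d_model, vocab))   # unembedding matrix

# An intermediate residual-stream vector at some layer (toy value).
hidden = rng.normal(size=(d_model,))

# Decode in vocabulary space: project through the unembedding matrix.
logits = hidden @ W_U
top_token = int(np.argmax(logits))        # most-promoted token so far
print(logits.shape, top_token)
```

Applied layer by layer in a real Transformer, this shows how the model's token prediction gradually forms along the residual stream.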
Explainable AI retweeted
Michal Moshkovitz @ML_Theorist
This Thursday (in 3 days), @YishayMansour will discuss interpretable approximations — learning with interpretable models. Is it the same as regular learning? Attend the lecture to find out! 💻 Website: tverven.github.io/tiai-seminar/ @Suuraj @tverven
Michal Moshkovitz @ML_Theorist

The theory of interpretable AI seminar is back after the holiday season! 🎅🤶 Our next talk is next Thursday by Yishay Mansour who will talk about interpretable approximations 💻 Website: tverven.github.io/tiai-seminar/ ⏰Date: 16 Jan @Suuraj @tverven @YishayMansour

Explainable AI retweeted
Goodfire @GoodfireAI
We're open-sourcing Sparse Autoencoders (SAEs) for Llama 3.3 70B and Llama 3.1 8B! These are, to the best of our knowledge, the first open-source SAEs for models at this scale and capability level.
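For readers new to SAEs, the forward pass of the kind of model being open-sourced here is simple to sketch. This is a minimal toy illustration with random weights and made-up sizes, not Goodfire's actual checkpoints: a ReLU encoder maps a model activation into an overcomplete (and hopefully sparse) feature basis, and a linear decoder reconstructs the activation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_feats = 16, 64                  # overcomplete feature basis

W_enc = rng.normal(size=(d_model, d_feats)) * 0.1
b_enc = np.zeros(d_feats)
W_dec = rng.normal(size=(d_feats, d_model)) * 0.1
b_dec = np.zeros(d_model)

x = rng.normal(size=(d_model,))            # a model activation (toy)
feats = np.maximum(x @ W_enc + b_enc, 0.0) # ReLU encoder -> feature activations
x_hat = feats @ W_dec + b_dec              # linear decoder -> reconstruction
recon_err = float(np.mean((x - x_hat) ** 2))
print(feats.shape, int((feats > 0).sum()))
```

Training minimizes reconstruction error plus a sparsity penalty on `feats`, so each learned feature tends to fire for one interpretable concept.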
Explainable AI retweeted
Samuel Marks @saprmarks
What can AI researchers do *today* that AI developers will find useful for ensuring the safety of future advanced AI systems? To ring in the new year, the Anthropic Alignment Science team is sharing some thoughts on research directions we think are important.
Explainable AI retweeted
Ercong Nie @NielKlug
ACL Time @ Bangkok 🇹🇭 Our GNNavi work will be presented in the poster session at 12:30 on Aug. 14 (Wed.). Feel free to drop by and chat with us! Looking forward to talking with people, especially those interested in multilingual, low-resource, and LLM interpretability research 🤗