explAInable.NL

251 posts

@ExplainableNL

Tweeting about Explainable AI in the Netherlands. Account run by @wzuidema (https://t.co/TyjUIfRiyb) and others. https://t.co/BMbmDrKBjB

Joined October 2019
220 Following · 424 Followers
explAInable.NL @ExplainableNL
Exciting results from StanfordNLP (with D'Oosterlinck from Gent) on Causal Proxies: using symbolic surrogate models for interpreting deep learning, and testing for causality using counterfactual interventions.
Karel @KarelDoostrlnck

🚨Preprint🚨 Interpretable explanations of NLP models are a prerequisite for numerous goals (e.g. safety, trust). We introduce Causal Proxy Models, which provide rich concept-level explanations and can even entirely replace the models they explain. arxiv.org/abs/2209.14279 1/7

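To make the counterfactual-intervention idea concrete, here is a toy sketch. It is not the Causal Proxy Model itself, and the bag-of-words "model" and its word lists are invented for the example: the causal effect of a concept on a model's output is estimated by comparing the output on an input against the output on a counterfactual version of that input in which only that concept is changed.

```python
# Toy illustration (not the Causal Proxy Model): estimate the causal
# effect of a concept on a model's output via a counterfactual
# intervention. The "model" is a made-up bag-of-words sentiment scorer.

POSITIVE = {"great", "tasty", "friendly"}
NEGATIVE = {"bland", "rude", "slow"}

def model(text: str) -> float:
    """Score = (#positive words - #negative words) / #words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

# Factual input, and a counterfactual where only the food-quality
# concept is intervened on (tasty -> bland); everything else is fixed.
x = "the food was tasty and the staff friendly"
x_cf = "the food was bland and the staff friendly"

# The estimated causal effect of the concept is the output difference.
effect = model(x) - model(x_cf)
print(f"causal effect of food-quality concept: {effect:+.3f}")
```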
explAInable.NL retweeted
Marco Virgolin 🇺🇦 @MarcoVirgolin
Symbolic regression (SR) is the problem of finding an accurate model of the data in the form of a (hopefully elegant) mathematical expression. SR has been thought to be hard and traditionally attempted using evolutionary algorithms. This begs the question: is SR NP-hard? 1/2
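For readers new to the topic, here is a minimal, purely illustrative sketch of what symbolic regression asks for. It brute-forces a tiny space of expression trees rather than using the evolutionary search that is standard in the SR literature, and the data and operator sets are made up:

```python
# Toy symbolic regression: exhaustively search tiny expressions of the
# form f1(x) <op> f2(x) for one that fits the data.
import itertools
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=50)
y = x**2 + np.sin(x)          # hidden ground-truth expression

# Candidate building blocks.
unary = {"sin": np.sin, "cos": np.cos, "sq": np.square, "id": lambda v: v}
binary = {"+": np.add, "*": np.multiply}

best, best_err = None, np.inf
for (n1, f1), (n2, f2), (nb, fb) in itertools.product(
        unary.items(), unary.items(), binary.items()):
    pred = fb(f1(x), f2(x))
    err = np.mean((pred - y) ** 2)
    if err < best_err:
        best, best_err = f"{n1}(x) {nb} {n2}(x)", err

print(f"best expression: {best}  (MSE = {best_err:.4f})")
# Expected to recover something equivalent to sq(x) + sin(x).
```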
explAInable.NL retweeted
Stanford HAI @StanfordHAI
This year's Spring Conference focuses on foundation models, accountable AI, and embodied AI. HAI Associate Director and event co-host @chrmanning explains these key areas and why you should not miss this event: stanford.io/3IxnjdH
explAInable.NL retweeted
Fernando P. Santos @fernandopsantos
Interested in Explainable AI and Finance? Check out this opportunity for a Tenure Track Assistant Professor position at the Informatics Institute, University of Amsterdam! Deadline extended to 3 April 2022.
explAInable.NL retweeted
Marco Virgolin 🇺🇦 @MarcoVirgolin
Also happy that our work "On genetic programming representations and fitness functions for interpretable dimensionality reduction" made it to @GeccoConf! Preprint: arxiv.org/abs/2203.00528 A short explanation 👇 1/8
Marco Virgolin 🇺🇦 @MarcoVirgolin

Happy to share that our work "Evolvability Degeneration in Multi-Objective Genetic Programming for Symbolic Regression" has been accepted at @GeccoConf! 🥳🪅🍾 Preprint: arxiv.org/abs/2202.06983. A high-level🧵 of what's going on here👇 1/8

explAInable.NL retweeted
Nils Trost @TrostNils
I visualized my last #semantle game with a UMAP of the word embeddings. Here's the result: bp.bleb.li/viewer?p=D5d3y Semantle is a word guessing game by @NovalisDMT where your guesses, unlike in #wordle, are ranked by their similarity in meaning, not spelling, to the secret word.
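A minimal sketch of the two ingredients behind this visualization, assuming random vectors as stand-ins for real word embeddings (GloVe or word2vec vectors would be loaded here in practice) and the umap-learn package: guesses are ranked by cosine similarity to the secret word, and the embeddings are projected to 2-D with UMAP for plotting.

```python
import numpy as np
import umap  # pip install umap-learn

rng = np.random.default_rng(42)
words = ["secret"] + [f"guess{i}" for i in range(1, 10)]
emb = {w: rng.normal(size=300) for w in words}  # stand-ins for real embeddings

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantle-style ranking: guesses ordered by similarity in meaning
# (here: cosine similarity of embeddings) to the secret word.
secret = emb["secret"]
for w in sorted(words[1:], key=lambda w: -cosine(emb[w], secret)):
    print(f"{w}: {cosine(emb[w], secret):+.3f}")

# 2-D layout of all the words, as used for the visualization.
X = np.stack([emb[w] for w in words])
coords = umap.UMAP(n_neighbors=5, random_state=42).fit_transform(X)
print(coords.shape)  # (10, 2) -> x/y positions to plot
```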
explAInable.NL @ExplainableNL
And from a few months ago, Ehsan (2021) and team, including @mark_riedl, highlight how the AI background of recipients of explanations influences their interpretations. 4/3 mobile.twitter.com/UpolEhsan/stat…
Upol Ehsan @UpolEhsan

🚨 New pre-print alert! 🚨 Excited to share “The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations” w/ the amazing team: @samirpassi,@QVeraLiao,Larry Chan,Ethan Lee,@michael_muller,@mark_riedl 🔗arxiv.org/abs/2107.13509 💡Findings at a glance... 1/n

explAInable.NL @ExplainableNL
Krishna et al. study what practitioners actually *do* with the output of post-hoc interpretability tools -- and with the strong disagreements between those tools. *tl;dr*: XAI is here to stay, but its methods need to be handled with care! 3/3 twitter.com/hima_lakkaraju…
𝙷𝚒𝚖𝚊 𝙻𝚊𝚔𝚔𝚊𝚛𝚊𝚓𝚞 @hima_lakkaraju

Excited to share our work "The Disagreement Problem in Explainable Machine Learning" arxiv.org/pdf/2202.01602…. We present new results showing that post hoc explanation methods often disagree in practice & practitioners resolve those disagreements using arbitrary heuristics [1/N]

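A minimal sketch in the spirit of the disagreement metrics studied in the paper: top-k feature agreement between two post-hoc attribution vectors, say one from LIME and one from SHAP. The attribution values below are made up; in practice they would come from the explainers themselves.

```python
import numpy as np

def topk_agreement(attr_a, attr_b, k):
    """Fraction of overlap between the k most important features
    (by absolute attribution) of two explanations."""
    top_a = set(np.argsort(-np.abs(attr_a))[:k])
    top_b = set(np.argsort(-np.abs(attr_b))[:k])
    return len(top_a & top_b) / k

# Two hypothetical explanations of the same prediction.
lime_attr = np.array([0.40, -0.10, 0.05, 0.30, -0.25])
shap_attr = np.array([0.05, -0.35, 0.45, 0.10, -0.20])

print(topk_agreement(lime_attr, shap_attr, k=2))  # 0.0 -> strong disagreement
```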
explAInable.NL @ExplainableNL
Interesting recent trend in XAI: serious efforts to study the interaction between explainability methods and the *people* who the explanations are addressed to. Jacovi et al. develop a framework to link interpretability tools to things already known about human cognition. 1/3
Alon Jacovi @alon_jacovi

What does it mean for a user to "understand" the explanation of an AI's decision? New paper 🙌 arxiv.org/abs/2201.11239 Diagnosing AI Explanation Methods with Folk Concepts of Behavior w/ @BastingsJasmijn @sebgehr @yoavgo @fajtak Digest:🧵

explAInable.NL @ExplainableNL
"Transparency and explainability pertain to the technical domain ... leaving the ethics and epistemology of AI largely disconnected. In this talk, Russo will focus on how to remedy this problem and introduce an epistemology for glass box AI that can explicitly incorporate values"
Rineke Verbrugge @RinekeV

Lecture by Federica Russo @federicarusso: Connecting the ethics and epistemology of AI. This Thursday 10 Feb, 12-13 h CET, online. Moderated by Aybüke Özgün. For more information and the way to get access, see: uva.nl/en/shared-cont…

explAInable.NL @ExplainableNL
A team of researchers from Amsterdam and Rome proposes CF-GNNExplainer: an explainability method for the popular Graph Neural Networks. The method iteratively removes edges from the graph, returning the minimal perturbation that leads to a change in prediction.
Ana Lučić @__alucic

Excited that our paper with @maartjeterhoeve, @gtolomei, @mdr and @fabreetseo on counterfactual explanations for GNNs has been accepted to #AISTATS2022!!! Preprint available here: bit.ly/3If7Hfd 🥳

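A simplified sketch of that loop. The actual CF-GNNExplainer optimizes a differentiable perturbation of the adjacency matrix rather than searching greedily, and `prob` and `toy_prob` below are hypothetical stand-ins for a trained GNN's class-probability output:

```python
import numpy as np

def counterfactual_edges(edges, prob, node):
    """Greedy search for a small set of edges whose removal flips the
    model's prediction for `node`. `prob(edges, node)` is assumed to
    return a vector of class probabilities from a trained GNN."""
    original = int(np.argmax(prob(edges, node)))
    remaining, removed = set(edges), []
    while remaining:
        # Remove the single edge that most lowers the probability of
        # the originally predicted class.
        best = min(remaining,
                   key=lambda e: prob(remaining - {e}, node)[original])
        remaining.discard(best)
        removed.append(best)
        if int(np.argmax(prob(remaining, node))) != original:
            return removed  # prediction changed: counterfactual found
    return removed  # no flip found even with all edges removed

# Toy check with a made-up classifier: node 1 is class 1 iff the
# edge (0, 1) is present in the graph.
def toy_prob(edges, node):
    return np.array([0.2, 0.8]) if (0, 1) in edges else np.array([0.9, 0.1])

print(counterfactual_edges({(0, 1), (1, 2), (2, 3)}, toy_prob, node=1))
# -> [(0, 1)]
```

The returned edge list plays the role of the counterfactual explanation: the perturbation that changes the prediction.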
explAInable.NL retweeted
Badr M. Abdullah, PhD 🇾🇪
@gchrupala A paper at ICASSP 2020 proposed probing by "audification" of hidden representations in an ASR model: they train a speech synthesizer on top of the ASR representations. They have a nice video of their work here: youtu.be/6gtn7H-pWr8