Federico Cabitza

4.7K posts

@cabitzaf
An interactionist studying Human-AI teaming in decision making; Assoc. Professor of Human-AI Interaction and DSS @ University of Milano-Bicocca, Italy.

Milan, Lombardy · Joined June 2017
81 Following · 1.4K Followers
Pinned Tweet
Federico Cabitza @cabitzaf ·
@vdignum @EU_Commission However, I'd rather argue for an AI-decentered humanity... linkedin.com/pulse/debunkin…
Federico Cabitza @cabitzaf ·
What would you do with ~800k de-identified newborn screening records + 100 biomarkers (nearly a decade, Lombardy 🇮🇹)? New open-access Scientific Data descriptor + dataset + code. Any idea for early detection & data curation? Paper/dataset: nature.com/articles/s4159…
Federico Cabitza @cabitzaf ·
What is the effect of pre-LLM AI on hard clinical endpoints? In 47 RCTs, CDSS in EHRs show little impact on mortality and modest benefits on morbidity. Will LLMs do better? We will see. doi.org/10.1016/j.ijme… Thanks to IRCCS Galeazzi, @unimib and all co-authors.
Federico Cabitza @cabitzaf ·
@David_Gunkel There are some essential readings that anyone working in AI should know. This vignette is incredibly timely.
Allen Schmaltz @Allen_Schmaltz ·
I would argue that interpretability research has been key to AI Safety/Security research to date. Hard-attention via model decompositions (which can be viewed as a type of mechanistic approach) led to the discovery that neural networks are hidden instance-based metric learners (i.e., the predictions can be meaningfully approximated as a weighting over the representation space of the training set). (Hard-attention can be biased with task-specific inductive biases, which is useful for semi-supervised learning and feature detection, but that the local-level predictions can change while the document-level predictions stay the same reflects the non-identifiability of the parameters, and that an additional mechanism is needed for making sense of the predictions at test-time.) This insight that neural networks can be viewed as hidden instance-based metric learners then led to methods to calibrate the predictive uncertainty. Why is that critical? Because it provides a robust approach for correctness prediction, evaluation, exogenous tool-calling, test-time search, etc....and more generally, constraining the neural network model to perform as it did during held-out evaluation prior to deployment. But I agree with the general view that wider collaboration across the interpretability, uncertainty quantification, and AI Safety/Security communities would be productive.
Christopher Potts@ChrisGPotts

Interpretability research has made only minor contributions to AI safety so far. What can we do to change that? (Clip from a longer talk; YouTube link in the thread):

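The "hidden instance-based metric learner" view above can be illustrated with a minimal sketch: a test-time prediction approximated as a softmax weighting over the training set's representation space, so each prediction decomposes into per-training-instance weights. The function name, toy data, and temperature parameter here are illustrative assumptions, not taken from any cited paper.

```python
import numpy as np

def soft_instance_prediction(z_test, Z_train, y_train, temperature=1.0):
    """Approximate a prediction as a similarity-weighted average of
    training labels in representation space (soft nearest neighbors)."""
    # Squared Euclidean distance from the test representation to each
    # training representation (relies on NumPy broadcasting)
    d2 = np.sum((Z_train - z_test) ** 2, axis=1)
    # Softmax over negative distances: nearer instances get more weight
    logits = -d2 / temperature
    w = np.exp(logits - logits.max())
    w /= w.sum()
    # The weights w are the per-instance attribution of the prediction
    return float(w @ y_train), w

# Two well-separated training "representations" with labels 0 and 1
Z_train = np.array([[0.0, 0.0], [10.0, 10.0]])
y_train = np.array([0.0, 1.0])
pred, w = soft_instance_prediction(np.array([0.1, -0.1]), Z_train, y_train)
```

Because the prediction is an explicit weighting over training instances, the same weights can, in principle, be reused for the downstream uses the tweet mentions, such as calibrating predictive uncertainty or flagging test inputs far from all training representations.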
Federico Cabitza @cabitzaf ·
@elonmusk This contradicts Cipolla's second law of human stupidity: "The probability that a particular person is stupid is independent of any other characteristic of the person himself".
Manuel Gomez-Rodriguez @autreche ·
@emollick This may be because predictions are miscalibrated/non-monotone when conditioning on the radiologist's own assessment, making it difficult for radiologists to incorporate AI predictions? It can be fixed by human-aligned calibration, described here: arxiv.org/abs/2306.00074
Ethan Mollick @emollick ·
In this study, AI was more accurate than two thirds of radiologists, yet when radiologists had AI help their diagnoses did not improve. Why? Humans ignored the AI’s advice when it conflicted with their views. A big barrier to future human-AI collaboration blueprintcdn.com/wp-content/upl…
Federico Cabitza reposted
Tv2000.it @TV2000it ·
🔵 On #Play2000: "Rallentare | Festival dei ritmi sostenibili" 📲 Saturday #4ottobre from 11:00 on Play2000.it 📌 A day of talks, debates, and workshops devoted to reflecting on contemporary rhythms of life and to promoting environmental, social, and human #sostenibilità. All the info 👇 tv2000.it/ufficiostampa/… @giuriggio @UcsiSocial @caritas_milano @Avvenire_Nei @Unicatt @paolo_bovio @cabitzaf @Acli_nazionali
Anja Steil @GundelGedanken ·
This is precisely my biological-educational angle: We cannot wait for an ‘ontological proof.’ Ethical responsibility arises long before we ‘know for sure’, which may never come. Gunkel’s relational approach opens the door to entirely new ways forward.
David J. Gunkel@David_Gunkel

I like this line from @eschwitz's recent blogpost: "We will think we have solved the problem of AI consciousness, even if we have not." This is why I have argued that consciousness is a rather poor metaphysical scaffold for moral and legal status. schwitzsplinters.blogspot.com/2025/09/the-so…

Federico Cabitza reposted
Quercia @danielequercia ·
#ResponsibleAI Talk
"Designing AI for Hybrid Decision-Making or... Why Assessing Appropriate Reliance Matters"
Speaker: @cabitzaf (Bicocca)
Date: September 22, 2025, 3:30pm (London)
To join online, send an empty email with subject "Subscribe RAI" to daniele.quercia ;)
Federico Cabitza reposted
Leslye Dias @LeslyeDias ·
🚨 Excited to share our article in Artificial Intelligence Review with Chiara Natali, Luca Marconi & @cabitzaf : “AI-induced Deskilling in Medicine: A Mixed-Method Review and Research Agenda for Healthcare and Beyond” 🔗 doi.org/10.1007/s10462… #AIEthics #AI #Healthcare
Federico Cabitza @cabitzaf ·
@David_Gunkel @mitpress This is just the beginning of the trajectory. As I see it, pedestrian status gives robots no real individual rights, mainly obligations for robot suppliers. But yes, cars now must stop for them at crosswalks, otherwise liability shifts to the driver. New suppliers' rights, then.
David J. Gunkel @David_Gunkel ·
Since the publication of the 2018 book "Robot Rights" @mitpress, this is precisely what I have argued would be the actual trajectory leading to robot rights. AI welfare is just getting off the ground, but robot rights are already a legal reality. intimedia.id/read/south-kor…
Eugenio Santoro @eugeniosantoro ·
Will general practitioners and family pediatricians really take on writing the summary health profile of their patients? Will these profiles be updated periodically? And will emergency-room doctors trust this information? doctor33.it/articolo/65303…