Abdulkadir Gokce

17 posts

@akgokce0

IC SC @EPFL_en @ICepfl | EE&Math @unibogazici_en

Joined September 2020
78 Following · 52 Followers
Abdulkadir Gokce retweeted
Badr AlKhamissi @bkhmsi
🎉 Re-Align is back for its 4th edition at ICLR 2026! 📣 We invite submissions on representational alignment, spanning ML, Neuroscience, CogSci, and related fields. 📝 Tracks: Short (≤5p), Long (≤10p), Challenge (blog) ⏰ Feb 5, 2026 for papers 🔗 representational-alignment.github.io/2026/
Abdulkadir Gokce retweeted
Hannes Mehrer @HannesMehrer
🧠 New preprint: we show that model-guided microstimulation can steer monkey visual behavior. Paper: arxiv.org/abs/2510.03684 🧵
Abdulkadir Gokce retweeted
Melika Honarmand @melikahnd_1
🦾🧠 New Preprint!! What happens if we induce dyslexia in vision–language models? By ablating VWFA-analogous units, we show that models reproduce selective reading impairments similar to human dyslexia. 📄 doi.org/10.48550/arXiv…
Abdulkadir Gokce retweeted
Yingtian Tang @yingtian_david
🧠 NEW PREPRINT Many-Two-One: Diverse Representations Across Visual Pathways Emerge from A Single Objective biorxiv.org/content/10.110…
Abdulkadir Gokce retweeted
Badr AlKhamissi @bkhmsi
🚨New Preprint!! Thrilled to share with you our latest work: “Mixture of Cognitive Reasoners”, a modular transformer architecture inspired by the brain’s functional networks: language, logic, social reasoning, and world knowledge. 1/ 🧵👇
Abdulkadir Gokce retweeted
Ben Lonnqvist @lonnqvistben
AI vision is insanely good nowadays, but is it really like human vision or something else entirely? In our new preprint, we pinpoint a fundamental visual mechanism that's trivial for humans yet causes most models to fail spectacularly. Let's dive in👇🧠 [arxiv.org/abs/2504.05253]
Abdulkadir Gokce retweeted
Badr AlKhamissi @bkhmsi
🚨 New Preprint!! LLMs trained on next-word prediction (NWP) show high alignment with brain recordings. But what drives this alignment: linguistic structure or world knowledge? And how does this alignment evolve during training? Our new paper explores these questions. 👇🧵
Abdulkadir Gokce retweeted
Badr AlKhamissi @bkhmsi
🚨 New Paper! Can neuroscience localizers uncover brain-like functional specializations in LLMs? 🧠🤖 Yes! We analyzed 18 LLMs and found units mirroring the brain's language, theory of mind, and multiple demand networks! w/ @GretaTuckute, @ABosselut, & @martin_schrimpf 🧵👇
Abdulkadir Gokce @akgokce0
6/7 An interesting disconnect: while models can increasingly match human behavior through scaling, they hit a wall when trying to match how the brain processes information. This suggests fundamental limitations in current AI architectures.