Brett David Roads
@BDRoads
he/him, Terran, Postdoctoral Research Associate
Denver, CO · Joined January 2019
24 Following · 90 Followers
13 posts

Pinned Tweet
Brett David Roads @BDRoads ·
New paper w @ProfData, "Learning as the Unsupervised Alignment of Conceptual Systems" is now publicly available as a view-only version via the following SharedIt link: rdcu.be/b0w0k.
Brett David Roads retweeted
George Monbiot @GeorgeMonbiot ·
I tried to explain it in 2014. Despite its many and massive failures since then, it is now more powerful than ever before. theguardian.com/books/2016/apr…
Brett David Roads retweeted
Bradley Love @ProfData ·
We (@kaarinaho @BDRoads) offer a new viewpoint on learning. Rather than master (x, y) pairs (e.g., stimulus, category), we propose entire systems are mapped back-and-forth. E.g., from X (e.g., images) to Y (e.g., words) and from Y to X. People do this! 1/3 sciencedirect.com/science/articl…
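The back-and-forth mapping idea can be illustrated with a toy sketch (a minimal numpy illustration of the general intuition, not the paper's actual algorithm): two embedding spaces that share similarity structure can be aligned without any supervised (x, y) pairs, here by matching permutation-invariant similarity profiles.

```python
# Toy sketch: two "conceptual systems" are embedding matrices over the same
# items; the item correspondence is recovered with no (x, y) training pairs
# by comparing each item's sorted similarity profile across the two systems.
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 8, 5
X = rng.normal(size=(n_items, dim))            # system X (e.g., image embeddings)

# System Y: the same items under an unknown rotation and an unknown shuffle.
rotation, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
perm = rng.permutation(n_items)
Y = (X @ rotation)[perm]                       # system Y (e.g., word embeddings)

def sim(Z):
    # cosine similarity matrix of a system's items
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return Zn @ Zn.T

Sx, Sy = sim(X), sim(Y)

# Sorted similarity rows are invariant to the hidden shuffle, so they act as
# a per-item signature that can be matched across systems.
sig_x = np.sort(Sx, axis=1)
sig_y = np.sort(Sy, axis=1)
pred = np.array([np.argmin(np.sum((sig_y - s) ** 2, axis=1)) for s in sig_x])
# pred[i] is the Y item matched to X item i; in this noiseless toy the
# recovered correspondence inverts the hidden permutation.
```

The rotation leaves all pairwise similarities intact, so the shared similarity structure alone pins down the mapping in both directions; real systems only approximately share structure, which is where the interesting modeling happens.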
Brett David Roads retweeted
Freddie Bickford Smith @fbickfordsmith ·
New paper with @bdroads, @ken_lxl & @profdata. How does top-down attention help in vision? Contrasting with standard accounts that point to stimulus variables like clutter, we find that system variables capturing model-data-task interaction are key. [1/7] arxiv.org/abs/2106.11339
Brett David Roads retweeted
Bradley Love @ProfData ·
Our paper "The Costs and Benefits of Goal-Directed Attention in Deep Convolutional Neural Networks" is now out in Computational Brain & Behavior. The quote tweet below briefly walks through the preprint. The published version includes an added bonus, link.springer.com/article/10.100… (1/2)
Bradley Love@ProfData

People use top-down attention, such as when searching for their keys. One idea is that prefrontal cortex helps reconfigure our visual system to be more sensitive to features relevant to our goals. This has costs and benefits. 2/n

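One common way to model the reconfiguration the quoted tweet describes is a channel-wise gain on intermediate feature maps. The numpy sketch below assumes that form (the paper's actual implementation may differ): goal-relevant channels are amplified, the rest suppressed, and both the benefit and the cost fall directly out of the gains.

```python
# Minimal sketch of goal-directed attention as a channel-wise gain applied
# to a CNN's intermediate activations (an assumed, illustrative mechanism).
import numpy as np

def apply_attention(feature_maps, gains):
    """feature_maps: (channels, H, W); gains: (channels,) goal-dependent."""
    return feature_maps * gains[:, None, None]

rng = np.random.default_rng(1)
fmap = rng.random((4, 3, 3))             # activations from a middle layer

# Hypothetical goal: channel 2 carries the goal-relevant feature.
gains = np.array([0.5, 0.5, 2.0, 0.5])   # amplify one channel, damp the rest
out = apply_attention(fmap, gains)

# Benefit: responses to the goal-relevant feature are boosted.
# Cost: sensitivity to everything else is reduced, hurting other tasks.
```

The same weights serve every goal; only the cheap gain vector changes, which is what makes this a reconfiguration of the visual system rather than relearning.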
Brett David Roads retweeted
Bradley Love @ProfData ·
New preprint, "A Too-Good-to-be-True Prior to Reduce Shortcut Reliance". If it's too good to be true, it probably is and that holds for deep learning as well. To generalize broadly, models need to learn invariants but instead are fooled by shortcuts. arxiv.org/abs/2102.06406 (1/4)
Brett David Roads retweeted
Bradley Love @ProfData ·
Our (@BDRoads) TiCS spotlight "Similarity as a Window on the Dimensions of Object Representation" discusses exciting work led by @martin_hebart on inferring semantic representations from human similarity ratings. (1/2) cell.com/trends/cogniti…
Brett David Roads retweeted
Bradley Love @ProfData ·
Introducing the embedding space you didn't know you needed: Human similarity judgments for the entire ImageNet (50k images) validation set. Perfect for evaluating representations, including unsupervised models. It's already bearing fruit, w @BDRoads arxiv.org/abs/2011.11015 (1/3)
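A hedged sketch of the evaluation the tweet describes (the tiny toy data and names below are illustrative, not the released dataset): a model's representation is scored by how well its pairwise similarities rank-correlate with human similarity judgments over the same images.

```python
# Evaluate a model representation against human similarity judgments by
# Spearman-correlating the two pairwise-similarity matrices (toy data).
import numpy as np

def pairwise_cosine(Z):
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return Zn @ Zn.T

def spearman(a, b):
    # Spearman rank correlation without scipy: Pearson on the ranks.
    # (Ties are broken arbitrarily, fine for continuous toy data.)
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(2)
n_images = 6
human_sim = pairwise_cosine(rng.normal(size=(n_images, 4)))  # stand-in for judgments
model_emb = rng.normal(size=(n_images, 8))                   # model representation

iu = np.triu_indices(n_images, k=1)      # unique image pairs only
score = spearman(human_sim[iu], pairwise_cosine(model_emb)[iu])
# higher score -> model similarity structure closer to human judgments
```

Because the score depends only on similarity rankings, it applies equally to supervised and unsupervised models, which is the evaluation use case the tweet highlights.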
Brett David Roads retweeted
Bradley Love @ProfData ·
New blog, "A neuroscience-inspired approach to transfer learning" w @ken_lxl @BDRoads. We add goal-directed attention to the middle of a deep convolutional network and find it better adapts to new tasks than retraining the top layer as is standard in ML. bradlove.org/blog/attention
Brett David Roads retweeted
Bradley Love @ProfData ·
New paper w @BDRoads, "Learning as the Unsupervised Alignment of Conceptual Systems". Supervised learning tasks can be solved by purely unsupervised means by exploiting correspondences across systems (e.g., text, images, etc.). 1/5 nature.com/articles/s4225…