Claudio Michaelis

181 posts

@clmich

building polybot, a flexible robot for a sustainable agriculture future

Tübingen, Germany · Joined April 2010
627 Following · 399 Followers
Claudio Michaelis retweeted
Wieland Brendel @wielandbr
Are you into cutting-edge #HPC/#AI clusters? The Tübingen AI Center, one of Europe's top #ML research institutions, is searching for a senior expert to lead its #AI cluster team (> 1000 GPUs, > 200 users) and design an upcoming large-scale upgrade: cyber-valley.de/en/jobs/head-o…
Claudio Michaelis retweeted
Mackenzie Weygandt Mathis, PhD @TrackingActions
📣Part 1 of our quest to better understand the brain was @DeepLabCut. 🔥🦓Now Part 2: Introducing #CEBRA to jointly model neural dynamics & behavior with self-supervised learning. Hypothesis- or data-driven, highly consistent, decodable neural latents arxiv.org/abs/2204.00673 🧵👇
Lucas Beyer (bl16) @giffmana
@hendrycks @ylecun @imisra_ Amazing, thanks a lot! Even though it's successful, I feel like people still underappreciate BiT's results. That's exactly the first model where we tried cow on beach, and it had no issues with it at all. I actually lost a bet to @__kolesnikov__ on this one 🙃
Lucas Beyer (bl16) @giffmana
I forgot to tag @ylecun and @imisra_. Time to update your favourite example? The majority of our models since ~2019 (ResNets included) actually haven't been confused by cows on beaches, but we never really took the time to write it up.
Quoting Lucas Beyer (bl16) @giffmana:

Want to turn any vision backbone into an image-text model? Want to show the age-old "your model wouldn't recognize a cow on the beach" is a red herring? That's LiT🔥 (Locked-image Tuning), a new alternative to fine-tuning that combines the best of fine-tuning and zero-shot 1/n🧶

Ori Press @ori_press
My friend, who's into reviewing, has a birthday coming up, and I'm not sure what to get him. I'm debating between more experiments, an ablation study, or some theory.
Claudio Michaelis retweeted
Bethge Lab @bethgelab
In 2015, ResNets reportedly surpassed human-level performance on #ImageNet. However, a large generalisation gap remained. Which of today’s exciting directions will close the gap: Vision transformers? CLIP? Self-supervised learning? Bigger datasets? Adversarial training? [1/N]
Claudio Michaelis retweeted
Kai Kupferschmidt @kakape
I'm struck that it still hasn't sunk in how much #b117 may change the course of this pandemic. The initial shock about it being more transmissible seems to have worn off. But we are barely beginning to see its real-world impact. A story: sciencemag.org/news/2021/02/d… And a thread
Claudio Michaelis retweeted
Zemel Group @zemelgroup
Check out our new paper Flexible Few-Shot Learning -- the same object can belong to different classes depending on context. We found unsupervised representations work better than supervised ones. A short version at the NeurIPS meta-learning workshop today at 10 EST. arxiv.org/abs/2012.05895
Claudio Michaelis retweeted
Bradley Love @ProfData
Introducing the embedding space you didn't know you needed: human similarity judgments for the entire ImageNet validation set (50k images). Perfect for evaluating representations, including unsupervised models. It's already bearing fruit, w @BDRoads arxiv.org/abs/2011.11015 (1/3)
Claudio Michaelis retweeted
Nature Machine Intelligence @NatMachIntell
A close look at what happens when deep learning fails reveals an effect of 'shortcut learning'. Recognizing that this may be a common characteristic of learning systems, artificial and biological, may help make deep learning more robust. Read the paper: go.nature.com/3lr0GwU
Claudio Michaelis @clmich
We hope that our findings will also benefit other areas of machine learning, as they suggest that we can achieve human-like generalization capabilities by focusing on broad datasets with diverse categories. [7/7]
Claudio Michaelis @clmich
Results generally look good even in complex or crowded scenes. While some problems like the high false positive rate (right column) remain, we are confident that our insights will make solving these issues easier as well. [6/7]