Konstantin Kobs
@konstantinkobs
Data Scientist
Germany · Joined January 2013
562 Following · 98 Followers
324 posts
Konstantin Kobs retweeted
jan @janpf95
All good things come in threes: We will present our last paper at #NAACL2024, "The Roman Empire Strikes Back", at the @SemEvalWorkshop poster session at 14:00. Meet us there! It is joint work with @konstantinkobs on hallucination detection in text generation models.
[image]
Konstantin Kobs retweeted
Jasper Polak @polak_jasper
Dunno who made this but it's genius. Why change fails:
[image]
Konstantin Kobs retweeted
Happy PhD Supervisor @HaPhDsupervisor
A PhD student who recently finished her dissertation gave me the book "Big Panda and Tiny Dragon" by James Norbury. This page resonated so much. Does it also apply to your academic journey, or would you disagree? #AcademicTwitter #PhDlife #PhDchat
[image]
Konstantin Kobs retweeted
BibSonomy @BibSonomyCrew
We're pleased to announce the release of our #ChatGPT plugin 🎉 Designed for researchers familiar with BibSonomy.org and those looking to start, this tool grants access to a database of 200M+ scientific publications! (🧵👇🏻 1/4)
[image]
Sagar Vaze @Sagar_Vaze
We'll present GeneCIS at #CVPR2023 (Highlight). TL;DR: While most image representations are *fixed*, we present a general way to train and evaluate models that can adapt to different *conditions* on the fly. Code: github.com/facebookresear… Project page: sgvaze.github.io/genecis/ 🧵
Quoting AK @_akhaliq:
GeneCIS: A Benchmark for General Conditional Image Similarity. Paper page: huggingface.co/papers/2306.07… We argue that there are many notions of 'similarity' and that models, like humans, should be able to adapt to these dynamically. This contrasts with most representation learning methods, supervised or self-supervised, which learn a fixed embedding function and hence implicitly assume a single notion of similarity. For instance, models trained on ImageNet are biased towards object categories, while a user might prefer the model to focus on colors, textures, or specific elements in the scene. In this paper, we propose the GeneCIS ('genesis') benchmark, which measures models' ability to adapt to a range of similarity conditions. Extending prior work, our benchmark is designed for zero-shot evaluation only, and hence considers an open set of similarity conditions. We find that baselines from powerful CLIP models struggle on GeneCIS and that performance on the benchmark is only weakly correlated with ImageNet accuracy, suggesting that simply scaling existing methods is not fruitful. We further propose a simple, scalable solution based on automatically mining information from existing image-caption datasets. We find our method offers a substantial boost over the baselines on GeneCIS, and further improves zero-shot performance on related image retrieval benchmarks. In fact, though evaluated zero-shot, our model surpasses state-of-the-art supervised models on MIT-States.
Konstantin Kobs @konstantinkobs
@opensourcesblog @OpenAI I think this will get solved when it is hooked up to a calculator and a web browser. I think it will be much more powerful then.
OpenSourcES @opensourcesblog
One of the reasons why I don't like the @OpenAI chatbot is that it's so often wrong yet confident about it: I don't mind whether it can solve a problem or not, but it should say so when it can't.
[image]
Konstantin Kobs @konstantinkobs
@MushtaqBilalPhD We also developed wheretosubmit.ml to recommend ML conferences and journals based on title and abstract. It also highlights words and phrases that were important for the recommendation.
Mushtaq Bilal, PhD @MushtaqBilalPhD
A question that every academic asks: which journal should I submit my article to? To help academics decide, Taylor & Francis has developed an amazing tool called "Journal Suggester." Here's how to use it 👇
Konstantin Kobs retweeted
Data Science - Professor X @datascience_jmu
Our paper "InDiReCT: Language-Guided Zero-Shot Deep Metric Learning for Images" has been accepted at WACV 2023 ift.tt/AMiLPrb In this paper by K. Kobs, M. Steininger, and A. Hotho, we use language to guide an image embedding process such that the resulting embedding s…
Konstantin Kobs retweeted
Data Science - Professor X @datascience_jmu
Our paper "On Background Bias in Deep Metric Learning" has been accepted to ICMV 2022 ift.tt/rdAajkX In this paper, we investigate whether Deep Metric Learning models are prone to background bias and test a method to alleviate such bias.
Daniel Zuegner @danielzuegner.bsky.social
I've successfully defended my PhD thesis this week. I'm incredibly grateful to have had the best supervisor I could have asked for in @guennemann, combined with wonderful collaborators at TUM & elsewhere, and the unwavering support of my friends, family, and my partner. Thank you!
Konstantin Kobs @konstantinkobs
I am so excited to announce that our paper "Do Different Deep Metric Learning Losses Lead to Similar Learned Features?" got accepted at @ICCV_2021! 😍 @datascience_jmu
Konstantin Kobs retweeted
Khoa Vu @KhoaVuUmn
PhD student: There's a word limit and I don't know how to cut this paper any shorte... Senior coauthor:
Konstantin Kobs retweeted
amit @gravicle
Design for humans vs. Synergy for middle managers
[image]
Konstantin Kobs @konstantinkobs
@annargrs @emnlp2020 We haven't gotten any message even though our paper was accepted for Findings. Were they really sent out? Neither the official @emnlp2020 account nor their website mentions this.