Geewook Kim

48 posts

@GeewookKim

Applied research scientist at NAVER Cloud AI / Ph.D. student at KAIST AI / Previously at Kyoto University / Homepage: https://t.co/9Ncn1jKHZi

Joined December 2021
153 Following · 239 Followers
Pinned Tweet
Geewook Kim @GeewookKim
Donut 🍩 (OCR-free Document Understanding Transformer, #ECCV2022 @eccvconf) is now available @huggingface 🤗 Check it out at huggingface.co/docs/transform… with @Gradio demos from @NielsRogge
Classification: huggingface.co/spaces/nielsr/…
Parsing: huggingface.co/spaces/nielsr/…
VQA: huggingface.co/spaces/nielsr/…
Niels Rogge @NielsRogge

#LayoutLM gets a strong competitor: Donut 🍩, now available @huggingface! The model uses Swin as the encoder and BART as the decoder to autoregressively generate classes/parses/answers for documents! 🔥 No OCR required, MIT licensed, end-to-end. Attention is all you need. (1/2)

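For readers who want to try it, here is a minimal document-VQA sketch with Donut's published DocVQA checkpoint, following the Transformers documentation; the image path and the question are placeholders:

```python
import re
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# DocVQA fine-tune of Donut published on the Hugging Face Hub.
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

image = Image.open("document.png").convert("RGB")  # placeholder path
pixel_values = processor(image, return_tensors="pt").pixel_values

# Donut is OCR-free: the decoder autoregressively generates the answer,
# steered by a task-specific prompt.
question = "What is the total amount?"  # placeholder question
prompt = f"<s_docvqa><s_question>{question}</s_question><s_answer>"
decoder_input_ids = processor.tokenizer(
    prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=model.decoder.config.max_position_embeddings,
    )

# Strip special tokens and the task start token, then parse to JSON.
seq = processor.batch_decode(outputs)[0]
seq = seq.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
seq = re.sub(r"<.*?>", "", seq, count=1).strip()
print(processor.token2json(seq))
```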
Geewook Kim @GeewookKim
Presenting MambaMia at AAAI 2026 today (Oral, 11 AM, Garnet 214)! While working on VLMs, I found that hour-long videos produce far too many tokens for practical deployment, which led me to explore hierarchical compression with state-space models. Glad to share it with the community! github.com/naver-ai/mamba…
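To see the scale of the problem the tweet alludes to, a quick back-of-the-envelope count; the sampling rate and tokens-per-frame below are illustrative assumptions, not the paper's settings:

```python
# Rough visual-token budget for an hour-long video under assumed settings.
fps_sampled = 1          # frames sampled per second (assumption)
duration_s = 3600        # one hour of video
tokens_per_frame = 576   # e.g. a 24x24 patch grid from a ViT-style encoder (assumption)

total_tokens = fps_sampled * duration_s * tokens_per_frame
print(f"{total_tokens:,} visual tokens before any compression")  # 2,073,600
```

Even at one frame per second, the raw token count dwarfs typical LLM context windows, which is what motivates compressing the sequence before it reaches the language model.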
Geewook Kim retweeted
hyunji amy lee @hyunji_amy_lee
🚨 Want models to better utilize and stay grounded in the provided knowledge? We introduce Context-INformed Grounding Supervision (CINGS)! Training LLMs with CINGS significantly boosts grounding abilities in both text and vision-language models compared to standard instruction tuning.
[image]
Geewook Kim @GeewookKim
Presenting our poster at #ICLR2025 today (Fri, Apr 25, 15:00) — Hall 3 + Hall 2B #264! We explored safety issues when extending LLMs to vision and how to address them. Come by and let’s chat—always happy to discuss ideas! 🤗
[image]
Niels Rogge @NielsRogge
Thank you all for the 10k GitHub stars!! This project started as just a repo to share Jupyter notebooks on how to use and fine-tune models from the @huggingface Transformers library. Now it contains 150+ notebooks covering more than 70 models! github.com/NielsRogge/Tra…
Geewook Kim @GeewookKim
I'm delighted to share that our latest research has been accepted!
1. At #NAACL2025, we'll present "Evaluating Multimodal Generative AI with Korean Educational Standards," a step forward in aligning AI with rigorous Korean educational tests.
2. For #ICLR2025, our paper "How Does Vision-Language Adaptation Impact the Safety of Vision Language Models?" provides insights into ensuring LVLM safety without compromising helpfulness. Read more here: arxiv.org/abs/2410.07571
Excited to contribute to the field and share our work with the community! ❤️ Both projects will be released as open source.
Geewook Kim retweeted
Seongyun Lee @sylee_ai
🎉 Excited to share that our paper "How Does Vision-Language Adaptation Impact the Safety of Vision Language Models?" has been accepted to #ICLR2025!
🖼 Vision-language adaptation empowers LLMs to process visual information, but how does it impact their safety?
🛡 And what about safety tuning? How does it influence the model's helpfulness?
✨ We provide insights into these questions through extensive experiments and propose a simple training-free method that maintains both the safety and helpfulness of LVLMs.
[image]
Geewook Kim @GeewookKim
I’m pleased to present my recent work at #EMNLP2024 today! Join me at In-Person Poster Session G (Jasmine) on 14 Nov 2024 from 2:00 PM! I’m also happy to share that our project is now open-source 🤗: github.com/naver-ai/elva @emnlpmeeting #emnlp
[image]
Geewook Kim @GeewookKim

Happy to share that our new work on designing Efficient LVLMs for Reading and Reasoning has been accepted at #EMNLP2024 Main Conference! arxiv.org/abs/2406.11823 We've studied efficient designs to reduce the resource costs in current VLMs. So happy to contribute to the field! ❤️

Geewook Kim @GeewookKim
Happy to share that our new work on designing Efficient LVLMs for Reading and Reasoning has been accepted at #EMNLP2024 Main Conference! arxiv.org/abs/2406.11823 We've studied efficient designs to reduce the resource costs in current VLMs. So happy to contribute to the field! ❤️
Geewook Kim @GeewookKim
September 2024: My citations have reached 1,000 on Google Scholar 🎉 This milestone reminds me of all the collective efforts and small steps taken over time. I’m deeply grateful to my colleagues and mentors for their support and guidance along the way 🥰 scholar.google.com/citations?user…
Geewook Kim retweeted
Kyunghyun Cho @kchonyc
enjoying #ICML2024 ? already finished with llama-3.1 tech report? if so, you must be concerned about the emptiness you'll feel on your flight back home in a couple of days. do not worry! Wanmo and i have a new textbook on linear algebra for you to read, enjoy and cry on your long flight. (1/5)
[image]
Geewook Kim retweeted
Seongyun Lee @sylee_ai
I’m thrilled to announce that Prometheus-Vision has been accepted to the ACL 2024 Findings! A huge thanks to all co-authors! See you in Bangkok 🇹🇭!
Seungone Kim @seungonekim

🤔How could you evaluate whether your Vision Language Model (VLM) is closely reaching the capabilities of GPT-4V? We’re excited to present 🔥Prometheus-Vision, the first open-source VLM specialized for evaluating other VLMs based on fine-grained scoring criteria, with co-lead @sylee_ai ! This is an exciting follow-up work of Prometheus, extending it to the multi-modal space.

Geewook Kim retweeted
Sungdong Kim @SungdongKim4
🤔 Do we always need human preference data for effective LLM alignment after the SFT stage? Our answer is NO 🙅‍♂️ We present a ✨preference-free alignment approach✨ that leverages an off-the-shelf retriever with effective regularizer functions: Regularized Relevance Reward (R^3). [1/n]
[image]
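As a rough illustration of the idea, a relevance reward can be scored with an off-the-shelf dense retriever and damped by a simple regularizer. This is a sketch only: the retriever checkpoint and the length penalty below are my assumptions, not the paper's exact components.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed off-the-shelf retriever; any dense encoder would do for the sketch.
retriever = SentenceTransformer("all-MiniLM-L6-v2")

def relevance_reward(prompt: str, response: str, max_words: int = 200) -> float:
    # Base reward: retriever similarity between the prompt and the response.
    q, r = retriever.encode([prompt, response], convert_to_tensor=True)
    relevance = util.cos_sim(q, r).item()
    # Regularizer (illustrative): penalize degenerate, overly long responses.
    length_penalty = max(0.0, len(response.split()) - max_words) / max_words
    return relevance - 0.1 * length_penalty

print(relevance_reward("What causes tides?",
                       "Tides are caused mainly by the Moon's gravity."))
```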
Geewook Kim retweeted
Yossi Gandelsman @YGandelsman
Accepted as an oral at #ICLR2024! *Interpreting CLIP's Image Representation via Text-Based Decomposition* CLIP produces image representations that are useful for various downstream tasks. But what information is actually encoded in these representations? [1/8]
[image]
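The basic operation behind this kind of analysis, projecting an image representation onto text directions in CLIP's joint space, can be sketched as follows. Note the paper decomposes the representation per attention head; this minimal version skips that, and the image path and text probes are placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder
texts = ["a photo of a dog", "a photo at the beach", "a blurry photo"]  # probe directions

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    img = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt = model.get_text_features(input_ids=inputs["input_ids"],
                                  attention_mask=inputs["attention_mask"])

# Normalize, then project the image representation onto each text direction.
img = img / img.norm(dim=-1, keepdim=True)
txt = txt / txt.norm(dim=-1, keepdim=True)
for t, score in zip(texts, (img @ txt.T).squeeze(0).tolist()):
    print(f"{score:+.3f}  {t}")
```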
Geewook Kim retweeted
AK @_akhaliq
Apple presents AIM: Scalable Pre-training of Large Autoregressive Image Models. Paper page: huggingface.co/papers/2401.08…

The paper introduces AIM, a collection of vision models pre-trained with an autoregressive objective. These models are inspired by their textual counterparts, i.e., Large Language Models (LLMs), and exhibit similar scaling properties. Two key findings: (1) the performance of the visual features scales with both model capacity and the quantity of data; (2) the value of the objective function correlates with the model's performance on downstream tasks. Illustrating the practical implications, a 7-billion-parameter AIM pre-trained on 2 billion images achieves 84.0% on ImageNet-1k with a frozen trunk. Interestingly, even at this scale there is no sign of saturation in performance, suggesting that AIM may represent a new frontier for training large-scale vision models. The pre-training of AIM is similar to that of LLMs and does not require any image-specific strategy to stabilize training at scale.
[image]
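The autoregressive objective is easy to sketch: with patches in a fixed raster order, a causally masked transformer regresses each next patch from the ones before it. The sizes below are toy values, and AIM's actual architecture (prefix attention, MLP head, normalized pixel targets) differs.

```python
import torch
import torch.nn as nn

# Toy AIM-style objective: predict patch t+1 from patches <= t, MSE on pixels.
B, T, P = 8, 64, 16 * 16 * 3          # batch, patches per image, pixel values per patch
d = 256                               # model width (toy)

embed = nn.Linear(P, d)
layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(d, P)                # regress the next patch's pixels

patches = torch.randn(B, T, P)        # stand-in for real patchified images
causal = nn.Transformer.generate_square_subsequent_mask(T - 1)

h = encoder(embed(patches[:, :-1]), mask=causal)   # no peeking at future patches
loss = nn.functional.mse_loss(head(h), patches[:, 1:])
print(loss.item())
```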
Geewook Kim retweeted
Seungone Kim @seungonekim
🤔How could you evaluate whether your Vision Language Model (VLM) is closely reaching the capabilities of GPT-4V? We’re excited to present 🔥Prometheus-Vision, the first open-source VLM specialized for evaluating other VLMs based on fine-grained scoring criteria, with co-lead @sylee_ai ! This is an exciting follow-up work of Prometheus, extending it to the multi-modal space.
[image]
Geewook Kim retweeted
Odashi @odashi_t
We built a QA dataset with human retrieval: roughly 1,000 short questions that must be answered using only information found on Wikipedia. We also recorded the intermediate steps and citations, so the data can be used to study, for example, simulations of human search behavior. huggingface.co/datasets/baoba…
Geewook Kim retweeted
elvis @omarsar0
Improving Information Retrieval in LLMs

One effective way to use open-source LLMs is for search tasks, which could power many other applications. This work explores the use of instruction tuning to improve a language model's proficiency in information retrieval (IR) tasks.

It proposes a large instruction-tuning dataset that covers 21 IR tasks. Results indicate that the dataset improves the performance of LLMs like Mistral and Phi on search-related tasks. The paper includes further analysis, such as measuring the impact of base-model selection and instruction design.
[image]
Geewook Kim retweeted
Hiroyuki Deguchi @de9uch1_
So I made a Bash one-liner that just automatically downloads anthology.bib from the ACL Anthology and splits it in two. Copy-paste it into a terminal and the split bib files are generated. It's messy, so someone please rewrite it. gist.github.com/de9uch1/7af9a2…
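The linked gist is the Bash one-liner itself; as a rough Python equivalent of the same idea (the download URL and the output file names are my assumptions):

```python
import urllib.request

# Fetch the full ACL Anthology bibliography (a large file) and split it into
# two .bib files on entry boundaries.
URL = "https://aclanthology.org/anthology.bib"  # assumed location of the full bib
text = urllib.request.urlopen(URL).read().decode("utf-8")

parts = text.split("\n@")                       # entries start with '@' at line start
entries = [parts[0]] + ["@" + p for p in parts[1:]]
half = len(entries) // 2

for name, chunk in (("anthology.part1.bib", entries[:half]),
                    ("anthology.part2.bib", entries[half:])):
    with open(name, "w", encoding="utf-8") as f:
        f.write("\n".join(chunk))
    print(f"wrote {name} ({len(chunk)} entries)")
```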
Geewook Kim retweeted
Seongyun Lee @sylee_ai
We are excited to introduce 🌋 Volcano, a multimodal model that revises hallucinations in its responses through self-feedback. It achieves state-of-the-art results on multimodal hallucination benchmarks.
[image]
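The high-level loop is easy to picture generically: generate, critique, revise, repeat. The sketch below is not Volcano's procedure; the prompts, the stopping heuristic, and the llm() stand-in are all illustrative assumptions.

```python
def llm(prompt: str) -> str:
    # Hypothetical stand-in for a (multimodal) model call; plug in a real
    # generate() from your model or API of choice.
    raise NotImplementedError

def answer_with_self_feedback(question: str, rounds: int = 3) -> str:
    answer = llm(f"Question: {question}\nAnswer:")
    for _ in range(rounds):
        feedback = llm(
            f"Question: {question}\nAnswer: {answer}\n"
            "Point out any unsupported or hallucinated claims in the answer."
        )
        if "no issues" in feedback.lower():   # assumed stopping heuristic
            break
        answer = llm(
            f"Question: {question}\nDraft: {answer}\nFeedback: {feedback}\n"
            "Rewrite the draft so it fixes the issues raised in the feedback."
        )
    return answer
```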
Geewook Kim retweeted
Seungone Kim @seungonekim
Excited to present 🔥Prometheus, a fully open-source evaluator LM that is on par with GPT-4 evaluation when the “appropriate” reference materials are appended!
* Could generalize to customized score rubrics
* Shows high correlation with both human evaluators & GPT-4 evaluation
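For flavor, rubric-based evaluation amounts to prompting the evaluator with the response to grade plus a score rubric and a reference answer. The template below only shows that shape; it is not Prometheus's exact prompt format, and the example inputs are placeholders.

```python
def build_eval_prompt(instruction: str, response: str,
                      reference: str, rubric: str) -> str:
    # Illustrative rubric-style prompt: instruction + response + reference + rubric.
    return (
        "You are an evaluator. Given an instruction, a response, a reference "
        "answer, and a score rubric, write feedback and then a score from 1 to 5.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response to evaluate:\n{response}\n\n"
        f"### Reference answer:\n{reference}\n\n"
        f"### Score rubric (1-5):\n{rubric}\n\n"
        "### Feedback:"
    )

print(build_eval_prompt(
    "Explain why the sky is blue.",
    "Because of Rayleigh scattering of sunlight by air molecules.",
    "Shorter wavelengths scatter more strongly (Rayleigh scattering), so blue dominates.",
    "5 = accurate and complete; 3 = partially correct; 1 = incorrect.",
))
```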