Yujia Chen
@IssacCyj
AI @Google
34 posts
Joined May 2023
37 Following · 56 Followers
Yujia Chen @IssacCyj
Insert a video into a video with motion and identity awareness. Proud of this work! Split-then-Merge is a cool step forward for video composition. Great teamwork, Özgür!
Özgür Kara @ozgurkara99

🎥 Introducing Split-then-Merge: a new video composition framework! This approach enables the composition of any foreground video with any background video. Unlike conventional methods that rely on annotated datasets or handcrafted rules, Split-then-Merge (StM) splits a large unlabeled corpus of videos into dynamic foreground and background layers, then merges them to learn how dynamic subjects interact with diverse scenes.

Work done in collaboration with team members at @Google: Du Tran (@dutran), Yujia Chen (@IssacCyj), Prof. Ming-Hsuan Yang (@MingHsuanYang), and Vincent Chu, and with my advisor at UIUC (@siebelschool), Prof. James M. Rehg (@RehgJim). I will be attending NeurIPS in San Diego and would be happy to chat more!

🔗 Project webpage: split-then-merge.github.io
📄 Paper: arxiv.org/abs/2511.20809
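A minimal sketch of the split-then-merge data idea described above, not the paper's actual pipeline: the foreground mask is assumed given here, whereas deriving such layers from unlabeled video is part of StM's contribution.

```python
import numpy as np

def split_layers(video, fg_mask):
    # "Split": separate a clip into dynamic foreground and background
    # layers. The mask is assumed given; StM obtains layers from an
    # unlabeled corpus, which this sketch does not reproduce.
    return video * fg_mask, video * (1 - fg_mask)

def merge_layers(fg_a, mask_a, video_b):
    # "Merge": composite clip A's foreground onto clip B's background,
    # yielding a training example of a subject in a new scene.
    return fg_a + video_b * (1 - mask_a)

# Toy tensors: (frames, height, width, channels)
vid_a = np.random.rand(8, 64, 64, 3)
vid_b = np.random.rand(8, 64, 64, 3)
mask_a = (np.random.rand(8, 64, 64, 1) > 0.5).astype(np.float32)

fg_a, bg_a = split_layers(vid_a, mask_a)
composite = merge_layers(fg_a, mask_a, vid_b)  # FG of A over BG of B
```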

0 replies · 0 reposts · 3 likes · 338 views
Yujia Chen retweeted
Nataniel Ruiz @natanielruizg
today we are releasing new research at Google. we tackle the previously unsolved task of editing motion in an existing video. it's called MotionV2V. with it you can move objects in videos, move the camera, and make other unprecedented edits in user-provided video
[GIF]
11 replies · 42 reposts · 178 likes · 17.8K views
Nataniel Ruiz @natanielruizg
chatgpt has become very slow for me. claude and gemini are much faster, with gemini giving the fastest responses. it's easy to get used to the speed
3 replies · 0 reposts · 12 likes · 1.3K views
Yujia Chen @IssacCyj
Like the idea
Zhengzhong Tu @_vztu

📍 Can AI Navigate Maps Like Humans Do? Introducing MapBench! 🗺️🤖

Reading maps, like Google Maps and theme park maps, is second nature for humans, yet it is a highly challenging task that requires visual understanding, spatial reasoning, and long-horizon planning. We're curious: can Large Vision-Language Models (LVLMs) do it too? 🤔

We're excited to share MapBench, the first-ever dataset and benchmark specifically designed to evaluate how well LVLMs perform on pixel-based map navigation tasks! 🚀

🔑 Why MapBench is a game-changer:
• 📌 1,600+ complex pathfinding queries from 100 uniquely challenging map scenarios (urban areas, theme parks, universities, malls, and more).
• 📌 Introduces the Map Space Scene Graph (MSSG): a novel data structure for mapping visual landmarks and spatial relationships to structured navigation tasks (see the sketch below).
• 📌 Evaluates state-of-the-art LVLMs such as GPT-4o, Llama-3.2, and Qwen-2-VL under zero-shot and Chain-of-Thought (CoT) reasoning methods, revealing key insights into their spatial reasoning and navigation abilities.

🚩 Key insights:
• Despite their impressive capabilities, current LVLMs struggle significantly with spatial reasoning and structured decision-making.
• CoT prompting boosts spatial reasoning performance but sometimes introduces redundant details.

👀 Check out our findings, dataset, and code here:
🔗 Arxiv: lnkd.in/gBv-sFJ3

Huge thanks to our incredible collaborators from @TAMU, @UCBerkeley, @mbzuai, @UMich, and @UCRiverside for making this happen! 🎉 Let's continue to bridge the gap between human intuition and AI navigation! 🗺️💡
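The tweet doesn't specify the MSSG format, so the following is a purely hypothetical illustration of the stated idea (visual landmarks as nodes, spatial relationships as edges, navigation as pathfinding); every name in it is invented and none of it is MapBench's actual code.

```python
import networkx as nx  # generic graph library, not MapBench code

# Hypothetical Map Space Scene Graph for a theme-park map:
# landmarks become nodes, spatial relations become labeled edges.
mssg = nx.Graph()
mssg.add_edge("Entrance", "Fountain", relation="north_of")
mssg.add_edge("Fountain", "Carousel", relation="west_of")
mssg.add_edge("Fountain", "Food Court", relation="east_of")

# A navigation query then reduces to pathfinding over the graph.
route = nx.shortest_path(mssg, source="Entrance", target="Carousel")
print(" -> ".join(route))  # Entrance -> Fountain -> Carousel
```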

0 replies · 0 reposts · 1 like · 109 views
Yujia Chen retweeted
Jia-Bin Huang @jbhuang0604
Some papers get rejected due to "incremental novelty" 🫠 We as a community should put less emphasis on being novel and more on being simple, interesting, and useful.
9 replies · 40 reposts · 420 likes · 32.1K views
Yujia Chen retweeted
Andrej Karpathy @karpathy
This is interesting as a first large diffusion-based LLM. Most of the LLMs you've been seeing are ~clones as far as the core modeling approach goes. They're all trained "autoregressively", i.e. predicting tokens from left to right. Diffusion is different - it doesn't go left to right, but all at once. You start with noise and gradually denoise into a token stream.

Most of the image / video generation AI tools actually work this way and use Diffusion, not Autoregression. It's only text (and sometimes audio!) that have resisted. So it's been a bit of a mystery to me and many others why, for some reason, text prefers Autoregression, but images/videos prefer Diffusion. This turns out to be a fairly deep rabbit hole that has to do with the distribution of information and noise and our own perception of them, in these domains. If you look close enough, a lot of interesting connections emerge between the two as well.

All that to say that this model has the potential to be different, and possibly showcase new, unique psychology, or new strengths and weaknesses. I encourage people to try it out!
Inception @_inception_ai

We are excited to introduce Mercury, the first commercial-grade diffusion large language model (dLLM)! dLLMs push the frontier of intelligence and speed with parallel, coarse-to-fine text generation.
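A toy sketch of the contrast Karpathy describes, with a random stand-in for the network: autoregressive decoding fills positions strictly left to right, one per step, while diffusion-style decoding starts fully masked ("noise") and unmasks positions in parallel, coarse to fine. This illustrates the idea only; it is not Mercury's algorithm.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]
MASK = "<mask>"

def toy_model(context):
    # Stand-in for a real network: returns a random token.
    return random.choice(VOCAB)

def autoregressive_sample(length):
    # Left to right: each token is predicted from the tokens before it.
    tokens = []
    for _ in range(length):
        tokens.append(toy_model(tokens))
    return tokens

def diffusion_sample(length, steps=3):
    # Coarse to fine: start fully masked ("noise"), then repeatedly
    # predict a shrinking set of masked positions in parallel.
    tokens = [MASK] * length
    for step in range(steps):
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        # Unmask a fraction of the remaining positions each step,
        # all at once rather than left to right.
        k = max(1, len(masked) // (steps - step))
        for i in random.sample(masked, min(k, len(masked))):
            tokens[i] = toy_model(tokens)
    return tokens

print("AR:       ", autoregressive_sample(5))
print("Diffusion:", diffusion_sample(5))
```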

375 replies · 1.5K reposts · 11.5K likes · 941.7K views
Yujia Chen retweeted
Mike Bespalov @bbssppllvv
It's live! After some final tweaks, the ASCII converter is officially ready. Turn any image into ASCII art instantly: codepen.io/Mikhail-Bespal…
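The underlying technique is simple: downscale the image, then map each pixel's brightness to a character of matching visual density. A minimal Python sketch using Pillow; the linked CodePen is presumably JavaScript, so this is not Bespalov's implementation, and the filename is a placeholder.

```python
from PIL import Image  # pip install Pillow

# Characters ordered dark to light; brightness maps to glyph density.
CHARS = "@%#*+=-:. "

def image_to_ascii(path, width=80):
    img = Image.open(path).convert("L")  # grayscale
    # Halve the vertical scale: terminal cells are ~2x taller than wide.
    height = max(1, int(img.height * width / img.width / 2))
    img = img.resize((width, height))
    lines = []
    for y in range(height):
        lines.append("".join(
            CHARS[img.getpixel((x, y)) * (len(CHARS) - 1) // 255]
            for x in range(width)
        ))
    return "\n".join(lines)

if __name__ == "__main__":
    print(image_to_ascii("photo.jpg"))  # placeholder filename
```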
196 replies · 735 reposts · 8.1K likes · 660.3K views
Yujia Chen retweeted
Yuchen Jin @Yuchenj_UW
o3-mini might be the best LLM for real-world physics. Prompt: "write a python script of a ball bouncing inside a tesseract"
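For reference, a simplest version of such a script (this is not o3-mini's output): a point ball bouncing elastically inside the 4D cube [-1, 1]^4, with the tesseract wireframe and the ball perspective-projected down to 2D, plus a slow x-w rotation so the 4D structure is visible.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from itertools import product, combinations

# 16 vertices of the tesseract [-1, 1]^4; edges join vertices
# that differ in exactly one coordinate (32 edges).
verts = np.array(list(product([-1, 1], repeat=4)), dtype=float)
edges = [(i, j) for i, j in combinations(range(16), 2)
         if np.sum(verts[i] != verts[j]) == 1]

def project(p4):
    # Perspective-project 4D -> 3D (drop w), then 3D -> 2D (drop z).
    w = 1.0 / (3.0 - p4[..., 3])
    p3 = p4[..., :3] * w[..., None]
    z = 1.0 / (3.0 - p3[..., 2])
    return p3[..., :2] * z[..., None]

pos = np.zeros(4)
vel = np.array([0.021, 0.017, 0.013, 0.011])

fig, ax = plt.subplots()
ax.set_aspect("equal"); ax.set_axis_off()
ax.set_xlim(-0.5, 0.5); ax.set_ylim(-0.5, 0.5)
lines = [ax.plot([], [], lw=0.5, color="gray")[0] for _ in edges]
ball, = ax.plot([], [], "o", color="red")

def update(frame):
    global pos
    pos += vel
    # Elastic reflection off each of the 8 hyperplane walls.
    hit = np.abs(pos) > 1
    vel[hit] *= -1
    pos = np.clip(pos, -1, 1)
    # Slow rotation in the x-w plane reveals the 4D structure.
    t = frame * 0.01
    R = np.eye(4)
    R[0, 0], R[0, 3] = np.cos(t), -np.sin(t)
    R[3, 0], R[3, 3] = np.sin(t), np.cos(t)
    pv = project(verts @ R.T)
    for line, (i, j) in zip(lines, edges):
        line.set_data(pv[[i, j], 0], pv[[i, j], 1])
    b = project((pos @ R.T)[None])[0]
    ball.set_data([b[0]], [b[1]])
    return lines + [ball]

anim = FuncAnimation(fig, update, frames=600, interval=20, blit=True)
plt.show()
```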
122 replies · 235 reposts · 2.5K likes · 1.2M views
Yujia Chen retweeted
Google DeepMind @GoogleDeepMind
Today, we’re announcing Veo 2: our state-of-the-art video generation model which produces realistic, high-quality clips from text or image prompts. 🎥 We’re also releasing an improved version of our text-to-image model, Imagen 3 - available to use in ImageFX through @LabsDotGoogle. → goo.gle/veo-2-imagen-3
[media]
264 replies · 1.3K reposts · 7K likes · 2.3M views
Nataniel Ruiz @natanielruizg
happy that i've been promoted to senior research scientist at google. now on to making cool things.
35 replies · 1 repost · 324 likes · 19.2K views
Yujia Chen retweeted
A.I.Warper @AIWarper
Using @logtdx's implementation of RF-Inversion by @Google, @litu_rout_, and @natanielruizg, I think there may be a method here for consistent stylized animation frames. If we could somehow just align these grids, it would be very powerful. Grid in the second tweet.
[GIF]
6 replies · 3 reposts · 44 likes · 6.9K views
Yujia Chen retweeted
AK @_akhaliq
Open-MAGVIT2: An Open-Source Project Toward Democratizing Auto-regressive Visual Generation

Paper page: huggingface.co/papers/2409.04…

We present Open-MAGVIT2, a family of auto-regressive image generation models ranging from 300M to 1.5B parameters. The Open-MAGVIT2 project produces an open-source replication of Google's MAGVIT-v2 tokenizer, a tokenizer with a super-large codebook (i.e., 2^18 codes), and achieves state-of-the-art reconstruction performance (1.17 rFID) on ImageNet 256×256. Furthermore, we explore its application in plain auto-regressive models and validate scalability properties. To assist auto-regressive models in predicting with a super-large vocabulary, we factorize it into two sub-vocabularies of different sizes by asymmetric token factorization, and further introduce "next sub-token prediction" to enhance sub-token interaction for better generation quality. We release all models and code to foster innovation and creativity in the field of auto-regressive visual generation.
[media]
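The asymmetric factorization above is easy to illustrate: one id from the 2^18 codebook is split into a pair of sub-tokens from two smaller vocabularies whose sizes multiply back to 2^18, and the model predicts the pair sequentially instead of running one softmax over 262,144 classes. A minimal sketch; the concrete 2^6 × 2^12 split is an illustrative assumption, not necessarily the sizes Open-MAGVIT2 uses.

```python
# Two sub-vocabularies of different sizes; V1 * V2 = 2**18 = 262,144.
# The 2**6 / 2**12 split is an assumption for illustration.
V1, V2 = 2**6, 2**12

def factorize(token_id):
    # One codebook id -> a pair of sub-tokens (radix decomposition).
    assert 0 <= token_id < V1 * V2
    return token_id // V2, token_id % V2

def defactorize(sub1, sub2):
    # Inverse mapping: recover the original codebook id.
    return sub1 * V2 + sub2

tid = 123_456
s1, s2 = factorize(tid)
assert (s1, s2) == (30, 576) and defactorize(s1, s2) == tid
# "Next sub-token prediction": the AR model first predicts s1, then
# predicts s2 conditioned on s1, rather than one 262,144-way softmax.
```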
1 reply · 45 reposts · 261 likes · 40.7K views