Bolin Lai

60 posts

Bolin Lai
@bryanislucky

PhD student @GeorgiaTech with research interests in multimodal understanding and image/video generation. I'm currently visiting @CSL_Ill at UIUC for collaborations.

Atlanta, GA · Joined April 2017
243 Following · 131 Followers
Pinned Tweet
Bolin Lai
Bolin Lai@bryanislucky·
Our paper was nominated as a Best Paper Finalist at #ECCV2024. I sincerely thank all my co-authors. Our work was also covered by Georgia Tech @ICatGT. My advisor @RehgJim will present it on Oct 2: at 1:30pm in Oral Session 4B, and at 4:30pm at board #240 in the poster session. @eccvconf
Georgia Tech School of Interactive Computing@ICatGT

LEGO can show you how it's done! In new @eccvconf work from @bryanislucky, a generative tool can produce visual images to accompany step-by-step instructions with just a single first-person photo uploaded into the prompt. #wecandothat🐝 @GTResearchNews b.gatech.edu/47RT3bN

Bolin Lai retweeted
Georgia Tech Computing
Georgia Tech Computing@gtcomputing·
Howdy from Nashville, y'all! 🎸🤠 Check out our stars at #CVPR2025, a top @IEEEorg research venue for computer vision experts presenting their work on how computers interpret the world using image and video data! Tech's experts will take center stage this week at @CVPR at the Music City Center to share their breakthroughs in computer vision. @GeorgiaTech is in the top 10% of all organizations for first authors and the top 4% for number of papers. More than 2,000 organizations have research accepted into the main program. Tech's first authors include Chengyue Huang, Bolin Lai, Fiona Ryan, Andrew Szot, Lifu Wang, Lex Whalen, and Haoran You. @ICatGT faculty represent the majority of faculty in the papers program. Yeehaw! Meet all of our experts now 🔗: sites.gatech.edu/research/#spotlight #GTComputing #TogetherWeCompute #ChangeTheGame
Jeff Liang
Jeff Liang@LiangJeff95·
Joining Meta as a full-time Research Scientist. So exciting!
Bolin Lai
Bolin Lai@bryanislucky·
The full Llama 4 will contain 2T parameters. It's quite amazing to learn that "billion" is no longer sufficient to describe the scale of LLMs.
AI at Meta@AIatMeta

Today is the start of a new era of natively multimodal AI innovation. Today, we're introducing the first Llama 4 models: Llama 4 Scout and Llama 4 Maverick — our most advanced models yet and the best in their class for multimodality.

Llama 4 Scout
• 17B-active-parameter model with 16 experts.
• Industry-leading context window of 10M tokens.
• Outperforms Gemma 3, Gemini 2.0 Flash-Lite and Mistral 3.1 across a broad range of widely accepted benchmarks.

Llama 4 Maverick
• 17B-active-parameter model with 128 experts.
• Best-in-class image grounding with the ability to align user prompts with relevant visual concepts and anchor model responses to regions in the image.
• Outperforms GPT-4o and Gemini 2.0 Flash across a broad range of widely accepted benchmarks.
• Achieves comparable results to DeepSeek v3 on reasoning and coding — at half the active parameters.
• Unparalleled performance-to-cost ratio with a chat version scoring an ELO of 1417 on LMArena.

These models are our best yet thanks to distillation from Llama 4 Behemoth, our most powerful model yet. Llama 4 Behemoth is still in training and is currently seeing results that outperform GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro on STEM-focused benchmarks. We're excited to share more details about it even while it's still in flight.

Read more about the first Llama 4 models, including training and benchmarks ➡️ go.fb.me/gmjohs
Download Llama 4 ➡️ go.fb.me/bwwhe9
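The scale jump in the quoted announcement is easy to sanity-check with quick arithmetic. A minimal sketch using only the numbers stated in the tweets above (the per-token active count and the 2T total; nothing about the actual architecture is assumed beyond that):

```python
# Back-of-the-envelope scale check for the figures quoted above.
# Numbers come from the tweets; everything else is illustrative.

BILLION = 10**9
TRILLION = 10**12

active_params = 17 * BILLION     # 17B active parameters per token
behemoth_total = 2 * TRILLION    # "2T parameters" for the full Llama 4 Behemoth

# "Billion" really is insufficient: the total is 2,000 billion.
total_in_billions = behemoth_total // BILLION

# Ratio of total parameters to the active parameters used per token.
ratio = behemoth_total / active_params

print(f"2T = {total_in_billions:,}B parameters")   # 2T = 2,000B parameters
print(f"~{ratio:.0f}x the 17B active parameters")  # ~118x the 17B active parameters
```

In a mixture-of-experts model only a subset of experts fires per token, which is why the active count (17B) can be two orders of magnitude below the total.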

Bolin Lai
Bolin Lai@bryanislucky·
🔎In addition, when different exemplar image pairs are used with the same textual instruction, InstaManip captures the distinct visual patterns and applies them when editing query images. [7/8]
Bolin Lai
Bolin Lai@bryanislucky·
📢#CVPR2025 Introducing InstaManip, a novel multimodal autoregressive model for few-shot image editing. 🎯InstaManip can learn a new image editing operation from textual and visual guidance via in-context learning, and apply it to new query images. [1/8] bolinlai.github.io/projects/Insta…
Bolin Lai retweeted
AK
AK@_akhaliq·
Alibaba just dropped Wan2.1, an open AI video generation model. It ranks #1 on the VBench leaderboard, outperforming SOTA open-source & commercial models, with mastery of complex motion dynamics, physics simulation & text rendering.
Bolin Lai retweeted
Max Xu
Max Xu@maxxu05·
My paper RelCon: Relative Contrastive Learning for a Motion Foundation Model for Wearable Data, from my @Apple internship, has been accepted at #ICLR2025! 🎉 We introduce the first IMU foundation model, unlocking generalization across motion tasks. 🏃‍♀️📊 arxiv.org/abs/2411.18822
Bolin Lai retweeted
Georgia Tech Computing
Georgia Tech Computing@gtcomputing·
#ECCV2024 has honored this computer vision research as one of 15 Best Paper Award candidates 🎉! Congrats to the team and lead author Bolin Lai, PhD student in Machine Learning at @GeorgiaTech.
Georgia Tech School of Interactive Computing@ICatGT

LEGO can show you how it's done! In new @eccvconf work from @bryanislucky, a generative tool can produce visual images to accompany step-by-step instructions with just a single first-person photo uploaded into the prompt. #wecandothat🐝 @GTResearchNews b.gatech.edu/47RT3bN

Bolin Lai
Bolin Lai@bryanislucky·
@LiangJeff95 I'm honestly terrible at this. If even Jeff can't pull it off, I probably can't either 🌚
Jeff Liang
Jeff Liang@LiangJeff95·
It's been half a month since I started job hunting. Anthropic really impressed me with their efficiency 🫡: they rejected my resume within 24 hours. 😂