Alex Gorin

1.3K posts

@anklovee

Digital artist. Proud member of CLAN. https://t.co/dp9M5dqAcy https://t.co/Lu0R3uJWtA

Joined July 2010
377 Following · 703 Followers
Alex Gorin reposted
R @aiaicreate
ComfyUI Panorama Stickers has been updated: video support, 180° and 360° panoramas, and an improved preview for the LTX-2.3 360 VR LoRA. Future plans include 3D scene support and panorama IC-LoRA creation. Internals have been optimized as well. Feedback welcome. #ComfyUI #PanoramaStickers URL in the reply ⬇️
3 replies · 22 reposts · 199 likes · 9.2K views
Alex Gorin reposted
Bilawal Sidhu @bilawalsidhu
Holy crap, NVIDIA just made it drastically easier to create large-scale explorable 3D worlds. No manual stitching of smaller 3D generations like other 3D models require. Lyra 2.0 looks pretty damn impressive.
16 replies · 150 reposts · 1.4K likes · 93.9K views
Alex Gorin reposted
Wildminder @wildmindai
Heck yes! ComfyUI is getting native SAM 3.1 support: - multiplex video tracking; - text-conditioned detection of new/occluded objects; - optimized for single GPU, no extra dependencies; - bit-packed masks, object ID overlays. Thanks to Kijai! github.com/Comfy-Org/Comf…
7 replies · 40 reposts · 336 likes · 17.6K views
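The "bit-packed masks" mentioned above are a standard compactness trick: store a binary segmentation mask as one bit per pixel instead of one byte per pixel, an 8× saving. A minimal NumPy sketch of the idea (illustrative only, not the actual ComfyUI/SAM implementation):

```python
import numpy as np

def pack_mask(mask: np.ndarray) -> bytes:
    # Flatten a boolean HxW mask and pack 8 pixels into each byte.
    return np.packbits(mask.astype(np.uint8).ravel()).tobytes()

def unpack_mask(data: bytes, shape: tuple) -> np.ndarray:
    # Unpack bytes back into bits, trim the padding, restore HxW shape.
    h, w = shape
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8), count=h * w)
    return bits.reshape(h, w).astype(bool)

mask = np.zeros((4, 6), dtype=bool)
mask[1:3, 2:5] = True            # a small rectangular "object"
packed = pack_mask(mask)         # 24 pixels fit in 3 bytes instead of 24
restored = unpack_mask(packed, mask.shape)
assert np.array_equal(mask, restored)
```

With thousands of per-frame masks in a video tracking session, this kind of packing is what keeps memory use on a single GPU manageable.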
Alex Gorin reposted
R @aiaicreate
On the proper use of IC-LoRA-Detailer: it turns out to be a post-processing LoRA applied after video rendering. Its intended role is applying effects via video-to-video, much like ControlNet. The existing WAN2GP implementation uses a low-resolution two-stage process, whereas ComfyUI and others favor a full-resolution single-stage pipeline. #StableDiffusion #LoRA URL in the reply ⬇️
1 reply · 14 reposts · 105 likes · 5.3K views
Alex Gorin reposted
makeitrad @makeitrad1
LTX 2.3, man... distilled. Ten seconds of video, 1080p, three minutes. Three minutes, man. That's... I mean, who does that? That's unreal. LTX 2.3 by @Lightricks Lebowski Lora trained on @ostrisai Ai toolkit Dialog created w/ @NousResearch Hermes Agent Im in love 🤎 sound on 🔈🔉🔊
32 replies · 40 reposts · 547 likes · 27.1K views
Alex Gorin reposted
Wildminder @wildmindai
Freaking massive! LPM 1.0: a 17B DiT for real-time, full-duplex conversational video generation. Avatars on huge steroids. - synchronized speaking and listening; - 0.35 s latency; - identity consistency for 10+ minutes; - DMD distillation for 2-step online generation; - 480p/720p at 24 fps. LiveAvatar, Kling-Avatar-2, OmniHuman: cooked. large-performance-model.github.io
7 replies · 46 reposts · 303 likes · 18.8K views
Alex Gorin reposted
Wildminder @wildmindai
ComfyUI post-processing suite for photorealism: - simulates sensor noise, analog artifacts, camera metadata; - base64 EXIF transfer; - calibrated DNG writing; - HEIC/RAW loading. github.com/thezveroboy/Co…
4 replies · 29 reposts · 268 likes · 13.7K views
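"Base64 EXIF transfer" presumably means shuttling the binary EXIF blob through text-only channels such as node metadata or JSON, where raw bytes are not allowed. A generic round-trip sketch of that pattern (an assumption about the suite's design, not its actual code; the EXIF bytes below are a stand-in):

```python
import base64
import json

exif_blob = b"Exif\x00\x00MM\x00*"  # stand-in for a real EXIF segment

# Sender side: encode the binary blob as ASCII so it survives JSON transport.
payload = json.dumps({"exif_b64": base64.b64encode(exif_blob).decode("ascii")})

# Receiver side: decode the string back to the original bytes.
restored = base64.b64decode(json.loads(payload)["exif_b64"])
assert restored == exif_blob
```

The same round trip works for any binary sidecar data a node graph needs to pass between stages as plain text.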
Alex Gorin reposted
Wildminder @wildmindai
Interesting LTX2.3 Cameraman LoRA. Transfers camera motion from reference videos to new scenes; text-prompted generation, no trigger words. huggingface.co/Cseti/LTX2.3-2…
3 replies · 40 reposts · 272 likes · 15.6K views
Alex Gorin reposted
Muhammad Ayan @socialwithaayan
SOMEONE JUST KILLED EVERY SKETCHY VIDEO DOWNLOADER SITE ON THE INTERNET. It's called ReClip. Self-hosted. Open source. Free. Paste a link from YouTube, TikTok, Instagram, Twitter/X, or 1000+ other sites. Download as MP4 or MP3. That's it. No ads. No popups. No trackers. No rate limits. No account. No sketchy installer. Your machine. Your downloads. Your data.

Here is the full feature set:
-> 1000+ supported sites, including YouTube, TikTok, Instagram, and Twitter/X
-> MP4 or MP3 output, your choice
-> Resolution selector before you download
-> Batch download: paste multiple links at once
-> Clean web UI that runs in any browser
-> Lightweight self-hosted setup

It went from 0 to 1.4K GitHub stars in 9 days, with 239 forks already. Here is why that number matters: every person who found a janky ad-covered downloader site in the last 10 years was waiting for this. A clean, private, self-hosted alternative that just works. No business model built on your data. No premium tier. No upsells. Built in HTML. MIT License. Two contributors. Ships in days. 100% open source. Free forever.
248 replies · 1.4K reposts · 13.3K likes · 780.4K views
Alex Gorin reposted
Sadao Tokuyama @tokufxug
New research called "LCA" from Meta, the company behind Facebook, Instagram, the Quest VR headsets, and the Ray-Ban Meta AI glasses. Trained on one million videos, it can instantly generate precise 3D avatars from smartphone footage, down to facial expressions and finger movements. Cloth motion and lighting changes are reproduced naturally; the apparent goal is a world where studio-grade digital doubles can be created from a phone.
6 replies · 63 reposts · 444 likes · 32.1K views
Alex Gorin reposted
Wildminder @wildmindai
AvatarPointillist: autoregressive 4D Gaussian avatar generation. - drivable 3D from single portraits; - dynamically adjusts point density for hair/beards; - uses DINOv2 and FLAME for animation. Good identity preservation. kumapowerliu.github.io/AvatarPointill…
4 replies · 47 reposts · 344 likes · 17.7K views
Alex Gorin reposted
3D Scanstore @Ten_24
Early R&D using our SP-6M dataset. Exploring image-to-3D reconstruction from single images, including heavily modified inputs (lighting, hair, etc). Still a work in progress.
26 replies · 66 reposts · 604 likes · 50.8K views
Alex Gorin reposted
Wildminder @wildmindai
PoseDreamer: photorealistic human images with 3D mesh annotations. - built on FLUX.1-Dev; - hits 1.72 FID and 9.78 IS. Optimized for training 3D pose estimation and AR/VR avatar systems. prosperolo.github.io/posedreamer/
2 replies · 15 reposts · 120 likes · 6.5K views
Alex Gorin reposted
Matthias Niessner @MattNiessner
📢GaussianGPT: autoregressive 3D Gaussian scene generation. We introduce a GPT-style model that directly generates 3D Gaussian scenes, token by token, in a series of small, discrete decision steps. Generation, completion, and large-scale outpainting in a single pipeline. Unlike diffusion-based approaches, GaussianGPT explicitly models the scene distribution at every step, allowing for quite flexible scene synthesis. 🌐 nicolasvonluetzow.github.io/GaussianGPT/ ▶️ youtu.be/zVnMHkFzHDg Great work by @nicolasvluetzow, @barbara_roessle, @katha_schmid
36 replies · 296 reposts · 2.4K likes · 150K views
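The "token by token" generation GaussianGPT describes is the standard autoregressive recipe: sample each discrete token from a distribution conditioned on everything generated so far, then append it and repeat. A toy illustration of that loop with a placeholder model (nothing here is from the paper; a real system would run a transformer over the prefix):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 16  # toy codebook of discrete scene tokens

def next_token_logits(prefix):
    # Placeholder "model": random logits, biased toward repeating the
    # previous token, loosely mimicking local scene coherence.
    logits = rng.normal(size=VOCAB)
    if prefix:
        logits[prefix[-1]] += 2.0
    return logits

def generate(n_tokens):
    seq = []
    for _ in range(n_tokens):
        logits = next_token_logits(seq)
        probs = np.exp(logits - logits.max())   # softmax over the vocabulary
        probs /= probs.sum()
        seq.append(int(rng.choice(VOCAB, p=probs)))  # sample, append, repeat
    return seq

tokens = generate(12)
assert len(tokens) == 12 and all(0 <= t < VOCAB for t in tokens)
```

Because each step conditions on the full prefix, the same loop naturally covers generation, completion (start from a non-empty prefix), and outpainting, which is presumably why the tweet highlights all three in a single pipeline.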
Alex Gorin reposted
Paul @SwedPaul
Microsoft has open-sourced a model that could well upend the game-dev and 3D-development market: turning 2D images into real 3D GLB files. And it works impressively well, to say the least. microsoft.github.io/TRELLIS.2/
39 replies · 135 reposts · 1.6K likes · 125.7K views
Alex Gorin reposted
Jeff Li @jiefengli_jeff
There are many human body models, SMPL, MHR, Anny… which one should you use? Answer: all of them. At GTC 2026, we release SOMA, a unified body layer that takes any model's shape and pose, and gives you one canonical mesh and rig.
Quoted post from Umar Iqbal @UmarIqb (reposted in full below).
1 reply · 14 reposts · 76 likes · 8.1K views
Alex Gorin reposted
Umar Iqbal @UmarIqb
#NVIDIA just released a whole ecosystem for human(oid) motion and robot learning from human data. 🚀🦾 Data, as we all know, is the key to scaling AI models. To accelerate the field of Embodied AI, we have open-sourced a full stack of models and tools to capture, generate, retarget, and simulate human(oid) motion data at scale, along with a massive high-quality dataset and a standard human skeletal representation, SOMA, to make them all seamlessly communicate with each other. The entire suite is available under the Apache 2.0 license.
1️⃣ SOMA: A universal interface to unify all parametric human body models (SOMA-shape, SMPL, MHR, etc.) into a standard skeletal representation, eliminating the need for custom adapters or model-specific retargeting. 🔗 lnkd.in/gsxhiJnn
2️⃣ Kimodo: High-fidelity, controllable text-to-motion generation for both humans and humanoid robots. 🔗 lnkd.in/gCc84XnX
3️⃣ GEM: A global human pose estimation method from in-the-wild videos, natively compatible with SOMA. 🔗 lnkd.in/g_QAvRjn
4️⃣ Bones-SEED: A massive dataset of 150k+ motions in SOMA format, including data already retargeted for the Unitree G1, created with our partners at Bones Studio. 🔗 lnkd.in/gfx-QD-w 🔗 lnkd.in/gyNdTwQx
5️⃣ SOMA Retargeter: A dedicated tool for seamless motion retargeting from the SOMA skeleton to the Unitree G1. 🔗 lnkd.in/gqz9Na-H
6️⃣ ProtoMotions: Our high-performance simulation framework for training digital human(oid)s via RL, now with native SOMA support. 🔗 lnkd.in/gmvMikMU
This is just the beginning, and we have much more in the pipeline. Excited to see what the community builds next! #NVIDIA #GTC #GTC2026 #Robotics #EmbodiedAI #PhysicalAI @NVIDIAAI
5 replies · 79 reposts · 423 likes · 45.8K views
Alex Gorin reposted
Alex Patrascu @maxescu
Many tried, most failed. But this is the first skin enhancer I've used that actually makes characters look real. Meet Vellum from @openart_ai It's now a staple in my workflow. I won't start a project without it:
51 replies · 99 reposts · 1.1K likes · 87.8K views
Alex Gorin reposted
CYANPUPPETS @cyanpuppets
A 1-billion-parameter AI real-time motion model that connects to a 1080P camera or uploaded videos, supports UE/Unity/Blender, and requires 8GB of VRAM for real-time processing.
15 replies · 109 reposts · 1.1K likes · 63.1K views
Alex Gorin reposted
Tongyi Lab @Ali_TongyiLab
We are impressed by this new Z-Image-Turbo LoRA from the community! By utilizing Flow-DPO, it effectively eliminates "washed-out" artifacts and brings cinematic, physically accurate lighting to our ultra-fast distilled model. 🔹 The Magic: Stunning photorealism in just 8 inference steps. 🔹 The Tech: Finetuned for Flow Matching to fix "flat" textures. Huge props to the developer! Check out the details and try it yourself: huggingface.co/F16/z-image-tu…
0 replies · 35 reposts · 323 likes · 20.6K views
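Independent of this particular model, applying any LoRA comes down to adding a low-rank update to each adapted weight matrix: W' = W + (alpha/r)·B·A, where B and A are the small trained factors. A generic NumPy sketch of that arithmetic (illustrative only, not Z-Image-Turbo's code; all shapes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 6, 2, 4.0   # rank r is much smaller than d_out, d_in

W = rng.normal(size=(d_out, d_in))     # frozen base weight
B = rng.normal(size=(d_out, r))        # trained LoRA down/up projection factors
A = rng.normal(size=(r, d_in))

# Merging folds the low-rank update into the base weight once,
# so inference afterwards costs nothing extra.
W_merged = W + (alpha / r) * B @ A

x = rng.normal(size=d_in)
# Unmerged path: base output plus the low-rank correction; must match.
y = W @ x + (alpha / r) * B @ (A @ x)
assert np.allclose(W_merged @ x, y)
```

The speed claim in the tweet (8 inference steps) comes from distillation of the sampler, not from the LoRA itself; the LoRA only reshapes what each of those few steps produces.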