Dr.sats

160 posts

@LetmefuckXXX

Vegetarian | Life equality | Minimalist | #BTC holder since 2013

Joined July 2019
4K Following · 725 Followers
KK说加密@KK4657856552306·
What is the underlying cause of aging, acne, weight gain, fatigue, and poor sleep? An authoritative journal says: just swap out one staple food and you can lower inflammation levels throughout the body. #抗衰老 #抗炎 #women的健康我们帮 #亚健康养生我来帮
Chinese
17
176
600
94K
Dr.sats@LetmefuckXXX·
@congyuecy_soft Neural rendering is interesting, but without a physically-based foundation you lose truth when you actually need it, not just plausibility. Path tracing defines reality. DLSS 5 increasingly defines perception and maybe, soon, reality itself.
English
0
0
1
35
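The "path tracing defines reality" claim above rests on unbiased Monte Carlo estimation of the rendering equation. A minimal, self-contained sketch (not Octane's or DLSS's code; all function names are illustrative): estimate the irradiance from a constant-radiance environment by uniformly sampling the hemisphere. The estimator is unbiased, so it converges to the analytic answer, which for L = 1 is exactly pi.

```python
import math
import random

def sample_uniform_hemisphere(rng):
    # Uniformly sample a direction on the unit hemisphere (z >= 0).
    # z = u1 is uniform in cos(theta), which is uniform in solid angle.
    u1, u2 = rng.random(), rng.random()
    z = u1
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def irradiance_mc(radiance, n_samples=200_000, seed=1):
    # Monte Carlo estimate of E = integral of L(w) * cos(theta) dw over the
    # hemisphere. Uniform sampling has pdf 1/(2*pi), so each sample is
    # weighted by 2*pi * L * cos(theta).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        _wx, _wy, wz = sample_uniform_hemisphere(rng)
        total += radiance * wz  # wz is cos(theta)
    return 2.0 * math.pi * total / n_samples

# Constant environment L = 1: the exact irradiance is pi.
est = irradiance_mc(1.0)
```

This is the "truth" property in miniature: the estimate is noisy at low sample counts but centered on the physically correct value, with no learned prior that could hallucinate.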
丛越@congyuecy_soft·
I'm afraid DLSS 5 will become the first graphics technology to be boycotted by game developers. It overreaches.
Chinese
1
1
1
112
Dr.sats@LetmefuckXXX·
@GabRoXR @playcanvas @threejs Gaussian Splats are likely to become a visual substrate for world models rather than just a rendering format. If Gaussian Splats become the output of world models, then Octane @OTOY becomes the tool that tells us whether those worlds are physically real.
English
0
0
2
118
Gabriele Romagnoli@GabRoXR·
What is your prediction for 2026 when it comes to #GaussianSplatting? In my opinion, we will see more 3rd party platforms and tools supporting Splats to create simple, interactive, walkable experiences, while web-based engines like @playcanvas and @threejs will far surpass what can be done in @unity. About the video: This demo from Yiwei Chiang is more than just another splat viewer in #VR. The simple interactions and mechanics give us a glimpse of how this new #3D format is quickly evolving and turning into more sophisticated experiences. We are still at the beginning but being able to create an environment from just an image is unlocking so many opportunities for storytellers.
English
14
26
187
11.9K
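At the core of the splat viewers discussed above is alpha-compositing of projected Gaussians. A minimal sketch of per-pixel front-to-back blending, under simplifying assumptions (grayscale color, isotropic 2D Gaussians, splats pre-sorted by depth; real 3DGS renderers use anisotropic covariances, spherical-harmonic color, and tile-based sorting, and all names here are illustrative):

```python
import math

def gaussian_weight(px, py, cx, cy, sigma):
    # 2D isotropic Gaussian falloff of one projected splat at pixel (px, py).
    d2 = (px - cx) ** 2 + (py - cy) ** 2
    return math.exp(-0.5 * d2 / (sigma * sigma))

def composite_pixel(px, py, splats):
    # Front-to-back alpha compositing over depth-sorted splats.
    # Each splat is (cx, cy, sigma, opacity, color).
    color, transmittance = 0.0, 1.0
    for cx, cy, sigma, opacity, c in splats:
        alpha = opacity * gaussian_weight(px, py, cx, cy, sigma)
        color += transmittance * alpha * c
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:  # early termination once nearly opaque
            break
    return color

splats = [(0.0, 0.0, 1.0, 0.8, 1.0),   # near, bright splat
          (0.2, 0.0, 1.0, 0.9, 0.5)]   # farther, dimmer splat
val = composite_pixel(0.0, 0.0, splats)
```

The near splat contributes 0.8 and leaves 0.2 transmittance for the splat behind it, which is why splat order matters and why the format composites so cheaply in web engines.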
Dr.sats retweeted
Jan Orszulik@JanOrszulik·
@DS2LightingMod One day realtime unbiased rendering will become standard and it will bring freedom for both stylized and realistic rendering we can only dream of.
English
1
1
20
3.4K
Dr.sats retweeted
Hayssam Keilany@icelaglace·
After years of research & experiments, OctaneRender 2027 is achieving real-time path tracing while retaining its quality and spectral nature. I am extremely proud of my team all across the globe for this achievement!🔥 #Octane
English
27
63
690
81.6K
Dr.sats retweeted
Octane Render Italia@OctaneRenderIt·
OTOY reveals the new Octane 2027 Roadmap and Beyond on the @OTOY forum, with:
- Real-time Neural Viewport rendering
- Render to Gaussian Splat
- Anime and Sketch Rendering
- Generative PBR Materials
- AI Light 2.0 for many-light sampling
- Python API
and much more: render.otoy.com/forum/viewtopi…
English
1
25
123
43.4K
Dr.sats retweeted
Hirokazu Yokohara@Yokohara_h·
This is what the viewport looks like when a background generated with World Labs' Marble is loaded into C4D and Octane Render. It was my first time using Octane's 3DGS import feature, and it's handy that it roughly lights the 3D model even without placing any lights. The second half of the video shows the scene with lights placed.
Japanese
7
53
479
34.5K
Dr.sats retweeted
Spenser Dickerson@SpenserFX·
The new @theworldlabs Marble world-builder just dropped - and its Gaussian Splat exports fit perfectly inside @otoy Octane 2026. With practically no effort, you can drop the splats directly into Octane and they become fully functional 3D elements. Relight them, add geometry and shadows, use the collider mesh for dynamic control, or prep your own 360° and VR scenes - and so much more. Try Octane 2026 today and unlock the power of Gaussian splat workflows.
English
9
13
90
6.5K
Dr.sats retweeted
FloFlo@Kneteknilch·
"Suddenly @rendernetwork shines from a new angle. It's capable of rendering movie scenes in hours, not weeks. It's capable of rendering complex simulations now […]" ... "[…] it's the most convenient #render farm you will ever find - I guarantee it." _ $RENDER
English
1
11
48
2.5K
Dr.sats retweeted
Jules Urbach@JulesUrbach·
Beta sign-up for OTOY.AI is now live - I just presented major launch features and a 6-month roadmap on stage at #BCON25 - we now have 700+ models (and more added daily)... open-weight models that will run on consumer GPUs are being adapted for low cost + high scale on @rendernetwork #OTOYAI
English
35
98
311
102.9K
Dr.sats retweeted
The Render Network@rendernetwork·
For SUBMERGE: Beyond the Render, Kuciara and Yang adapt their acclaimed @shibuyafilm anime White Rabbit into a spatial installation, rendering key scenes at 18K resolution to push the boundaries of immersive animation. The result is a hyperreal anime world brought to life across walls of light, blending emotion, memory, and speculative storytelling in a way that could only exist in an immersive format.
English
1
7
56
10.2K
Dr.sats retweeted
The Render Network@rendernetwork·
“Combining the strengths of physical & digital art can give viewers entirely new ideas & confrontations…” - Gavin Shapiro In SUBMERGE: Beyond the Render @artechouse NYC @shapiro500 adapts his iconic style featuring delightful looping penguins and flamingos into an 18K immersive experience powered by @rendernetwork.
English
5
25
172
12.3K
Dr.sats@LetmefuckXXX·
@stephenajason Amazing work on Pusa V1.0! I’m curious — is it possible to train the model on a consumer GPU (like an RTX 3090 or 4090)?
English
1
0
1
196
Yaofang Liu@stephenajason·
🚀 Pusa V1.0 Release
Can you believe training a SOTA-level Image-to-Video model with only $500 of training cost? No way? But yes, we made it! And we achieved much more beyond that. We're thrilled to release Pusa V1.0 — a paradigm shift in video generation, redefining video diffusion efficiency. With our novel Vectorized Timestep Adaptation (VTA), based on our prior FVDM work:
🔥 Key Features:
✅ Unprecedented Efficiency:
- Surpasses Wan-I2V-14B with ≤ 1/200 of the training cost ($500 vs. ≥ $100,000)
- Trained on a dataset ≤ 1/2500 of the size (4K vs. ≥ 10M samples)
- Achieves a VBench-I2V score of 87.32% with 10 inference steps (vs. 86.86% for Wan-I2V-14B with 50 steps)
✅ Comprehensive Multi-task Support: VTA fully preserves Text-to-Video from the base model Wan-T2V, and after finetuning, Pusa V1.0 extends to all of the following in a zero-shot way (no task-specific training):
- Image-to-Video
- Start-End Frames
- Video completion/transitions
- Video Extension
- And more...
✅ Complete Open-Source Release:
- Full codebase and training/inference scripts
- Model weights and dataset for Pusa V1.0
- Paper/tech report with detailed and comprehensive methodology
💡 Scientific breakthrough: VTA enables granular temporal control via frame-level noise adaptation — no task-specific training needed.
🌍 Fully open-sourced:
- Codebase: github.com/Yaofang-Liu/Pu…
- Project Page: yaofang-liu.github.io/Pusa_Web/
- Technical report: github.com/Yaofang-Liu/Pu…
- Model weights: huggingface.co/RaphaelLiu/Pus…
- Dataset: huggingface.co/datasets/Rapha…
[1/n]
English
8
13
52
13K
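The vectorized-timestep idea above — one noise level per frame instead of one shared scalar for the whole clip — can be illustrated with a toy forward-diffusion step. This is a hedged sketch under invented assumptions, not Pusa's actual VTA implementation: the schedule, function names, and shapes are all illustrative. With per-frame timesteps, a conditioning frame can be kept clean (t = 0) while the frames to be generated are fully noised (t = 1), which is how image-to-video conditioning can fall out of the same mechanism.

```python
import math
import random

def add_noise_per_frame(frames, timesteps, rng):
    # frames: list of frames, each a flat list of floats.
    # timesteps: one noise level in [0, 1] per frame (a vector, not a
    # single shared scalar). Toy variance-preserving forward step:
    #   x_t = sqrt(1 - t) * x0 + sqrt(t) * eps,  eps ~ N(0, 1)
    noisy = []
    for frame, t in zip(frames, timesteps):
        a, b = math.sqrt(1.0 - t), math.sqrt(t)
        noisy.append([a * x + b * rng.gauss(0.0, 1.0) for x in frame])
    return noisy

rng = random.Random(0)
video = [[rng.gauss(0.0, 1.0) for _ in range(16)] for _ in range(8)]
# Image-to-video style conditioning: keep frame 0 clean, fully noise the rest.
t_vec = [0.0] + [1.0] * 7
noisy = add_noise_per_frame(video, t_vec, rng)
```

Frame 0 passes through unchanged (sqrt(1-0) = 1, sqrt(0) = 0), while the remaining frames become pure noise for the denoiser to fill in.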
Rendoshi 👽🛸@Rendoshi1·
$NVDA pumping to new highs in pre-market 👀 The AI / GPU trade is starting again! Stack tokenised GPU compute $RENDER
English
4
15
93
5.7K
Dr.sats retweeted
gonzzzalo@gonzzzalo_·
I've been trying @rendernetwork with Redshift lately and I have to say it's by far the smoothest render farm experience I've ever had. It can be extremely cheap too if you're not in a rush, and even the priority render doesn't cost that much. I'm so, so impressed.
English
11
22
173
9.7K
Dr.sats retweeted
OTOY@OTOY·
Congrats to Claudio Miranda (ASC) on the release of F1! His cutting-edge cinematography takes viewers inside F1 cars at 200+ mph. Beginning with Top Gun: Maverick, Octane X has helped his previz workflow push the boundary of visual immersion. Learn more: fxguide.com/fxfeatured/app…
English
3
37
237
226.7K