Mike Dereviannykh

1.8K posts

@Mishok2000

Neural Rendering at Meta, soon Stability AI, PhD Student at KIT; ex: Eagle Dynamics, R&D Graphics Engineer.

Karlsruhe, Germany · Joined March 2015
1.2K Following · 892 Followers
Pinned Tweet
Mike Dereviannykh @Mishok2000 ·
We've received an "Honorable Mention" at Eurographics 2025 in London for our work on "Neural Two-Level Monte Carlo Real-Time Rendering"! 🥳 Huge thanks to everyone who supported me along the way, and to the EG chairs, committee, and organizers for this recognition
[attached image]
Mike Dereviannykh@Mishok2000

🚨 CG Paper, EG 2025 As scenes & lighting in games grow in complexity, we introduce Neural Incident Radiance Cache (NIRC) – a real-time, online-trainable cache that: 🚄 Costs just ~1ms/neural-sample for 1080p ☘️ Decreases MC variance 🥳 Saves on bounces youtube.com/watch?v=Y791Sl…

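Under the hood, the two-level idea amounts to using the cache as a control variate: integrate the cached approximation analytically (or cheaply) and only Monte Carlo the residual. A toy 1D sketch, with my own stand-in functions rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # "true" incident radiance along a toy 1D domain [0, 1]
    return np.sin(np.pi * x) ** 2 + 0.1

def g(x):
    # cached (e.g. neurally learned) approximation of f
    return np.sin(np.pi * x) ** 2

G = 0.5  # analytic integral of g over [0, 1]

def plain_mc(n):
    # first-level estimator: sample f directly
    x = rng.random(n)
    return f(x).mean()

def two_level_mc(n):
    # second-level estimator: cache integral + MC of the residual f - g,
    # whose variance shrinks as g approaches f
    x = rng.random(n)
    return G + (f(x) - g(x)).mean()

print(plain_mc(10_000), two_level_mc(10_000))
```

Here the residual happens to be constant, so the two-level estimator is exact at any sample count; in the paper's setting the cache is a neural network trained online, and the residual variance depends on how well it fits the incident radiance.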
Mike Dereviannykh retweeted
Jonathan Granskog @jongranskog ·
Fair, there is a lot of research "re-discovering" graphics concepts, but also so much of ML research nowadays is about connecting ideas together in a way that makes sense, rather than inventing completely new algorithms.
Sebastian Aaltonen@SebAaltonen

It's funny that AI engineers are re-discovering all the graphics programming tricks. I remember reading the NeRF and Gaussian Splatting papers a long time ago and finding a lot of inefficiencies that we had fixed years earlier. They've since been optimizing the techniques.

Mike Dereviannykh retweeted
Gordon Wetzstein @GordonWetzstein ·
High-resolution image and video generation is hitting a wall because attention in DiTs scales quadratically with token count. But does every pixel need to be in full resolution? Introducing Foveated Diffusion: a new approach for efficient diffusion-based generation that allocates compute where it matters most. 1/7🧵
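For intuition on why this helps (back-of-envelope, my numbers rather than the thread's): self-attention over N tokens costs on the order of N²·d, so keeping full resolution only in a fovea and sampling the periphery coarsely cuts cost by the square of the token reduction:

```python
def attn_flops(tokens: int, dim: int = 1024) -> int:
    # score matrix (N x N) plus weighted value sum: ~2 * N^2 * d multiply-adds
    return 2 * tokens * tokens * dim

full = 128 * 128            # uniform full-res latent: 16384 tokens
fovea = 32 * 32             # 32x32 region kept at full resolution
periphery = 32 * 32         # rest of the frame at 4x coarser sampling

uniform = attn_flops(full)
foveated = attn_flops(fovea + periphery)
print(f"attention cost ratio: {uniform / foveated:.0f}x")
```

An 8x reduction in token count gives a 64x reduction in attention FLOPs in this toy accounting, which is the core argument for allocating resolution only where it matters.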
Mike Dereviannykh retweeted
Tim Sweeney @TimSweeneyEpic ·
In the coming days, employers will see a stream of resumes of once-in-a-lifetime quality folks. An important thing to understand is that Epic never lowered our hiring standards as we grew, and the layoff wasn't a performance-based "rightsizing" as companies call it nowadays. It's a sound bet that anyone with Epic Games on their resume is in the top few percent of their discipline.
Dean Takahashi@deantak

Sad news for Epic Games. Hope the efforts to turn things around can work. gamesbeat.com/epic-games-lay…

Mike Dereviannykh @Mishok2000 ·
@oliemack I totally share your vision. I think it's totally OK to bring in some inductive biases via a few small ray-traced lighting references, or any other world-space representation, to make it work better in the short term. But in the long term that may be outweighed by having better models
Oliver Mackenzie @oliemack ·
@Mishok2000 In the end the solution is perhaps dumber: the model just gets better at inferring lighting and the various unnecessary lighting systems wither away, while the inputs given to the model matter less and less over time.
Mike Dereviannykh @Mishok2000 ·
I spent some time looking into DLSS 5 rendering from the pure tech side - so ignoring the artistic debate for a second, I think it may already be showing early signs of something much more interesting: - Implicit Inverse Rendering
[attached image]
Mike Dereviannykh retweeted
Hao Zhang @haozhangml ·
Hot take: the metaverse was never a bad idea. It was just too early, and built with the wrong stack.

The breakthrough version of the metaverse won't be a manually designed world with heavy headsets. It should be a world generated on demand and responsive to what the user wants to see, do, create, and feel. Imagine environments that reshape instantly from language, mood, memory, collaboration, or story. That is a totally different product experience from traditional rendering.

So real-time video diffusion is one of the key unlocks for that future. Once generation is faster than consumption, we are no longer exploring a static world; you are basically live- and vibe-directing it.

That's a big reason why I'm spending so much time on real-time diffusion / FastVideo right now. Super bullish on this direction
Hedgeye@Hedgeye

JUST IN: Meta announces they'll be shutting down the Metaverse

Mike Dereviannykh @Mishok2000 ·
@JTCoz @Evenios I'm just guessing like everyone else; nothing more than a 2nd model being present could be the case. It may be slow from a performance perspective
JamTart @JTCoz ·
@Mishok2000 @Evenios are you absolutely certain the faces are the result of separate training data? my assumption was similar: single model isolating materials and lighting elements and using some PT truth to fill in details. why couldn't faces be on same basis? they're still materials that are lit!
Mike Dereviannykh @Mishok2000 ·
@whereisaaron @Evenios I **guess** it's one model... but the dataset with human faces wasn't captured in the same way as GT renders of interior/exterior scenes
AaronTheDiver @whereisaaron ·
@Mishok2000 @Evenios I thought I read that DLSS5 was using a sub-model/agent specifically for faces? Which is why they get treatments that are sometimes jarringly different from the scene. Be nice to see that face model turned off, just treat faces & hair same way as other inferred scene materials.
Mike Dereviannykh @Mishok2000 ·
@KatraApplesauce That's why, when I was reading about geometric control, I was like: "sure, it makes total sense even via pure RGB" As modern models are capable of predicting geometry themselves... and sometimes may do it even better than the approximated render signal x.com/AutismCapital/…
Autism Capital 🧩@AutismCapital

🚨NEW: Jensen Huang on DLSS 5 Haters "They're completely wrong. DLSS 5 fuses controllability of geometry and texture with generative AI, which you can fine tune to make it your artistic style, it's up to you. It's conditioned by the truth of the game."

Mike Dereviannykh @Mishok2000 ·
@KatraApplesauce Exactly! It's the same as saying "VGGT is just a multipurpose image filter", because it doesn't require albedo, roughness, depth, or other modalities as input, only RGB maps. Thaaaat's the key point: it can support a broader set of scenes, unique materials, VFX
[attached image]
Geddings @Evenios ·
@Mishok2000 i think the longer demos that people might bother to look at would show it's NOT ai slop.
Mike Dereviannykh @Mishok2000 ·
@compusemble Now it starts to sound like an interesting research direction for a new SIGGRAPH paper 🤔
Mike Dereviannykh retweeted
Compusemble @compusemble ·
NVIDIA's DiffusionRenderer does this too. A dual inverse & forward neural rendering framework where the inverse renderer uses video diffusion model priors to estimate G-buffers from 2D images/videos & the forward renderer generates photorealistic images from those estimates 🤔
[attached image]
Mike Dereviannykh@Mishok2000

I spent some time looking into DLSS 5 rendering from the pure tech side - so ignoring the artistic debate for a second, I think it may already be showing early signs of something much more interesting: - Implicit Inverse Rendering

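The dual framework can be caricatured as two functions: an inverse renderer that estimates G-buffer channels from RGB, and a forward renderer that shades from those estimates. A hypothetical interface sketch, where the names, channels, and the Lambertian shading are my simplifications rather than NVIDIA's code:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GBuffer:
    albedo: np.ndarray     # (H, W, 3) base color
    normal: np.ndarray     # (H, W, 3) unit surface normals
    roughness: np.ndarray  # (H, W, 1)

def inverse_render(rgb: np.ndarray) -> GBuffer:
    # stand-in for the diffusion-prior inverse renderer:
    # here we just fake flat, camera-facing normals
    h, w, _ = rgb.shape
    normal = np.zeros((h, w, 3))
    normal[..., 2] = 1.0
    return GBuffer(rgb.copy(), normal, np.full((h, w, 1), 0.5))

def forward_render(g: GBuffer, light_dir: np.ndarray) -> np.ndarray:
    # stand-in forward pass: simple Lambertian shading of the estimates
    ndotl = np.clip((g.normal * light_dir).sum(-1, keepdims=True), 0.0, 1.0)
    return g.albedo * ndotl

frame = np.random.default_rng(1).random((4, 4, 3))
relit = forward_render(inverse_render(frame), np.array([0.0, 0.0, 1.0]))
```

The point of the round trip is that once the scene is factored into editable G-buffer estimates, relighting or material swaps become forward passes over modified buffers instead of re-captures.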
Mike Dereviannykh @Mishok2000 ·
TL;DR: DLSS 5.0 has changed my feelings about the need to include the whole "world description" in the context window
Mike Dereviannykh @Mishok2000 ·
@diegopjaccottet I guess we'll definitely get there with having more and more control, especially over tonemapping, color shifting, and text prompts as well. At least I don't see any constraints from the tech side, if we can bake the final style into LoRA weights while preserving efficient performance
Diego P. Jaccottet @diegopjaccottet ·
@Mishok2000 DLSS 5 is very impressive, but Nvidia should have given developers the option to create LoRAs or fine-tune the model to maintain their artistic vision for their games.
Nathan Benaich @nathanbenaich ·
News! @airstreet has raised $232,323,232 for Fund III to back AI-first companies from the earliest stages in the US and Europe. Now the largest solo GP venture firm in Europe. Our third epoch begins today. Join us!
[attached image]
Mike Dereviannykh @Mishok2000 ·
I used a few figures from "Shape, Light, and Material Decomposition from Images using Monte Carlo Rendering and Denoising" by Jon Hasselgren, Nikolai Hofmann, and Jacob Munkberg. Please don't hesitate to check it out: nvlabs.github.io/nvdiffrecmc/
Mike Dereviannykh @Mishok2000 ·
If that's roughly true, then the big future question is: which rendering signals are truly essential for the model, and which ones are just expensive legacy computation?
- Specular? Sure
- Coarse diffuse GI? Maybe
- Local AO, reflections, diffuse GI? Don't think so
- Geometry?