Naty Hoffman
@renderwonk

5K posts

Real-Time Graphics Specialist (Retired)

Burlingame, CA · Joined February 2010
250 Following · 7.5K Followers
Naty Hoffman @renderwonk
@SebAaltonen Since the AO solution you need is long-distance / low-frequency only (backed up by SSAO for the high-frequency stuff), couldn’t you bake or compute it at an extremely low spatial frequency? Wouldn’t that take care of the storage / memory / compute resource issues?
Sebastian Aaltonen @SebAaltonen
Increasing the SSAO kernel size isn't going to work; you get very visible screen-space artifacts at the screen edges. Screen space isn't good enough for large-scale AO. We could use some sky-visibility approximation. Distance fields (sphere tracing) would require building a distance field of the whole scene, and that takes memory; in a web page, total RAM allocation is limited to 1.5GB. A web page also has to load fast, so we can't bake big distance fields to disk (server). You could use hardware ray tracing for it, but WebGPU doesn't have hardware ray tracing, and we are targeting mobile phones as well; performance would not be good enough. AC4: Black Flag had top-down AO: basically a pre-blurred shadowmap rendered straight from the top. It works well with big buildings casting long soft AO. But it's a hack. Not physically correct, and it doesn't work in all scenes.
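A minimal sketch of the top-down AO hack described above, assuming a pre-blurred orthographic height map of the scene. The names, the nearest-neighbor sampling, and the linear falloff are illustrative assumptions, not AC4's actual implementation:

```cpp
// Top-down AO sketch: sample a pre-blurred orthographic "roofline" height map
// rendered straight down, and darken points that sit well below it.
struct TopDownMap {
    int          width, height;
    float        worldMinX, worldMinZ, worldSize; // XZ extent covered by the map
    const float* blurredTopY;                     // pre-blurred height of topmost occluder per texel
};

float SampleTopY(const TopDownMap& m, float x, float z) {
    int u = (int)((x - m.worldMinX) / m.worldSize * m.width);
    int v = (int)((z - m.worldMinZ) / m.worldSize * m.height);
    u = u < 0 ? 0 : (u >= m.width  ? m.width  - 1 : u); // clamp to map bounds
    v = v < 0 ? 0 : (v >= m.height ? m.height - 1 : v);
    return m.blurredTopY[v * m.width + u];
}

// Returns 1 = fully open to the sky, ~0 = deep under large occluders.
float TopDownAO(const TopDownMap& m, float x, float y, float z, float falloff) {
    float depthBelowRoof = SampleTopY(m, x, z) - y; // how far below the blurred roofline
    if (depthBelowRoof <= 0.0f) return 1.0f;        // at or above everything: no occlusion
    float ao = 1.0f - depthBelowRoof / falloff;     // hypothetical linear falloff with depth
    return ao < 0.0f ? 0.0f : ao;
}
```

The pre-blur is what turns a hard top-down shadow into long, soft occlusion for cheap; the falloff constant is the main tuning knob, which is also why it is a hack rather than anything physically correct.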
Sebastian Aaltonen @SebAaltonen
The image otherwise looks pretty good, but the area marked with the red outline looks like shit. This is because we have no baked lighting, no realtime GI, and no approximation for large-scale AO / sky visibility. GTAO (screen space) only gives us high-frequency occlusion...
[image]
Naty Hoffman retweeted
I3D Symposium @I3DCONF
To encourage more submissions, we are extending the deadline for posters by one week, to 27 March 2026. Poster and demo notification is now extended to 3 April 2026. More information about poster submissions at: i3dsymposium.org/2026/cfp-poste… #I3D2026
Naty Hoffman @renderwonk
@SoftEngineer All CGI is fake. Some types of faking solve more problems than they create, and some are the other way around. The secret to rendering success is to know which is which - a classification that likely depends on the application.
Alex Goldring @SoftEngineer
"Variable Rate Shading is not fake pixels" is the argument. It's an interesting take, but if you believe that TAAU is fake pixel or DLSS is fake pixels - you have to accept that VRS is too. VRS works by splitting the screen into tiny squares(tiles) of, say 2x2 pixels, and then assigning a pattern to that tile, we either shade 1 of the 4 pixels and fill in the rest, 2 of the 4 or all 4 where essentially the technique is disabled. There are other ways VRS can work, and the term itself is just the approach - implementations vary. These tiles are written into an image called SRI (Shading Rate Image), typically r8uint. We can argue whether filling in blank pixels that you don't render is "fake" or not. Personally I say yes, they are fake. But there's a more pressing issue - what do you use to build SRI? You can't use this frame's pixels because they haven't been produced yet, after all - you're building the SRI to tell you what to shade for this frame. So, you typically rely on the last frame's data. But last frame is not this frame. This is the same exact problem that TAA faces. If you have disocclusion, or any significant motion in the frame, or something like a bright flash from a gun muzzle or an explosion - your last frame data is going to be wrong. In a sense - because of this last frame dependency, VRS becomes a temporal technique where we predict the present state based on the past. Let's say the frame is fully static, it's not a typical scenario, but let's try to see if VRS is only skipping pixels that would "look the same anyway". Let's focus on a 1x4 tile, where only 1 pixel is drawn out of 4. When we draw the next frame - we're going to be missing data. The last frame was drawn with 1x4, so how can we judge if the SRI we're creating from it is correct or not? - we can't. If there is a tiny bit of gradient in that tile - we're going to lose it, it will turn into a uniformly colored square. So - we have to rotate the pattern each frame to make sure that the 1 pixel we actually draw falls into a different cell of the tile, in the sense - doing what TAA does. Next is the "look the same" issue. If you se up VRS to only skip the "same"-looking pixels, it will be practically useless. Very rarely in a realistically-lit scenario will you have truly identical pixels in a 2x2 square. So - we have to use a threshold. Not "same" but "similar" with some metric representing the threshold. If you want your VRS to be helpful - you need to tune this metric to discard at least 10% of the pixels on average. If you want your solution to be really good at improving performance, you'd be aiming for 30% and above. Notice that we're not talking about how similar the pixels look, but we're saying that we'll accept sacrifices and force the "similar" definition based on our performance target. And so - more useful the technique will be, more "fake" those extra pixels will be. Lastly, if you just go by the naive and intuitive logic of filling in skipped pixels - you will get blocky artifacts. Even when pixels are similar, or perhaps - especially on smooth backgrounds. So you have to cheat and interpolate across tile boundaries. This removes the artifacts - but you're now definitely cheating because the neighboring tile was never guaranteed to be "similar". And so - VRS is amazing, and I hope to one day implement it in Shade, but it's not magic and it's not "free compression".
James Stanard @JamesStanard

@SoftEngineer @idSoftware @DOOM For the variable rate shading, it’s not that the pixels are “fake”. It’s that they reuse shading computations for neighboring pixels that are going to look the same anyway. You can call the upscaling fake pixels, but it’s common practice.
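A minimal sketch of the SRI-building step Alex describes, deriving a per-tile rate from the previous frame's luminance contrast. The 8x8 tile size, the 0/1/2 rate encoding, and the threshold values are illustrative assumptions, not any particular API's actual values:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Build an r8uint-style shading-rate image: one byte per 8x8-pixel tile.
// 0 = full rate (shade all pixels), 1 = half rate, 2 = quarter rate.
std::vector<uint8_t> BuildSRI(const std::vector<float>& lastFrameLuma,
                              int width, int height,
                              float halfRateThreshold,    // e.g. 0.02f
                              float quarterRateThreshold) // e.g. 0.005f
{
    const int tile   = 8;
    const int tilesX = (width  + tile - 1) / tile;
    const int tilesY = (height + tile - 1) / tile;
    std::vector<uint8_t> sri(tilesX * tilesY, 0);

    for (int ty = 0; ty < tilesY; ++ty)
    for (int tx = 0; tx < tilesX; ++tx) {
        // Max luma difference between horizontal neighbors in this tile:
        // a cheap proxy for "how much detail would coarse shading lose here".
        float maxDelta = 0.0f;
        for (int y = ty * tile; y < std::min((ty + 1) * tile, height); ++y)
        for (int x = tx * tile; x < std::min((tx + 1) * tile, width) - 1; ++x) {
            float d = std::fabs(lastFrameLuma[y * width + x + 1] -
                                lastFrameLuma[y * width + x]);
            maxDelta = std::max(maxDelta, d);
        }
        // Flat tiles get coarser rates. Note this reads *last* frame's data,
        // which is exactly the temporal prediction the thread is arguing about.
        uint8_t rate = 0;
        if (maxDelta < quarterRateThreshold)   rate = 2;
        else if (maxDelta < halfRateThreshold) rate = 1;
        sri[ty * tilesX + tx] = rate;
    }
    return sri;
}
```

The thresholds encode exactly the trade-off in the thread: raise them and you discard more pixels (more performance, "faker" pixels); lower them and VRS becomes safe but practically useless.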

Naty Hoffman retweeted
Pixel Cherry Ninja @PixelCNinja
Atari’s Star Wars (1983) is the definitive vector-graphics masterpiece. Sit in the cockpit of an X-Wing and take on the Death Star in a wireframe galaxy far, far away. With digitized voices from the film, it’s a high-speed, immersive arcade triumph.
Naty Hoffman retweeted
I3D Symposium @I3DCONF
We are pleased to announce that I3D 2026 will be hosted at Lucasfilm in May 2026. Conference dates and location: Lucasfilm, 1110 Gorgas Ave, San Francisco, California 94129, USA; Wednesday 13 to Friday 15 May 2026. Conference homepage: i3dsymposium.org/2026/ #I3D2026
[images]
Naty Hoffman @renderwonk
Congratulations to the winners of this year’s Academy Sci-Tech awards, especially @debfx, whose pioneering HDRI work is (finally!) being recognized, & the teams working on layered material systems (a topic dear to my heart) at Wētā FX, ILM, and Framestore! hollywoodreporter.com/movies/movie-n…
inigo quilez @iquilezles
@CompletedStreet Yes, San Francisco is objectively a very ugly city, certainly the ugliest I've lived in. But don't judge a book by its cover; it's beautiful in other ways that attract lots of people.
Mark R. Brown, AICP, CNU @CompletedStreet
"San Francisco is so beautiful." 90% of San Francisco:
[image]
Naty Hoffman retweeted
The Sting @TheStingisBack
The six-armed Kali in The Golden Voyage of Sinbad (1973) is one of stop-motion master Ray Harryhausen’s most complex creations. Staging a fight between six men and a six-armed creature was a logistical “nightmare” and took four months to animate. Movie-making at its magical best
Naty Hoffman retweeted
Björn Ottosson - making Island Architect
I've been working with Spherical Gaussians recently as a representation for irradiance in realtime raytracing. I ended up deriving quite a few new approximations relating to SGs and diffuse lighting, and arrived at something both cheap and accurate. Link below
[GIF]
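For reference, a sketch of the standard SG machinery the post builds on: the lobe definition, its approximate sphere integral, and a published fitted diffuse-irradiance approximation (Stephen Hill's fit, as it appears in Matt Pettineo's SG blog series). These are the known baselines, not the new approximations Björn derived:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// A spherical Gaussian lobe: G(v) = amplitude * exp(sharpness * (dot(v, axis) - 1)).
struct SG {
    Vec3  axis;      // unit-length lobe direction
    float sharpness; // larger = narrower lobe
    float amplitude; // peak value along the axis
};

float EvalSG(const SG& g, Vec3 v) {
    return g.amplitude * std::exp(g.sharpness * (dot(v, g.axis) - 1.0f));
}

// Approximate integral of the lobe over the sphere (drops a small exp term).
float ApproxSGIntegral(const SG& g) {
    return 2.0f * 3.14159265f * g.amplitude / g.sharpness;
}

// Fitted diffuse irradiance from an SG light about a surface normal.
float SGIrradianceFitted(const SG& light, Vec3 normal) {
    const float muDotN = dot(light.axis, normal);
    const float lambda = light.sharpness;

    const float c0 = 0.36f;
    const float c1 = 1.0f / (4.0f * c0);

    float eml  = std::exp(-lambda);
    float em2l = eml * eml;
    float rl   = 1.0f / lambda;

    float scale = 1.0f + 2.0f * em2l - rl;
    float bias  = (eml - em2l) * rl - em2l;

    float x  = std::sqrt(1.0f - scale);
    float x0 = c0 * muDotN;
    float x1 = c1 * x;
    float n  = x0 + x1;

    // Smooth transition between the lit and shadowed hemispheres.
    float y = (std::fabs(x0) <= x1) ? (n * n) / x
                                    : std::clamp(muDotN, 0.0f, 1.0f);

    return (scale * y + bias) * ApproxSGIntegral(light);
}
```

The appeal of SGs for irradiance is exactly what the post says: products and integrals of lobes have cheap closed or fitted forms, so diffuse lighting reduces to a handful of exp calls per lobe.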
Dan Olson @olson_dan
One of my favorite quotes. So moving.
[image]
Naty Hoffman retweeted
Brian Karis @BrianKaris
After doing numerous talks, I decided a written medium the viewer could work through in their own time could better convey low-level details. I also imagined it would be less work. I’m not sure this turned out better in either way, just different. I’m curious to hear your feedback.
Naty Hoffman @renderwonk
@jsnnsa I think for the gameplay and world logic traditional code (whether written by a human or an AI) will work much better than an NN model.
jacob @jsnnsa
One path I do expect to work: two-model architecture. One model updates game state as your step function. One projects that state to pixels (replaces existing render pipeline). ~deterministic logic, generative rendering. You get the visual fidelity of world models with the consistency games require and a much broader action space. But at that point you've rebuilt a game engine. Authored state, authored rules, AI-generated visuals. The authorship is still the hard part and it's not clear to me that a large transformer is better than just generating the code.
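A minimal sketch of that two-model split, with authored deterministic logic owning game state and the renderer as a swappable projection from state to pixels. All names here are illustrative:

```cpp
#include <cstdint>

struct GameState {
    uint64_t tick = 0;
    float playerX = 0.0f, playerY = 0.0f;
};

struct Input { float moveX = 0.0f, moveY = 0.0f; };

// Deterministic step: the same (state, input) always yields the same next
// state, which is what lockstep multiplayer and speedruns rely on.
GameState Step(const GameState& s, const Input& in, float dt) {
    GameState next = s;
    next.tick++;
    next.playerX += in.moveX * dt;
    next.playerY += in.moveY * dt;
    return next;
}

// The renderer only *reads* state. A rasterizer and a generative model are
// interchangeable behind this interface; neither can corrupt game logic.
struct Renderer {
    virtual ~Renderer() = default;
    virtual void Render(const GameState& s) = 0;
};

void RunFrame(GameState& s, const Input& in, Renderer& r, float dt) {
    s = Step(s, in, dt);  // authored, deterministic rules
    r.Render(s);          // generative or traditional projection to pixels
}
```

Determinism lives entirely in Step(), so consistency is preserved no matter what sits behind Render(); only the pixels are probabilistic.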
Naty Hoffman @renderwonk
@jsnnsa @SebAaltonen This is analogous to client/server gaming, where the winning model wasn’t the pure OnLive “server runs game & generates final pixels” model but a hybrid where the server does the game logic and the client does the rendering. Live NNs and NN-generated code may follow the same split.
Naty Hoffman @renderwonk
@jsnnsa @SebAaltonen The described system (especially the last sentence) is still “text prompt in, playable game out”, but the output is NN-generated gameplay code / data + an NN renderer instead of a pure NN world model. Not sure if that is the future, but it makes 100x more sense than “NN generates every frame”
Naty Hoffman retweeted
jacob @jsnnsa
I've written 250k+ lines of game engine code. Here's why Genie 3 isn't what people think it is:

World models are something genuinely new. A third category of media we don't have a name for yet. Near-term they're too slow and expensive for consumers. But for training robots? Incredible. Simulating a million kitchen scenarios is exactly what embodied AI needs.

Medium-term is where it gets interesting. Add sound generation, longer context, more control and you have something Netflix should be terrified of. Imagine exploring Westeros between seasons. Wandering the Stranger Things universe. That's a real product, and it's coming. But that's interactive storytelling.

Gamers play because it's fun to get better at something. Progression systems. Mechanical mastery. Nostalgia, where things work exactly how they always worked. They sink months into a single title. Years. And here's the thing: they mostly don't care about graphics or narrative. Every single one of these motivations sits at the exact weak spot of world models.

Games require determinism. Multiplayer needs every client to agree on physics, every frame. Speedrunners need frame-perfect consistency across thousands of attempts. Competitive play needs rules that don't drift. You can't have ranked when reality is probabilistic. World models are competing with passive media.

Long-term, they'll probably eat the renderer. Generating pixels instead of rasterizing triangles. But game logic, systems, authored constraints? That's a different problem entirely. And one perfectly suited to codegen agents.
Sebastian Aaltonen @SebAaltonen
@just_cromer @devsterxyz iPhone and iPad both have an 80% charge limit. All electric cars have it too. Some laptops have it, but unfortunately the MacBook doesn't have it without 3rd-party apps. Apple should definitely add it to the MacBook.
Devster☄️ @devsterxyz
My classmates really think using a laptop while it’s charging ruins the battery 😭 who is gonna tell them?
[image]