Alex Goldring

394 posts

@SoftEngineer

Software developer. Game developer. Graphics researcher. Working primarily in 3d graphics on the web.

Joined August 2009
32 Following · 1.9K Followers
Pinned Tweet
Alex Goldring
Alex Goldring@SoftEngineer·
Sponza scene, in Shade (WebGPU graphics engine). Was working on mipmap texture filtering. It's one of those things that can be a time black hole, but when it improves image clarity I can't help but feel it's worth it. I guess my mipmap generation system in Shade is the most sophisticated mipmap system on the web. Perhaps a bit of overkill, but I'm super happy with the results.
Alex Goldring tweet media
3
2
50
3.8K
Ignacio Castaño
Ignacio Castaño@castano·
@SoftEngineer @HypeHypeInc @SebAaltonen Also, the GLTFLoader integration already exists:

import { registerSparkLoader } from "@ludicon/spark.js/three-gltf";
registerSparkLoader(loader, spark)

Register your spark instance and the loader performs transcoding automatically.
1
0
1
60
Meetem
Meetem@Meetem4·
@SoftEngineer Ah, I've got it. I did something similar for Unity, implementing the NICE filter on the GPU: constructing a slightly smaller kernel and taking max(currentMip - 3, 0) as the source.
1
0
0
38
Alex Goldring
Alex Goldring@SoftEngineer·
On the mipmap generation? Well, there are a few things here. In real-time scenarios you're typically forced to run small filter kernels; 5x5 is already large. A typical linear mipmap (the basic standard) uses a 2x2 kernel.

When we generate mipmaps offline we can always start from mip0 (the highest resolution) and run progressively larger kernels all the way down. You get the best results this way, but it's slow, and it doesn't scale for GPUs. If you instead run a recursive filter, that is, filter mip0 to mip1, then use the same filter on mip1 to get mip2 and so on, you get a pretty blurry result, because at each stage you're destroying some information.

So with all that, my mipmap generator cheats: instead of going all the way back to mip0, which is too expensive, it goes back a few mip levels, where you still have more detail and information. The kernel gets larger, but stays in an acceptable range. I also don't use the basic linear filter. This, however, creates a conundrum where you want to run a complex filter, but not recursively. So my mechanism is split into optional forward and backward passes. The backward pass is optional; it only kicks in when needed by the requested filter. To better illustrate the system, here's the processing spec for the Mitchell filter:

---
[TextureFilterType.Mitchell]: FilterProcessDescriptor.from({
    filter: FilterShaderType.Mitchell,
    skip_distance: 2,
    base_filter: FilterShaderType.MagicKernelBase,
})
---

Beyond this, the mipmap generator has a scheduler that runs a bit of the mipmap workload each frame, which prevents the main thread from locking up. There's a fair bit of complexity related to memory management and keeping draw-call overhead to a minimum.
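To make the forward/backward split concrete, here's a minimal sketch of how such pass planning might look, assuming a skip distance of 2 as in the Mitchell spec above. The function name and pass-descriptor shape are hypothetical, not Shade's actual API:

```javascript
// Hypothetical sketch of backtracking mipmap pass planning.
// "forward" = cheap base filter from the previous mip;
// "backward" = expensive filter re-run from skipDistance levels back,
// where more detail survives.
function planMipPasses(mipCount, skipDistance) {
  const passes = [];
  for (let target = 1; target < mipCount; target++) {
    // Forward pass always runs: it produces a provisional mip
    // from the immediately preceding level.
    passes.push({ kind: "forward", source: target - 1, target });
    // Backward pass only kicks in once going back more than one
    // level actually gains extra detail to filter from.
    const source = Math.max(target - skipDistance, 0);
    if (source < target - 1) {
      passes.push({ kind: "backward", source, target });
    }
  }
  return passes;
}

// For a 4-mip chain with skipDistance 2, this plans:
// forward 0->1, forward 1->2, backward 0->2, forward 2->3, backward 1->3
```

The point of the split is that the cheap forward chain keeps every level populated, while the backward passes selectively re-filter from higher-detail sources only where the requested filter demands it.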
1
0
1
100
Alex Goldring
Alex Goldring@SoftEngineer·
I can relate; it's the (not-)Dunning-Kruger effect. I got into 3d graphics very informally around 2013. I wasn't working in the graphics industry; it was just a one-off prototype in WebGL at work. Everything I know I got online from various lectures, papers and open-source projects. That is to say, knowledge didn't come easy for me and it didn't come fast. The complexity that goes into modern 3d graphics is awe-inspiring, so I can understand when people get mistaken or confused. I'd like to think that this is a learning process and a bit of hormones. With time that person will learn and improve, both in their knowledge and their behavior. And if not? I don't have a whole lot to be upset about; their own situation is sad enough 😅 As hard as graphics is, I'm still amazed at how open the industry is with respect to sharing.
Alex Goldring tweet media
Łukasz Bogaczyński@_woookie_

@SoftEngineer There's nothing wrong with a lack of knowledge; it's just that people have gotten more and more aggressive, ragebait-y and falsely confident in what they're saying online. Folks with decades of experience don't talk with that much confidence, and I've had enough of the ragebaiting, honestly.

1
1
20
2.3K
Alex Goldring
Alex Goldring@SoftEngineer·
Ha, I followed your progress quite closely, actually. I first saw your runtime texture conversion tooling mentioned in a REAC presentation by @HypeHypeInc's @SebAaltonen. I think your work is amazing in terms of results, and the packaging seems solid too, having had a brief look at the API.

Why I don't see myself using it is that the licensing is not super clear to me. I'm developing an engine, so it's not obvious what route I could go. You have a term: "The Software may not be distributed as part of a game engine, middleware, or developer toolset." Integrating the tool just to have the user need to obtain a separate license is too awkward, especially for smaller users.

Why I don't think Spark would appeal to a broader audience: texture size matters, a whole lot, for large projects pushing texture budgets. Web projects, at least so far, tend to be quite small; instead of hundreds of textures, we're talking about tens. Does compression help? Sure, it makes things run a little better, but most users aren't chasing the last few % of performance; they have much larger levers to pull.

Next is the runtime nature. If your engine generates or composes many textures internally, such as in a Virtual Texturing scenario, your tool will be incredibly helpful. Most use cases don't fall into that category, so offline tooling is just as good.

What I think might help would be some kind of turn-key solution where you, say, wrap a three.js GLTF loader with a decorator so that all textures get compressed as the GLTF loads, making it an automagic process. Most users will not want to care how the sausage is made. Most users don't even fully understand what a texture is or what the structure of a scene looks like at a low level.

Coming back to my engine: when I looked at Spark.js for the first time I thought "that would be an awesome feature for my engine", and I considered investing time to bring part of such functionality into the engine from scratch. That is to say, I think at an engineering level it's attractive, if the licensing friction can be solved. Anyway, I know I've been harsh, but I hope this was useful 🙇‍♀️ I truly love your work and have a great amount of respect for what you do.
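The "automagic" wrapping could be as thin as a decorator over any loader. A sketch under assumptions: the loader and result shapes below are stand-ins, not three.js's actual GLTFLoader or Spark's real API:

```javascript
// Illustrative decorator: wrap a loader so every texture in the
// loaded result is run through a compress() step transparently.
// The caller keeps calling load() exactly as before.
function withTextureCompression(loader, compress) {
  return {
    load(url, onLoad) {
      loader.load(url, (gltf) => {
        for (const texture of gltf.textures) {
          texture.data = compress(texture.data);
        }
        onLoad(gltf); // caller receives compressed textures, no extra steps
      });
    },
  };
}
```

The appeal is that the transcoding step stays invisible; the decorated loader is a drop-in replacement for the original.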
2
0
1
60
Ignacio Castaño
Ignacio Castaño@castano·
@SoftEngineer BTW, have you tried using spark.js? I would love to hear your feedback, and would love to see it used in other demos in addition to my own :)
1
0
0
121
Alex Goldring
Alex Goldring@SoftEngineer·
Sadly, no. I ran into basically the same problem and put together a cleaned-up version of Sponza for myself some 6-7 years ago. It's not a thorough job: I got decent textures, like you did, and tweaked material settings by hand in the GLTF using Notepad++ 😅 I thought I might publish the result for others to use, but I was sure nobody would care; story of my life, "surely this is trivial". I'm sure I'll be reaching for this version in the future, and I'm sure many people will benefit from your work. Thanks, Ignacio.
0
0
0
114
Alex Goldring
Alex Goldring@SoftEngineer·
@castano @jcostella @BartWronsk Thanks. I honestly don't think I would have found out about it anytime soon if it weren't for your article. @jcostella goes into a frightening amount of detail on the subject, but it makes perfect sense.
0
0
0
69
Ignacio Castaño
Ignacio Castaño@castano·
@SoftEngineer @jcostella @BartWronsk I was surprisingly pleased with the MKS results, but I've got to do some side-by-side comparisons, and also try different kernel sizes. Glad to hear it's working well for you!
1
0
1
159
Alex Goldring
Alex Goldring@SoftEngineer·
Spent some time on mipmap filters. Implemented the MKS and Wronski kernels. MKS is the 2021 variant of Magic Kernel Sharp by @jcostella. Wronski is @BartWronsk's 10-tap kernel from his 2021 writeup: bartwronski.com/2021/07/20/pro… MKS is recursive; Mitchell and Wronski are backtracking with a linear pre-pass and a mipmap step of 2. Thanks to @castano for indirectly pointing MKS out to me by sending me a link to his article: ludicon.com/castano/blog/2… I switched Shade to use MKS as the default for color textures. It used to be Mitchell; MKS is a bit softer, but it does well at preserving detail and, importantly, removes aliasing and ringing.

PS: @BartWronsk that was a fascinating read, and I'm sure I messed up your posted kernel in some way. I'm running this in a fragment shader from the texel center, so I adjusted the distances:

---
const SAMPLE_DISTANCES = array(-5.198, -3.151, -1.331, 1.331, 3.151, 5.198);
const SAMPLE_WEIGHTS = array(0.115, -0.304, 0.689, 0.689, -0.304, 0.115);
---
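One quick sanity check worth doing on a kernel like this (my own aside, not from the thread): a resampling kernel's weights should sum to 1, otherwise mip levels drift in brightness. The adjusted 6-tap weights above pass that check:

```javascript
// The 6-tap weights quoted above. The negative lobes sharpen,
// but the total must still be 1 so overall brightness is preserved
// when the kernel is applied.
const SAMPLE_WEIGHTS = [0.115, -0.304, 0.689, 0.689, -0.304, 0.115];
const sum = SAMPLE_WEIGHTS.reduce((acc, w) => acc + w, 0);
// sum is 1.0 up to floating-point rounding of the published weights
```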
Alex Goldring tweet media
1
1
13
1.2K
Alex Goldring
Alex Goldring@SoftEngineer·
@chena_cpp No, nothing specifically on shadows in that talk. You might want to check out Remedy's talk on Alan Wake 2 rendering. That one goes into a fair bit of detail on the subject.
0
0
1
35
chena
chena@chena_cpp·
@SoftEngineer Did any of the talks mention shadow culling? Thanks.
1
0
1
72
Alex Goldring
Alex Goldring@SoftEngineer·
Doom Eternal and its predecessor were ultra-optimized towards light counts. There are great presentations for both from SIGGRAPH, Eternal more so. Makes sense that the team would push it as far as it goes.

As for TDA, I guess part of it is the open-world(-ish) nature of the game. Unless your game is super-duper dark, it's actually really hard to make good use of local lighting in outdoor scenes. If you take a good-looking open-world game, they typically find ways to cover up the sky to create a good environment for varied lighting, such as a dense forest with occasional clearings and tall brush. Other games lean into making the landscape look beautiful, but that's not a lighting thing anymore.

If you have walls and a ceiling, that gives you so many options for lighting fixtures. As humans we know that the sun is really freaking bright, so any light sources outside can't be too bright or they will look fake. Sad times. So yeah, you have to tone down the main directional light source and make it overcast, pushing towards twilight and night as far as possible. Most games that have a day/night cycle cheat by making sunrise and sunset last several times longer, proportionally, than in real life, because lighting is more interesting during those times.
krowan@krowan_47

@SoftEngineer Damn, you buried my point. My guess is that, when I wrote it, I was comparing it to Eternal in my mind. There was light everywhere, making the details stand out. That's not happening in TDA; the details are kept in the dark, and maybe that's why I don't appreciate the game.

1
0
14
2.6K
Alex Goldring
Alex Goldring@SoftEngineer·
Often a game comes out with beautiful graphics that runs like a dream, but it fails because the game is just not fun. Often it's the other way around. Great performance and graphical fidelity certainly help a game succeed, but the two aspects are not entirely related.

A great graphics engineering team isn't going to have much of a say on gameplay. They can enable certain dreams of the gameplay design team, but it's an entirely different skillset. In fact, we graphics engineers tend to be a pretty boring bunch; scientists and artists are just different kinds of people. Sometimes a person is both, but rarely.

A recent example that comes to mind, with a pretty amazing graphics team and beautiful looks but no success due to poor gameplay, is "The Callisto Protocol". It's often cited in graphics research circles, but the Metacritic user score sits at a painful 68%. Conversely, if a game with mediocre graphics tech does well, the graphics team has a higher chance of being invited to speak at prestigious conferences such as SIGGRAPH.

This is typical survivorship bias: if you succeed overall, you're automatically great in every aspect, and if you fail overall, you're terrible in every aspect.
krowan@krowan_47

@AncientGameplay @SoftEngineer @idSoftware @DOOM And it took longer to develop than Eternal, even though the devs didn't have to bake the lighting, cubemaps and so on. I wish the gameplay had at least saved this game; sadly, that didn't happen.

3
1
29
3.7K
Alex Goldring
Alex Goldring@SoftEngineer·
Ha, I guess I'm just thick-skinned. I'd like to think that most of this type of criticism is at least in part due to a lack of knowledge, and I choose to engage with it as such. Some people are just out to get a reaction or spill their unhappiness onto someone else. The internet, am I right? 🥲 Either way, don't take it to heart; spring is here and the weather's getting better and better. The graphics scene is the best it's ever been, with so much exciting new tech coming out!
1
0
1
170
Łukasz Bogaczyński
Łukasz Bogaczyński@_woookie_·
@SoftEngineer A guy there was complaining that TDA runs worse than Eternal on the same HW. Yeah, genius, did you try comparing the size of the levels? The number of enemies? The destructible environment? Like I said: a waste of time.
1
0
3
175
Alex Goldring
Alex Goldring@SoftEngineer·
@_woookie_ I'd like to think that it's a good educational opportunity. Most gamers aren't graphics engineers, and they do wonder. One of my best friends often asks questions like these, less pointedly though 😅
0
0
3
106
Łukasz Bogaczyński
Łukasz Bogaczyński@_woookie_·
@SoftEngineer At this point I'm no longer engaging with ragebait tweets like the cited one. A waste of time.
3
0
2
293
Alex Goldring
Alex Goldring@SoftEngineer·
About "But we do lower the frequency of shading certainly where you don't notice": I did think the foveated bit was great. Definitely going to steal that idea if I ever get around to VRCS for my own engine. Do you happen to have a link to the engineering sample that was mentioned in the presentation, perchance?
1
0
2
62
Alex Goldring
Alex Goldring@SoftEngineer·
Hey Martin, loved it, actually. I did find your dislike of noise somewhat amusing 😅 I liked that you did depth (viz) at full resolution and then fed it as one of the inputs to compute the SRI. In terms of quality I think that's a really good call.

I find it a bit surprising how many people rail against "fake pixels", while @DOOM The Dark Ages received high praise for a "clean" image at high FPS even on lower-end devices. After watching your presentation I thought I'd chip in on behalf of fake pixels. As we move to larger screens and higher DPI, as well as higher and higher framerates, something has to give, and computing fewer pixels totally makes sense to me.

Thanks again for the talk; it was really easy to follow and had an excellent pace, not too fast and not too slow.
2
0
1
225
Alex Goldring
Alex Goldring@SoftEngineer·
Watched a brilliant presentation from the @idSoftware guys at GPC on @DOOM "The Dark Ages". 11 out of 12 pixels are fake. Say your screen is 4K (3840 x 2160); here's what the game does:

DLSS/FSR in "Balanced" mode upscales from 2227 x 1253 -> 3840 x 2160 (your full 4K).

Before DLSS gets the image, it is upscaled internally by DRS (Dynamic Resolution Scaling) from, say, 75% resolution: 1670 x 939 -> 2227 x 1253.

Before that, the image is rendered with VRCS (Variable Rate Shading) with ~43% of pixels actually rendered and the rest just filled in.

So in total we go from 674k pixels actually rendered by the engine, to 1,568k after VRS, to 2,790k after DRS, to 8,294k after DLSS. For a grand total of about 1 in 12 pixels actually being rendered. The rest? They are fake.

source: youtube.com/watch?v=mvCoqC…

As for me, I'm not against the fake pixels. Bring them on, I say, as long as it's done well.
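The arithmetic behind the "1 in 12" claim checks out; here is the same math spelled out, using the resolutions and the ~43% VRCS figure from the tweet:

```javascript
// Pixel counts at each stage of the pipeline described above.
const output = 3840 * 2160;      // final 4K frame        ≈ 8,294k
const drsOut = 2227 * 1253;      // DLSS/FSR input        ≈ 2,790k
const internal = 1670 * 939;     // 75% DRS render target ≈ 1,568k
const shaded = internal * 0.43;  // ~43% shaded via VRCS  ≈   674k
const ratio = output / shaded;   // ≈ 12.3, i.e. ~1 in 12 pixels is "real"
```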
YouTube video
9
42
323
34.4K
Alex Goldring
Alex Goldring@SoftEngineer·
@DeanoC I can agree with that. Some people draw a large distinction between AI techniques and everything else. I don't. To me they are the same in the end.
0
0
1
62
Deano Calver
Deano Calver@DeanoC·
@SoftEngineer VRS is just LOD. It's not creating any pixels, just choosing not to shade some of them. It's literally the same as rendering at low resolution and upscaling, except it's applied locally to small areas. It's no more fake than rendering any other pixel.
1
0
0
108
Alex Goldring
Alex Goldring@SoftEngineer·
"Variable Rate Shading is not fake pixels" is the argument. It's an interesting take, but if you believe that TAAU or DLSS produces fake pixels, you have to accept that VRS does too.

VRS works by splitting the screen into tiny squares (tiles) of, say, 2x2 pixels, and then assigning a pattern to each tile: we either shade 1 of the 4 pixels and fill in the rest, 2 of the 4, or all 4, where the technique is essentially disabled. There are other ways VRS can work; the term itself is just the approach, and implementations vary. These tiles are written into an image called the SRI (Shading Rate Image), typically r8uint.

We can argue whether filling in blank pixels that you don't render is "fake" or not. Personally I say yes, they are fake. But there's a more pressing issue: what do you use to build the SRI? You can't use this frame's pixels, because they haven't been produced yet; after all, you're building the SRI to tell you what to shade for this frame. So you typically rely on the last frame's data. But last frame is not this frame. This is the exact same problem that TAA faces. If you have disocclusion, or any significant motion in the frame, or something like a bright flash from a gun muzzle or an explosion, your last-frame data is going to be wrong. In a sense, because of this last-frame dependency, VRS becomes a temporal technique where we predict the present based on the past.

Let's say the frame is fully static. It's not a typical scenario, but let's try to see if VRS is only skipping pixels that would "look the same anyway". Let's focus on a tile with a 1-in-4 rate, where only 1 pixel is drawn out of 4. When we draw the next frame, we're going to be missing data. The last frame was drawn at that same 1-in-4 rate, so how can we judge whether the SRI we're creating from it is correct? We can't. If there is a tiny bit of gradient in that tile, we're going to lose it; it will turn into a uniformly colored square. So we have to rotate the pattern each frame to make sure the 1 pixel we actually draw falls into a different cell of the tile, in a sense doing what TAA does.

Next is the "look the same" issue. If you set up VRS to only skip identical-looking pixels, it will be practically useless. Very rarely in a realistically-lit scenario will you have truly identical pixels in a 2x2 square. So we have to use a threshold: not "same" but "similar", with some metric representing the threshold. If you want your VRS to be helpful, you need to tune this metric to discard at least 10% of the pixels on average. If you want your solution to be really good at improving performance, you'd be aiming for 30% and above. Notice that we're no longer talking about how similar the pixels look; we're saying that we'll accept sacrifices and force the definition of "similar" based on our performance target. The more useful the technique is, the more "fake" those extra pixels are.

Lastly, if you just go by the naive and intuitive logic of filling in skipped pixels, you will get blocky artifacts, even when pixels are similar, or perhaps especially on smooth backgrounds. So you have to cheat and interpolate across tile boundaries. This removes the artifacts, but you're now definitely cheating, because the neighboring tile was never guaranteed to be "similar".

And so: VRS is amazing, and I hope to one day implement it in Shade, but it's not magic and it's not "free compression".
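To illustrate the thresholding point, here's a toy SRI builder over last frame's luminance. Purely illustrative: the 2x2 tiling, the rate codes and the min/max spread metric are my assumptions, not any particular engine's implementation:

```javascript
// Toy Shading Rate Image builder: classify each 2x2 tile of last
// frame's luminance by how "similar" its pixels are. The threshold
// is the tuning knob: raise it and more tiles drop to coarse rates
// (better performance, more "fake" filled-in pixels).
const FULL_RATE = 0;    // shade all 4 pixels
const HALF_RATE = 1;    // shade 2 of 4
const QUARTER_RATE = 2; // shade 1 of 4

function buildSRI(luma, width, height, threshold) {
  const tilesX = width / 2;
  const tilesY = height / 2;
  const sri = new Uint8Array(tilesX * tilesY); // one byte per tile, like r8uint
  for (let ty = 0; ty < tilesY; ty++) {
    for (let tx = 0; tx < tilesX; tx++) {
      // Gather the 2x2 tile from last frame's luminance buffer.
      const i = ty * 2 * width + tx * 2;
      const tile = [luma[i], luma[i + 1], luma[i + width], luma[i + width + 1]];
      const spread = Math.max(...tile) - Math.min(...tile);
      // "Similar" is whatever the threshold says it is.
      sri[ty * tilesX + tx] =
        spread < threshold * 0.5 ? QUARTER_RATE
        : spread < threshold ? HALF_RATE
        : FULL_RATE;
    }
  }
  return sri;
}
```

Note that this classifies this frame's tiles from last frame's pixels, which is exactly the temporal dependency described above.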
James Stanard@JamesStanard

@SoftEngineer @idSoftware @DOOM For the variable rate shading, it’s not that the pixels are “fake”. It’s that they reuse shading computations for neighboring pixels that are going to look the same anyway. You can call the upscaling fake pixels, but it’s common practice.

4
0
44
6.5K