Zant
@Zants
108K posts
Finding myself…. ex @PlayStation Environment Artist I stream sometimes @ https://t.co/Pvlf83zmuh
Joined September 2011
1.6K Following · 12.2K Followers

Pinned Tweet
Zant @Zants:
The Legend's Guild - RuneScape in Unreal Engine 5 FULL VIDEO: youtu.be/VZ5JGPw2SFU
Zant @Zants:
That looks even worse than frame gen, which already looks like you applied Twixtor to your game
Logan @Mcgillligan:
@Ashtronova They are sooo fun. @Zants got James and me into it last summer and we have made so many fun things
ashton 💫 @Ashtronova:
i got a 3d printer and the first thing i printed was an RPG
[image]
Zant @Zants:
@oliver_drk The base premise is neat I guess, I could see how something like this could help a lot with perf, but this implementation just... sucks.
Zant @Zants:
@oliver_drk Hi, I have eyes too, and I see that it nuked the cast shadows and replaced the lighting with more of an indirect / purely bounced light look, as if it's an overcast day in that world, but it's actually not.
Oliver Darko @oliver_drk:
I have two functioning eyes. Sorry, I won't pretend this DLSS feature isn't impressive from a technical standpoint.
Zant @Zants:
@venompilled @Shpeshal_Nick Just trying to give them the benefit of the doubt. I genuinely love DF; their content is nothing short of incredible. They made a very bad video here, but this one event shouldn't define them.
🕯️🎴9 of Thedras🪓🍂:
@Zants @Shpeshal_Nick I'm not sure anyone who doesn't immediately see the technical and artistic flaws should be in positions like this. As a matter of fact, it feels like we have barely anyone who does this kind of thing that actually voraciously peeks at things like 1% lows and minor details.
Shpeshal Nick @Shpeshal_Nick:
I really hope I'm misunderstanding Rich here, because if he's saying "We were excited and liked it, but seeing how everyone hates it, I wish we waited so we could have hated it too", that's uh...not a very good look and kinda tarnishes almost everything else they've done previously?

Quoting Digital Foundry @digitalfoundry:
The big DLSS 5 machine learning debate and why we should have waited before posting our first round of coverage - today's video: youtu.be/5dTTfjBAFzc
Zant @Zants:
@Shpeshal_Nick Hell, even my immediate reaction to seeing this stuff was "whoa, that's potentially really cool" until I sat and thought about it for a bit.
Zant @Zants:
@Shpeshal_Nick I could be wrong, but I think the implication here is that in the limited time from the showcase to making the video, they didn't fully understand the big-picture implications of this, and now they do. Community sentiment and online discussion made that painfully clear.
Ruchir @heyruchir:
Nvidia, AMD, and Intel will release base models that will have an extremely high level of customization. You'll be able to adjust sliders to get your desired output. That in itself isn't *that* difficult. I can personally see a path towards that, but it'll just take a lot of effort. Also, I don't think human artists will go away anytime soon.
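A rough sketch of what "adjust sliders" could mean mechanically, assuming a PyTorch-style model: user-facing style parameters become a conditioning vector that scales and shifts the network's feature maps (FiLM-style modulation). The slider names and shapes below are invented for illustration, not anything Nvidia, AMD, or Intel has described.

```python
import torch
import torch.nn as nn

class SliderConditionedBlock(nn.Module):
    """FiLM-style block: slider values scale and shift feature maps."""
    def __init__(self, channels: int = 64, n_sliders: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        # Map sliders (e.g. sharpness, bounce-light strength, saturation;
        # hypothetical names) to a per-channel scale and shift.
        self.film = nn.Linear(n_sliders, channels * 2)

    def forward(self, feats: torch.Tensor, sliders: torch.Tensor) -> torch.Tensor:
        scale, shift = self.film(sliders).chunk(2, dim=-1)
        feats = self.conv(feats)
        # Broadcast (N, C) over the spatial dimensions of (N, C, H, W).
        return feats * (1 + scale[..., None, None]) + shift[..., None, None]

# Example: three sliders in [0, 1] chosen by the user.
feats   = torch.randn(1, 64, 32, 32)
sliders = torch.tensor([[0.8, 0.2, 0.5]])
out = SliderConditionedBlock()(feats, sliders)  # same shape as feats
```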
Oliver Mackenzie @oliemack:
Here's some of the DLSS 5 material we saw in the demos but didn't get a chance to film. Here I think you can see the strengths of DLSS 5 - reflections become much more attractive. Starfield doesn't have great lighting to begin with, so the differences can be profound.
Zant @Zants:
@heyruchir @oliemack @renbry Obv not 2D games, but there's an entire gradient of games out there between stylized and realistic, incorporating elements of both in different ratios. With the mountain of additional perf you can squeeze out with something like this, why wouldn't these, and Zelda and Mario, use it?
Zant @Zants:
@heyruchir @oliemack @renbry I guess? I really don't see how game development, even for stylized games, doesn't start to depend on this in the future for how much extra perf it would allow you to squeeze out. Idk. I really appreciate your input btw! Fun thread
Zant @Zants:
@heyruchir @oliemack @renbry It's an interesting future to ponder for sure, if also a scary one. I seriously do wonder if anyone would be willing to put funding towards that kind of training. I'd imagine certain developers would be willing to, but those ones likely won't have human artists anymore.
Ruchir @heyruchir:
Yes, style inconsistencies will be an issue, at least for the first few iterations. It will be solved w/ better models + game engine adjustments. Screenspace is enough, at least for lighting. Issues like geometry outside screenspace have already been solved by game engines. The engines just need to produce a decent final image that respects shadows + lighting. They can skip AO, screenspace shadows, + a lot of other techniques that fake lighting.

As for how this costs $100M+: well, I've done similar models, taking an input image and adjusting it for non-game-related stuff (medical imaging + other things). It's just the time and effort required to curate the data + do all the experiments. It adds up quickly.
Zant @Zants:
And seriously, if this is the direction we're moving in…. I absolutely hope this can be used with games that are not of a realistic style. Game development will eventually depend on this, so if we're only able to use it for realistic games, does that mean the future will only have realistic games? That would fucking suck. If that's the limit, and the large-model approach is the only way to make this work, then maybe it's just not worth it at all.
Ruchir @heyruchir:
Screen space data is already enough. More inputs don't always result in better results, especially for visual outputs where noise can be obvious.

In regards to the larger model: it's just better. You can't expect studios to shell out millions of dollars to train/post-train a model, especially when a 10-20% larger model that's trained on 5-10x more data will work on 99% of games. You don't expect each game to train its own upscaler. Also, constrained training inputs would have the opposite effect: you'll see more noise, especially when moving the character. Style is a concern, sure, but you're not going to use this for anything other than photorealistic games.

Also, this model probably cost upwards of ~$100M in terms of training, data curation, staff cost, + research. No game studio can afford that.
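A back-of-envelope illustration of why "10-20% larger model, 5-10x more data" lands far beyond per-studio budgets, using the common "training compute ≈ 6 × parameters × training samples" rule of thumb. That estimate comes from LLM scaling work and is only indicative for image models; all numbers here are normalized and hypothetical.

```python
def compute(params: float, data: float) -> float:
    """Rule-of-thumb training FLOPs: ~6 * parameters * training samples."""
    return 6 * params * data

# Normalized comparison: bigger shared model vs. the baseline.
params_base,   data_base   = 1.0, 1.0
params_bigger, data_bigger = 1.15, 7.5   # "10-20% larger", "5-10x more data"

ratio = compute(params_bigger, data_bigger) / compute(params_base, data_base)
print(f"~{ratio:.1f}x the training compute")  # ~8.6x, dominated by the data term
```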
Zant @Zants:
I mean, yeah, that can become an issue if we're going to continue to approach this as a large model, trained on everything with infinite variation, every style, etc. The fact that seemingly every developer will use the same huge model is the critical flaw here, not the desire to tap more into G-buffer data.

I think this needs to work more as smaller constrained models trained on each individual developer's body of work, so it has a full understanding of that style and that style alone. The fact that this thing is 'one size fits all' is stupid. Constrained training inputs could be very powerful here. Do that + tap more into the materials etc. like stated, and I'd be much more on board with this.

It's clear that the current approach has its pitfalls... It's only reading screen space, so that already creates some weirdness, and it can't deal with objects that are offscreen but meant to appear in reflections, etc. I can't understand why this needs to be a huge model with a massive set of training data from pretty much all games, being used on all games..... that seems incredibly silly to me, but maybe someone here can help me understand why that could be a good thing? I'm not seeing it.
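A hedged sketch of what a "smaller constrained model" per developer could look like, assuming a PyTorch-style setup: start from a shared pre-trained upscaler and fine-tune only a small style-specific head on one studio's own (low-res frame, ground-truth render) pairs, keeping the generic backbone frozen. The architecture and names are stand-ins, not any vendor's actual pipeline.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

class Upscaler(nn.Module):
    """Toy 2x upscaler: a shared backbone plus a small style head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(   # stand-in for the big shared model
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Sequential(       # small, per-studio part
            nn.Conv2d(64, 3 * 4, 3, padding=1),
            nn.PixelShuffle(2),          # rearranges channels into 2x resolution
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

def finetune_on_studio(model: Upscaler, pairs: DataLoader, epochs: int = 5):
    """Constrain the model to one studio's style by training only the head."""
    for p in model.backbone.parameters():
        p.requires_grad = False          # keep the generic prior fixed
    opt = torch.optim.Adam(model.head.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()                # simple reconstruction loss
    for _ in range(epochs):
        for lowres, target in pairs:     # studio-specific training pairs
            opt.zero_grad()
            loss = loss_fn(model(lowres), target)
            loss.backward()
            opt.step()
    return model
```

Freezing the backbone keeps the per-studio cost small, which is exactly where this proposal diverges from the "$100M shared model" position above.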
Ruchir @heyruchir:
@Zants @oliemack @renbry It increases model complexity. If you, say, add G-buffer data, that will make the model use much more memory + be slower. You could combine the G-buffer data into a latent representation, but that might give worse results.
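A minimal sketch of the trade-off Ruchir describes, under assumptions: the G-buffer channels (normals, depth, albedo, roughness; a hypothetical 8-channel packing) are compressed by a small strided encoder into a low-resolution latent that conditions the upscaler, instead of being concatenated at full resolution. Nothing here reflects actual DLSS internals.

```python
import torch
import torch.nn as nn

class GBufferEncoder(nn.Module):
    """Compress per-pixel G-buffer channels into a compact latent.

    Feeding the raw G-buffer (normals 3 + depth 1 + albedo 3 + roughness 1
    = 8 channels) alongside the color frame multiplies memory traffic; a
    strided conv encoder trades some fidelity for a much smaller tensor,
    which is the "latent representation" compromise described above.
    """
    def __init__(self, in_channels: int = 8, latent_channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, latent_channels, 3, stride=2, padding=1),
        )

    def forward(self, gbuffer: torch.Tensor) -> torch.Tensor:
        # (N, 8, H, W) -> (N, 16, H/4, W/4): 4x smaller per axis, so the
        # upscaler pays far less memory/bandwidth for the extra inputs.
        return self.net(gbuffer)

# Hypothetical usage: condition the upscaler on the latent rather than
# concatenating the full-resolution G-buffer to the color input.
frame   = torch.randn(1, 3, 1080, 1920)   # rendered color frame
gbuffer = torch.randn(1, 8, 1080, 1920)   # packed G-buffer channels
latent  = GBufferEncoder()(gbuffer)       # -> (1, 16, 270, 480)
```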
Zant @Zants:
@millennialak @sircalebhammer I realized that after tweeting that reply. The flowers on the driveway that cover the corner of image 1 fooled me
Caleb Hammer @sircalebhammer:
I understand that new neighborhoods will not be mature enough to have wonderful canopy trees, duh. But why are they (basically) not planting any trees at all? I think I see four or five baby trees? A new neighborhood rarely looks good, but ones like this will NEVER look good.

Quoting bitfloorsghost @bitfloorsghost:
we ruined such a good thing
Zant @Zants:
@oliemack @renbry That seems like a huge miss to me. The data is there; why not use it? Very curious to see what this looks like on my own screen during proper gameplay
Oliver Mackenzie @oliemack:
@renbry It's kind of hard to believe, but perhaps the model is just that good at inferring material characteristics from images. After all, we can infer material characteristics from colour values pretty well.
Zant @Zants:
@BeefJerkyTrader @sircalebhammer And for what it's worth, in my experience this is not what I have seen where I live, but the fact that it's high desert here and we barely have any trees in town naturally probably pushes them to put more trees in
Zant @Zants:
@WalkerJade28332 @sircalebhammer OK, yeah, wow, they put flowers in the driveway that we had in the corner of the screen. That's why the top one looks like it has a much wider FOV to me.