
KCD Modding Community
@KCDModding
756 posts
#KingdomComeDeliverance modding community (UNOFFICIAL) https://t.co/N43e8LFdRJ





Baldur's Gate 3 and Clair Obscur: Expedition 33 may make Kingdom Come: Deliverance 2 look like the "small kids in the playground," but star hopes it'll "live in people's memory" anyway gamesradar.com/games/rpg/bald…


Join us for the global reveal of #METRO2039
16 April 2026 | 10AM PDT | 7PM CEST | 8PM EET
METRO2039.com









Vitaliy Naymushin (@rawkstarv), the character artist behind Jonesy, Ramirez, Penny, and Kyle, was fired today. artstation.com/rawkstar









After this whole debate about DLSS 5, I came to the conclusion that most of the people talking about it are completely unaware of what they don't know... they're at the peak of ignorance and don't even grasp how little they understand. They just heard "generative AI" and, like Pavlov's dog, they start drooling, thinking it's the same shit as unethical slop image generators... for the love of Christ, go and educate yourself before raging on the internet for no reason.

DLSS 5 is not a prompt-based generator... it's not creating stuff based on someone else's images and hallucinating results. It uses the information from the raster to build up a final rendered frame with the same information but with better lighting and shading.

I'll even give you an example of how much of an impact better shading and lighting has. This is a character I worked on not long ago. On the left you have a raster render with some bad shaders. On the right you have a render with raytracing on and a much better shader for both hair and skin. They don't even look like the same person... do they? This is what DLSS 5 is doing: getting a result like the one on the right (tbh a lot better) at a smaller cost than actually rendering it. Still the same geo, same textures, same light sources.

Some of you will go and say the one on the left is better and it's the artist's vision. It's not... it's just the artist's limitation due to shading and lighting constraints. Every single artist out there would love to get the result on the right in real time.
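To make the "same inputs, better shading" point concrete, here is a minimal sketch of the data flow the post describes. All names here are hypothetical (this is not Nvidia's actual API): the key property is that the neural pass only ever sees per-pixel buffers the rasterizer already produced for this exact frame, never a text prompt or an external image dataset.

```python
import numpy as np

# Hypothetical stand-in for a DLSS-style neural shading pass. The only
# inputs are buffers the raster pass already produced for THIS frame.

def rasterize(width, height):
    """Toy G-buffer: albedo, normals, depth (what a raster pass outputs)."""
    rng = np.random.default_rng(0)
    return {
        "albedo":  rng.random((height, width, 3)),  # surface color from textures
        "normals": rng.random((height, width, 3)),  # surface orientation
        "depth":   rng.random((height, width, 1)),  # distance from camera
    }

def neural_shading_pass(gbuffer, light_dir):
    """Stand-in for the learned reconstruction: same geometry, same
    textures, same light sources in; a better-shaded frame out."""
    n_dot_l = (gbuffer["normals"] * light_dir).sum(axis=-1, keepdims=True)
    # A real network would infer soft shadows, skin/hair scattering, etc.;
    # a clamped Lambert term here just demonstrates the data flow.
    return gbuffer["albedo"] * np.clip(n_dot_l, 0.0, 1.0)

gbuffer = rasterize(320, 180)
frame = neural_shading_pass(gbuffer, light_dir=np.array([0.3, 0.8, 0.5]))
print(frame.shape)  # (180, 320, 3): a reshaded frame, not a generated image
```

Contrast this with a prompt-based generator, whose input would be text plus weights distilled from other people's images; here nothing leaves or enters the frame except the scene's own raster data.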




The big DLSS 5 machine learning debate and why we should have waited before posting our first round of coverage - today's video: youtu.be/5dTTfjBAFzc




What I really want from DLSS 5:





I paint and design characters and environments. No genAI, just my hands and brain 🧑‍🎨





I imagine developers will have quite a difficult time implementing "neural rendering" if it becomes the norm. Take this, for example: DLSS 5 currently uses generative AI at the geometry level, which yields the results shown in the picture. When AMD comes up with its own "neural rendering" solution (they're already working on one) for PC and consoles too, its generative AI won't look the same as Nvidia's. Each developer will have to work with a different set of upscalers that produce different results, so the same game won't look the same everywhere, especially in small facial features. Grace, in this case, will look totally different depending on what you're playing the game on: next-gen Xbox, PS6, an AMD GPU, an Intel GPU, or an Nvidia GPU.
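This is essentially the burden games already carry for DLSS/FSR/XeSS upscaling, extended to reconstruction that touches geometry. A rough sketch of the abstraction layer a developer would need, with entirely hypothetical backend names: the interface compiles everywhere, but nothing in it can guarantee identical pixels, which is the divergence the post is worried about.

```python
from abc import ABC, abstractmethod

# Hypothetical vendor-agnostic facade, mirroring how engines already wrap
# DLSS/FSR/XeSS behind a single upscaler interface.

class NeuralRenderer(ABC):
    @abstractmethod
    def reconstruct(self, gbuffer: dict) -> list:
        """Same scene data in; vendor-specific pixels out."""

class NvidiaBackend(NeuralRenderer):
    def reconstruct(self, gbuffer):
        # Stand-in for Nvidia-trained weights.
        return [v * 1.00 for v in gbuffer["albedo"]]

class AmdBackend(NeuralRenderer):
    def reconstruct(self, gbuffer):
        # Stand-in for AMD-trained weights: same inputs, different pixels.
        return [v * 0.97 for v in gbuffer["albedo"]]

BACKENDS = {"nvidia": NvidiaBackend, "amd": AmdBackend}

def pick_backend(gpu_vendor: str) -> NeuralRenderer:
    return BACKENDS[gpu_vendor]()

# Same G-buffer, two vendors: the code path is shared, but the output
# differs, so facial features need visual QA per platform, not just per API.
gbuffer = {"albedo": [0.2, 0.5, 0.9]}
for vendor in ("nvidia", "amd"):
    print(vendor, pick_backend(vendor).reconstruct(gbuffer))
```

The per-API plumbing is the easy part; the new cost is that every backend's learned output has to be eyeballed and tuned per platform, since the abstraction can hide the function calls but not the trained weights behind them.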





