
“While I do see AI making it very simple for anyone to generate scenes to render without knowing the 3D tools that AI or an LLM might target (e.g. Blender), I am also seeing an equally deep and complex neural node-based workflow evolve from current rendering and compositing node graphs (in Blender, or the ITMF/ORBX core in Render). Even in pure comp work, ComfyUI AI node graphs can be just as intimidating as current 2D/3D node graphs, or more so. I think this complexity is needed under the hood right now for maximum control by the artists who push the envelope. Graphs can be wrapped in super-simple UX for push-button effects (as many AI generation/filter sites do), but the best of both worlds is when there is simplification around models and operators in the node system and much better integration with 3D scene rendering (where control is absolute and precise). This is where we hit a sweet spot with Octane and motion graphics designers. Many, like Beeple, skipped the node system entirely for materials, for example. I think the effector system and scattering tools (some of which we built into core) are ripe for augmentation, simplification, and accessibility in the future with neural nodes.”
@JulesUrbach 12.03.25
$RENDER
