(Alex) Compositing Academy

682 posts

@CompAcademyVFX

Senior Compositor / Generalist / Filmmaker. Founder of https://t.co/zTSIZ1qrOk. Worked on Avengers / Spiderverse / Avatar / Star Wars. Follow on YouTube👇

Joined November 2021
772 Following · 4.5K Followers

Pinned Tweet
(Alex) Compositing Academy@CompAcademyVFX·
WIP. Directing a full sequence for this. VFX are going to be insane
4 · 8 · 327 · 31.1K

DiscussingFilm@DiscussingFilm·
The new trailer for Steven Spielberg's ‘DISCLOSURE DAY’ has been released. The film follows the disclosure to the world that aliens might be real. In theaters on June 12.
339 · 2.7K · 19.5K · 1.3M

Synth Potato🥔@SynthPotato·
The way Telltale’s The Walking Dead did reflections still blows my mind. Game devs' lives were tough before ray-traced reflections 😭
148 · 909 · 35.2K · 3.2M

GMUNK@gmunk·
Sharing our computer vision system design for TRON: Ares. We approached these shots from the perspective of the machine. The frame becomes a field of observation. Every pixel is potential information waiting to be interpreted.

The process begins by squarepacking the live action plates, dividing the image into a computational lattice. Once the grid exists, the system begins to read. Targets emerge. Motion vectors form. Signals ripple through layers of analysis.

Inside these overlays lives the source code of the system itself. Streams of logic, fragments of instruction, the internal language of the machine. At times the imagery dissolves directly into that code, as if the world itself is being rewritten from within. Visual information collapses into structure, then reconstructs again as data.

Scanning pulses move across the frame. Timers react. Coordinates lock. Each layer is part of a larger ecosystem of perception, constantly processing, evaluating, and iterating. The goal was maximum detail and living systems. Interfaces that breathe, think, and evolve in real time.
31 · 125 · 945 · 33.8K

(Alex) Compositing Academy@CompAcademyVFX·
@n0l_kc This is generating normals in Beeble, then using a custom Nuke setup to get 360 reflections from a virtual scene. Currently working on making the reflections spatially accurate using a variety of techniques as well, instead of reflecting a perfect 360 sphere
0 · 0 · 1 · 37
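The reflection lookup described above can be sketched in a few lines. This is a minimal numpy sketch, not the actual Beeble/Nuke setup: it assumes camera-space normals, a single fixed view direction (a real setup would use per-pixel camera rays), a lat-long (equirectangular) 360 environment image, and a nearest-neighbour lookup.

```python
import numpy as np

def reflect_from_normals(normals, env, view=(0.0, 0.0, 1.0)):
    """Look up a 360 lat-long environment map using per-pixel normals.

    normals: (H, W, 3) camera-space normals (e.g. generated from a plate).
    env:     (He, We, 3) equirectangular environment image.
    view:    assumed constant view direction -- an illustrative
             simplification, not spatially accurate.
    """
    v = np.asarray(view, dtype=np.float64)
    v = v / np.linalg.norm(v)
    n = normals / np.maximum(np.linalg.norm(normals, axis=-1, keepdims=True), 1e-8)
    # Reflect the view ray about the surface normal: R = 2(N.V)N - V
    r = 2.0 * np.sum(n * v, axis=-1, keepdims=True) * n - v
    # Convert the reflection direction to lat-long (equirectangular) UVs.
    theta = np.arctan2(r[..., 0], r[..., 2])          # azimuth, [-pi, pi]
    phi = np.arcsin(np.clip(r[..., 1], -1.0, 1.0))    # elevation, [-pi/2, pi/2]
    u = (theta / (2.0 * np.pi) + 0.5) * (env.shape[1] - 1)
    vpix = (0.5 - phi / np.pi) * (env.shape[0] - 1)
    ui = np.clip(u.round().astype(int), 0, env.shape[1] - 1)
    vi = np.clip(vpix.round().astype(int), 0, env.shape[0] - 1)
    return env[vi, ui]
```

Making this spatially accurate, as the tweet describes, would mean replacing the fixed view vector and the infinite-distance environment assumption with per-pixel rays and scene geometry.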

(Alex) Compositing Academy@CompAcademyVFX·
Testing a new AI/hybrid VFX re-lighting workflow for a short film I'm directing. The scene involves a person traveling quickly through a scene on a vehicle, so I needed interactive light on the person to match the environment they're driving through. This renders a 360 capture of the emissive lights in the virtual scene and applies it to generated normals. Light falloff isn't there by default, so this is only a starting point, but the workflow is very interesting.
5 · 14 · 186 · 6.8K
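A minimal sketch of the shading step, assuming the 360 capture of the emissive lights has already been reduced to a handful of directional samples (the function name and light-sampling scheme here are illustrative, not the author's workflow). As the tweet notes, there is no distance falloff in this form.

```python
import numpy as np

def relight_lambert(normals, light_dirs, light_cols):
    """Accumulate simple N.L diffuse lighting from a few sampled lights.

    normals:    (H, W, 3) unit normals (e.g. AI-generated from a plate).
    light_dirs: (L, 3) directions toward each sampled emissive light.
    light_cols: (L, 3) RGB colour/intensity per light.
    No falloff: every light acts as a distant directional source.
    """
    n = normals / np.maximum(np.linalg.norm(normals, axis=-1, keepdims=True), 1e-8)
    d = np.asarray(light_dirs, dtype=np.float64)
    d = d / np.linalg.norm(d, axis=-1, keepdims=True)
    # N.L per light, clamped so only front-facing lights contribute.
    ndl = np.clip(np.einsum('hwc,lc->hwl', n, d), 0.0, None)
    return np.einsum('hwl,lc->hwc', ndl, np.asarray(light_cols, dtype=np.float64))
```

Falloff would have to be layered on top, e.g. by scaling each light's colour by an inverse-square term from estimated distances.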

(Alex) Compositing Academy@CompAcademyVFX·
@The7Seraphs Diffuse Direct on the left, Diffuse Color on the right. The ground is a hidden view layer, so it should not appear at all. This causes dark or bright edges in various places when you premultiply things back together
[attached image]
1 · 0 · 1 · 83

Wyatt Hall@The7Seraphs·
@CompAcademyVFX I'm a little confused about what you mean by Eevee showing other render layers in the Diffuse Direct pass. I believe that could be a bug. Could you share an example?
1 · 0 · 0 · 30

(Alex) Compositing Academy@CompAcademyVFX·
Turns out Blender's Eevee is very bad for multi-pass AOV compositing, for a variety of reasons o_o
1 · 0 · 24 · 3.1K

(Alex) Compositing Academy@CompAcademyVFX·
“The interface isn’t the problem anymore, the model is.” No, actually: the interface and the lack of control are the problem. If engineers talked to creatives while building the thing, maybe we could get something interesting. Every AI company is just copying each other instead of solving the hard issues, intent and iteration, and text in a box is a terrible way to control an image if you know what you want.
2 · 2 · 11 · 1.1K

LTX-2@ltx_model·
If the engine is strong enough, you should be able to build real products on top of it. That's the whole point of LTX-2.3. Introducing LTX Desktop. A fully local, open-source video editor running directly on the LTX engine, optimized for NVIDIA GPUs and compatible hardware.
104 · 207 · 2K · 949.9K

(Alex) Compositing Academy@CompAcademyVFX·
@michae1becker yeah probably. It’s weird that AOVs are even supported considering these factors. But, still nice to have eevee for quick additional renders in any case. Was initially hoping to treat the environment more like an unreal render and then cycles any hero elements separately 🤷‍♂️
1 · 0 · 1 · 40

(Alex) Compositing Academy@CompAcademyVFX·
@Joey_Wittmann As long as my 3090 doesn’t burn out rendering every night… because finding a replacement GPU right now feels like trying to buy toilet paper in March 2020.
1 · 0 · 3 · 60

(Alex) Compositing Academy@CompAcademyVFX·
I usually like mixing eevee with cycles because it saves on render time and is better than re-lighting. This time I tried to render a full environment in eevee - but considering these significant problems, I'll probably have to ditch it for larger applications.
2 · 0 · 5 · 684

(Alex) Compositing Academy@CompAcademyVFX·
Unfortunate discoveries when rendering in Eevee:

- Motion blur in the AOVs doesn't match, because it's not "true" motion blur; it's a post effect applied per AOV. If you render over transparent and re-assemble your layers, this gives you dark edges everywhere.
- If you use render layers to separate objects and hide a render layer, Eevee sometimes still shows the hidden layer in the Diffuse Direct pass (maybe this is necessary for rendering fast?). The problem is that when you premultiply everything back together, you get dark or bright edges where the hidden objects overlap.
4 · 0 · 9 · 803
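The reassembly problem above can be made concrete: if the light-path AOVs are supposed to sum back to the beauty, a per-pixel difference check flags exactly where a hidden layer leaked into a pass or where per-AOV post-effect motion blur diverged. A hypothetical numpy sketch, assuming a simple additive pass split (real renderers have more passes):

```python
import numpy as np

def check_aov_reassembly(beauty, aovs, tol=1e-3):
    """Rebuild a beauty render from additive AOVs and flag mismatches.

    beauty: (H, W, 3) reference beauty render.
    aovs:   list of (H, W, 3) additive passes (diffuse, specular, ...).
    Returns the rebuilt image and a boolean mask of pixels where the
    rebuild diverges from the beauty beyond tol.
    """
    rebuilt = np.zeros_like(beauty)
    for pass_img in aovs:
        rebuilt += pass_img
    error = np.abs(rebuilt - beauty).max(axis=-1)
    return rebuilt, error > tol
```

On a renderer with the problems described above, the mask would light up along motion-blurred edges and wherever the hidden layer bleeds into Diffuse Direct.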

(Alex) Compositing Academy@CompAcademyVFX·
New advanced re-lighting toolset, which pairs well with generated normals + depth maps. Coming soon
1 · 19 · 178 · 7.8K

(Alex) Compositing Academy@CompAcademyVFX·
Yep, you can do it in 2D and in many shots it’ll be totally fine. This approach is just cleaner because the blur is real and the mattes are baked in scene space, which matters once you have lots of overlap and heavy motion blur.

The motion vector pass only stores one 2D motion vector per pixel for the surface the renderer saw at that point. Tools like Vectorblur use that pass to estimate blur after the fact, but they can’t recover depth or multiple overlapping motions. On shots where objects cross, overlap, or contribute to the same motion-blurred pixel, that assumption breaks down. It can look smoother visually, but the grade isn’t physically consistent because the blur isn’t being applied in true scene space, which can lead to edge problems.
0 · 0 · 1 · 46
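The single-vector-per-pixel limitation shows up directly in a naive gather-style vector blur, sketched below in numpy (illustrative only, not how any production Vectorblur node is implemented): each output pixel averages samples along its one stored vector, so two objects crossing through the same pixel can never both be represented.

```python
import numpy as np

def vector_blur_gather(img, motion, samples=8):
    """Naive gather-style vector blur.

    img:    (H, W, 3) image.
    motion: (H, W, 2) one 2D motion vector per pixel -- the same
            limitation the motion-vector AOV has: a pixel covered by
            two crossing objects still stores only a single vector.
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(samples):
        # Sample positions spread across a -0.5 .. +0.5 shutter interval.
        t = i / (samples - 1) - 0.5
        sy = np.clip((ys + t * motion[..., 1]).round().astype(int), 0, h - 1)
        sx = np.clip((xs + t * motion[..., 0]).round().astype(int), 0, w - 1)
        out += img[sy, sx]
    return out / samples
```

Scene-space blur sidesteps this because each surface carries its own motion through the blurred pixel, rather than everything collapsing to one vector after projection.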

(Alex) Compositing Academy@CompAcademyVFX·
Making a layered color grade matte system in Blender (like cryptomattes, but for grading different parts of a scene without edge problems)
2 · 5 · 186 · 9.4K
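One way such a system could work (an assumption about the approach, not the actual Blender setup): treat each region's matte as a soft coverage weight and renormalise the weights into a partition of unity, so anti-aliased edges where mattes overlap still sum to exactly one and never produce the dark or bright fringes that hard cryptomatte-style selections can.

```python
import numpy as np

def graded_mix(img, mattes, grades):
    """Blend per-region colour grades using soft coverage mattes.

    img:    (H, W, 3) image to grade.
    mattes: list of (H, W) coverage weights, one per region.
    grades: list of callables img -> img, one per matte.
    The weights are renormalised so every pixel's weights sum to 1,
    which is what avoids edge artifacts where soft mattes overlap.
    """
    w = np.stack(mattes, axis=0).astype(np.float64)
    total = np.maximum(w.sum(axis=0), 1e-8)
    w = w / total                       # partition of unity per pixel
    out = np.zeros_like(img, dtype=np.float64)
    for wi, grade in zip(w, grades):
        out += wi[..., None] * grade(img)
    return out
```

With exact-sum weights, a pixel half-covered by two regions gets exactly half of each grade, so soft edges interpolate between looks instead of darkening or blowing out.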