
For the facial animations in my game, I use a technique that yields results similar to L.A. Noire's. To keep it simple: I first generate a facial animation video with an AI model like LTX 2. Then I create a depth-map video from that animation and project it onto a face mask. Using vertex displacement, the depth map dynamically deforms the face as the character speaks. The result looks good in standalone VR. I actually created this workflow a long time ago: x.com/alexfredo87/st…
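To give a rough idea of the vertex-displacement step: a minimal CPU-side sketch of what the deformation could look like, assuming each vertex has a UV coordinate into the depth-map frame and is pushed along its normal by the sampled depth. All names (`sample_depth`, `displace_vertices`, `strength`) are illustrative; in a real project this logic would typically live in a vertex shader sampling the depth texture per frame.

```python
# Hypothetical sketch: per-frame vertex displacement driven by a depth map.
# depth_map is a 2D list of floats in [0, 1]; one such frame per video frame.

def sample_depth(depth_map, u, v):
    """Nearest-neighbour sample of the depth map at UV coordinates (u, v)."""
    h = len(depth_map)
    w = len(depth_map[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return depth_map[y][x]

def displace_vertices(vertices, normals, uvs, depth_map, strength=0.05):
    """Return new positions: p + n * depth(uv) * strength for each vertex."""
    out = []
    for (px, py, pz), (nx, ny, nz), (u, v) in zip(vertices, normals, uvs):
        d = sample_depth(depth_map, u, v) * strength
        out.append((px + nx * d, py + ny * d, pz + nz * d))
    return out

# Tiny example: one vertex facing +Z, sampling the top-left depth texel.
depth_map = [[1.0, 0.0],
             [0.0, 0.0]]
moved = displace_vertices(
    vertices=[(0.0, 0.0, 0.0)],
    normals=[(0.0, 0.0, 1.0)],
    uvs=[(0.0, 0.0)],
    depth_map=depth_map,
    strength=0.1,
)
# moved[0] is the original vertex pushed 0.1 units along its normal.
```

Stepping this over each frame of the depth-map video is what makes the face appear to talk; the `strength` scale would need tuning against the mesh's dimensions.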
