Philippe Lewicki 🛡

1.5K posts


@philfree

Co-founder @afternowlab, AR/VR/XR Spatial Computing. AI/CUDA/HPC/NCCL. Speaker real https://t.co/gqAGlkKOAu https://t.co/YIDcM2EbMV

Culver City, California · Joined August 2007
1.3K Following · 518 Followers
Philippe Lewicki 🛡@philfree·
I’ve started building a library of 3D Gaussian Splat scans across Paris, Los Angeles, and private properties. Not just “cool 3D”: trying to validate where this creates real business value. What would you want to do with scans like these? #VirtualProduction
0 replies · 0 reposts · 2 likes · 44 views
Philippe Lewicki 🛡@philfree·
Be honest: Mentra glasses don't sound exciting at first glance. No display. Just a camera and mic. The magic is under the hood. They're built on a rock-solid, tested open-source framework. This makes them the most customizable and versatile AI glasses on the market.
0 replies · 0 reposts · 1 like · 40 views
Philippe Lewicki 🛡@philfree·
The G1 set the bar for lightness and style. Now, how much better is the G2? Even Realities is raising the stakes again. Check out the unboxing of G2 and R1 below to see what's new. Follow along for the full review coming soon!
0 replies · 0 reposts · 0 likes · 214 views
Philippe Lewicki 🛡@philfree·
The future is on your face. Over a dozen AI/AR glasses launched last year. I didn't just unbox them. I wore them for weeks. Why? Because you can't review a pair of glasses for just one day. Follow me if you want the honest verdict before you drop your cash.
0 replies · 0 reposts · 0 likes · 40 views
Philippe Lewicki 🛡@philfree·
@VRDesktop On my last LBE retail experience using mixed reality, I started with OpenXR / Unity 6 but had to move to the Meta SDK because shaders weren't rendering correctly in passthrough.
0 replies · 0 reposts · 0 likes · 252 views
Guy Godin@VRDesktop·
VR Developers, don’t use the Meta XR SDK to develop your games. Whether you use Unity or Unreal, you should use the built-in OpenXR support in the engines. The Meta XR SDK only works with Quest & Link; it is a buggy mess hardcoded to not work with other runtimes and headsets.
40 replies · 82 reposts · 774 likes · 49.9K views
Philippe Lewicki 🛡@philfree·
@SadlyItsBradley Yes, you can test the difference with the Even Realities or the new Rokid glasses. They traded color for a binocular display.
0 replies · 0 reposts · 0 likes · 160 views
Brad Lynch@SadlyItsBradley·
I’m now convinced that smart glasses with only a monocular display cannot be mass market anytime soon. I think with current technology, people are way better off without a display than with the tradeoffs of adding a single one, and I don't expect Apple or Google to solve it.
Brad Lynch@SadlyItsBradley

Punch me in the face

57 replies · 14 reposts · 443 likes · 67.1K views
Philippe Lewicki 🛡@philfree·
@Snosixtytwo Are the players animated with pose detection and 3D rigged avatars? And is the performance gain from the parallel processing of multiple players?
0 replies · 0 reposts · 0 likes · 37 views
Philippe Lewicki 🛡@philfree·
Excited to return to AWE this year. I will be part of a panel about the award-winning T-Mobile retail experience for Formula 1. We will cover how we built and delivered a successful immersive location-based retail experience: awexr.com/usa-2025/agend…
0 replies · 0 reposts · 0 likes · 49 views
Lucas Rizzotto@_LucasRizzotto·
The AI slop world we're headed into will spark a new golden age for documentaries. If everything around you is fake, suddenly anything grounded in reality will have 100x the weight.
8 replies · 1 repost · 55 likes · 2.8K views
Philippe Lewicki 🛡@philfree·
@marc_habermann This looks great. I have a question: all the renders have a specific focal length and camera angle. Is that just for consistency, or did you see artifacts at lower viewing angles of the rendered avatars?
0 replies · 0 reposts · 0 likes · 46 views
Marc Habermann@marc_habermann·
Want to have explicit control over skeletal pose, facial expressions, and hand gestures for your photorealistic virtual avatar that can be learned from multi-view video? Check out our #Siggraph2025 work, EVA, which presents an expressive, full-body, and photorealistic avatar.
5 replies · 29 reposts · 175 likes · 12.5K views
Marc Habermann@marc_habermann·
@janusch_patas We will soon release source code and data :) Concerning the skeleton and mesh: our representation comes with both.
2 replies · 0 reposts · 3 likes · 300 views
MrNeRF@janusch_patas·
[SIGGRAPH '25] EVA: Expressive Virtual Avatars from Multi-view Videos

Contributions:
1. We introduce EVA, a novel method enabling full-body control with real-time, photo-realistic renderings, robustly handling loose clothing dynamics and various facial expressions.
2. We develop an expressive deformable template that generates a deformable human template mesh and employs a multi-stage tracking algorithm to faithfully capture facial expressions, body motions, and non-rigid deformations from multi-view videos.
3. We propose a disentangled 3D Gaussian appearance module that models the body and face independently, ensuring separated control and high-quality renderings.
9 replies · 49 reposts · 340 likes · 18.4K views
Philippe Lewicki 🛡@philfree·
@benz145 Yes, the same. It's a weird video. I was hoping to get details on the tech they're building, but they just talked about themselves.
0 replies · 0 reposts · 2 likes · 17 views
Ben Lang@benz145·
Anyone else find this ‘make our own documentary about ourselves’ bromantic marketing… weird?
24 replies · 0 reposts · 85 likes · 8.2K views