Anton Matrosov
169 posts

@antmatrosov

PlayCanvas • 3D Web Experiences • VR / AR Applications • Digital twins • 3DGS

Dubai, UAE · Joined April 2023
71 Following · 150 Followers
lana@lanamolx·
@antmatrosov Unlimited plan for Seedance 2 impossible!! Where?
Anton Matrosov@antmatrosov·
@lanamolx This is a niche technical exercise: generating a small explorable 3D world from a single starting image. It's a hot research topic right now, so I've tried my hand at it as well 👌
lana@lanamolx·
@antmatrosov Can't understand how this will be helpful, what u did exactly and why
Anton Matrosov@antmatrosov·
@Max_Eskandari I'm pretty bad at 3D modelling, so I used a simpler image-to-3DGS pipeline as the first iteration (huggingface.co/spaces/gagndee…), then designed my camera path there, applied a position-to-color shader + placed dot planes manually, and it was done
Max Eskandari@Max_Eskandari·
@antmatrosov This is mind-blowing! Would you mind explaining the motion reference part and how you got that a bit more?
Anton Matrosov@antmatrosov·
@Max_Eskandari So you have to create a very crude 3D representation of your scene first -> then design a camera path -> render it into a 15-sec video -> feed that into Seedance -> enjoy the result :)
Anton Matrosov
Anton Matrosov@antmatrosov·
@Max_Eskandari Sure! Motion references originally appeared in Runway's guide to Seedance: shorturl.at/2CLlc Simple motions like orbiting can be done with abstract 3D cubes and such, but 15-second fly-arounds between particular points of your scene are tricky
Aleksandr@_trueuser_·
@antmatrosov Do I understand correctly that you extracted 3D from the generated video?
Anton Matrosov@antmatrosov·
And finally, you render your reference, pass it to the model, and hope and pray it will actually respect it and give you a somewhat decent result before you spend all your savings on this slot machine hehe [3/3]
Anton Matrosov@antmatrosov·
🔸The most challenging part is preparing a motion reference video. First, you need to come up with some initial 3D representation of your scene, even before you have it in 3DGS. I've used ML Sharp for this + a custom splat shader + a dots overlay to improve motion tracking [1/3]
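The position-to-color idea from the post above can be sketched as follows. This is a minimal Python illustration, not the author's actual splat shader: each point gets an RGB value from its normalized position inside the scene's bounding box, so the reference video encodes geometry as color that a video model can track.

```python
# Hypothetical sketch of a position-to-color mapping (not the author's
# actual shader): min-max normalize each point's XYZ into the scene
# bounding box and reuse the normalized coordinates as RGB in [0, 1].

def position_to_color(points):
    """Map 3D points to RGB by normalizing each axis independently."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    zs = [p[2] for p in points]
    lo = (min(xs), min(ys), min(zs))
    hi = (max(xs), max(ys), max(zs))
    span = [max(h - l, 1e-9) for l, h in zip(lo, hi)]  # avoid /0 on flat axes
    return [
        tuple((p[i] - lo[i]) / span[i] for i in range(3))
        for p in points
    ]

colors = position_to_color([(0, 0, 0), (1, 2, 4), (2, 4, 8)])
# corner points land on opposite corners of the color cube
```

In a real splat renderer this mapping would live in the fragment/splat shader rather than on the CPU, but the encoding idea is the same.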
Anton Matrosov@antmatrosov·
Then you design a camera path that covers your scene as well as possible, from all angles, in under 15 sec. Realistically you can squeeze in around ~30 distinct viewpoints, and then connect them with some spline path [2/3]
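The waypoints-plus-spline step can be sketched like this (a hypothetical Python helper, not the author's tooling): run a Catmull-Rom spline through the chosen camera positions and sample one position per frame across the 360-frame budget.

```python
# Sketch of sampling a camera path through waypoints with a Catmull-Rom
# spline. Hypothetical helper, assuming positions only (no orientation).

def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom interpolation between p1 and p2, t in [0, 1]."""
    return tuple(
        0.5 * (2 * b + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t * t
               + (-a + 3 * b - 3 * c + d) * t ** 3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def sample_camera_path(waypoints, n_frames=360):
    """Return n_frames positions along a spline through the waypoints."""
    pts = [waypoints[0]] + list(waypoints) + [waypoints[-1]]  # pad ends
    segs = len(waypoints) - 1
    frames = []
    for f in range(n_frames):
        u = f / (n_frames - 1) * segs      # global parameter in [0, segs]
        i = min(int(u), segs - 1)          # segment index
        t = u - i                          # local parameter in [0, 1]
        frames.append(catmull_rom(pts[i], pts[i + 1], pts[i + 2], pts[i + 3], t))
    return frames

path = sample_camera_path([(0, 0, 0), (1, 0, 0), (1, 1, 0)], n_frames=360)
# path starts at the first waypoint and ends at the last one
```

A real setup would also interpolate camera orientation (e.g. look-at targets or quaternions), but the position spline is the core of the idea.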
Anton Matrosov@antmatrosov·
🔸Seedance 2.0 is capped at 15 seconds, which gives you only about 360 frames to work with. For this task, you will need to use the Omni model, which can take both images and videos as input, along with a text prompt describing how those inputs should be used
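The frame budget works out like this (24 fps is my assumption to match the ~360 frames quoted in the thread; the model's exact frame rate isn't stated here):

```python
# Rough frame budget for a motion reference clip.
# 24 fps is an assumption; 15 s x 24 fps = 360 frames matches the post.
DURATION_S = 15
FPS = 24
total_frames = DURATION_S * FPS               # 360

viewpoints = 30                               # the ~30 distinct views mentioned
segments = viewpoints - 1                     # spline segments between views
frames_per_segment = total_frames // segments # ~12 frames per transition
seconds_per_segment = DURATION_S / segments   # ~0.5 s per transition
```

So each viewpoint-to-viewpoint move gets only about half a second, which is why the path has to be planned so tightly.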
Anton Matrosov@antmatrosov·
@JulienReszka When I first saw the results of my head tracking, I asked myself the same thing!
Anton Matrosov@antmatrosov·
@the8thwall @playcanvas 4⃣: ?mode = all. Enable both World and Head Tracking on your device. By default, Head Tracking is enabled only on laptops and tablets, and World Tracking only on tablets and mobiles
Anton Matrosov@antmatrosov·
@the8thwall @playcanvas 2⃣: ?blur or ?faceblur = true | 0 ... 3. Enable it if you're shy and want to record a video with your face blurred out. Alternatively, you can always close the face cam with its dedicated close button 3⃣: ?fs or ?fullscreen = true. Enable full screen (requires a tap / click)
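The flags in these posts read like plain URL query parameters. A minimal sketch of how such flags could be parsed (a hypothetical helper, not the actual 8th Wall / PlayCanvas API; the flag names come from the thread):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical parser for the launch flags described in the thread:
# ?mode=all, ?blur / ?faceblur, ?fs / ?fullscreen. A bare flag with no
# value (e.g. "?fs") is treated as enabled.

def read_flags(url):
    params = parse_qs(urlparse(url).query, keep_blank_values=True)

    def flag(*names):
        for name in names:
            if name in params:
                return params[name][0] not in ("0", "false")
        return False

    return {
        "mode": params.get("mode", ["default"])[0],
        "blur": flag("blur", "faceblur"),
        "fullscreen": flag("fs", "fullscreen"),
    }

flags = read_flags("https://example.com/app?mode=all&faceblur=true&fs")
# -> mode "all", blur and fullscreen enabled
```

In the browser the equivalent would be `new URLSearchParams(location.search)`; the fallback defaults here (e.g. `"default"` for mode) are invented for the sketch.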