@miniapeur Research scientist (in the US). Nothing wrong with postdoc, but after you finish it, the right answer will still be research scientist (in the US).
Perhaps some of you could give me some advice. I still have time to decide, but I'm hesitating between three options after I graduate:
1. Postdoc. It would allow me to pursue very interesting research ideas that I'm passionate about, some of which I hope could have a wider impact. However, pay generally varies from miserable to average. It could help me get a tenure-track position, maybe.
2. Research scientist. I could work on large-scale projects, which is exciting, and the salary is generally very good. But there's much less emphasis on research and publication. It's also very competitive and I need to improve my coding.
3. Start-up. It's very thrilling, but it also seems to involve a lot of risk. I have some vague ideas, but nothing very precise at the moment.
Hot take: we need a non-neural track for the remaining few papers of this category at conferences ;). Let's also give them a badge. Here is my humble proposal.
@ejdeon This is correct; keeping color and sigma in NeRF separated has various benefits, allowing for separate architectures to learn them, or to discretize one and not the other, etc.
@SebAaltonen I'd recommend creating a procedural material and training a network that maps a photo to the parameters. Like match.csail.mit.edu: ignore the slow differentiable / optimization part and just check the neural network initialization part. The result can be tileable and high-res.
This SIGGRAPH 2015 paper for capturing material with your phone camera (two images: flash + no flash) would be highly interesting for HypeHype. But it took 3 hours on a 5 TFLOP/s GPU back then. Anything similar that runs in <10 seconds?
reality.cs.ucl.ac.uk/projects/two-s…
@ejdeon Btw. I believe the paper "Microfacet BRDF generator" [Ashikhmin et al. 2000] has a Section 5 that describes this coupled BRDF in more detail, so it should be a more appropriate reference.
Rendering Twitter: is microfacet GGX still state of the art for real-time path tracing?
- not energy conserving
- no easy diffuse + specular
- no easy rough refraction
- formulas are complex and not intuitive (see Smith shadowing)
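For readers following along, a minimal sketch of the terms the thread is complaining about: the isotropic GGX (Trowbridge-Reitz) normal distribution and the separable Smith shadowing-masking term. Function names are mine; this assumes dot products are already clamped to be positive.

```python
import math

def ggx_ndf(n_dot_h: float, alpha: float) -> float:
    """Isotropic GGX (Trowbridge-Reitz) normal distribution function."""
    a2 = alpha * alpha
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

def smith_g1_ggx(n_dot_v: float, alpha: float) -> float:
    """Smith masking term for GGX, for one direction."""
    a2 = alpha * alpha
    return 2.0 * n_dot_v / (n_dot_v + math.sqrt(a2 + (1.0 - a2) * n_dot_v * n_dot_v))

def smith_g_separable(n_dot_l: float, n_dot_v: float, alpha: float) -> float:
    """Separable Smith shadowing-masking: G = G1(light) * G1(view)."""
    return smith_g1_ggx(n_dot_l, alpha) * smith_g1_ggx(n_dot_v, alpha)
```

Even this "simple" separable form hides the height-correlated variant and the Lambda function it comes from, which is part of the complaint above.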
@MilosHasan MIS for real-time rendering is still not as easy to set up, and at 60 fps for huge scenes, we need as little compute as possible. To my untrained eye, it seems that real-time path tracing has more to do with denoising than MIS. But maybe it is just me.
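For anyone unfamiliar with the MIS being discussed, the usual weighting scheme is the balance (or power) heuristic over the sampling strategies' pdfs; a tiny sketch assuming one sample from each of two strategies (function names mine):

```python
def balance_heuristic(pdf_a: float, pdf_b: float) -> float:
    """Balance-heuristic MIS weight for a sample drawn from strategy A,
    assuming one sample is taken from each of the two strategies."""
    return pdf_a / (pdf_a + pdf_b)

def power_heuristic(pdf_a: float, pdf_b: float, beta: float = 2.0) -> float:
    """Power-heuristic variant; beta = 2 is the common choice."""
    a = pdf_a ** beta
    return a / (a + pdf_b ** beta)
```

The weights are cheap; the real-time cost referred to above comes from having to evaluate and sample multiple strategies per path vertex, not from the weight formula itself.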
Rendering Twitter: how would you set up a modern path tracer?
- Do we still need delta BSDFs?
- Do we still need area sampling for lights?
- Is MIS still better than product sampling for simple lights?
- Is it best to write the shader as evaluating incoming or outgoing radiance?
If one day I am elected to be the @siggraph paper chair (#aintgonnahappenlikenever), I'll make the true deadline 1-2 days later than the officially announced one, and only reveal that information hours before. I will be the most loved paper chair of all time.
@Xelatihy Not teaching, but you can easily find the rotation axis of a rotation matrix R (it's the eigenvector with eigenvalue 1, since R leaves the axis fixed). Then you can rotate around the same axis with a smaller angle.
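A minimal numpy sketch of that suggestion (function names mine; it assumes the rotation angle is in (0, π) so the axis and its sign are well defined by the skew-symmetric part of R):

```python
import numpy as np

def rotation_axis(R: np.ndarray) -> np.ndarray:
    """Unit rotation axis of a 3x3 rotation matrix: the eigenvector of
    eigenvalue 1 (R v = v), with its sign fixed by the skew part of R."""
    w, v = np.linalg.eig(R)
    axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    axis = axis / np.linalg.norm(axis)
    # Skew part of R is sin(angle) * [axis]_x, positive for angle in (0, pi).
    skew = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    if np.dot(axis, skew) < 0:
        axis = -axis
    return axis

def rotate_about_same_axis(R: np.ndarray, t: float) -> np.ndarray:
    """Rotation about the same axis as R, by t times R's angle
    (Rodrigues' rotation formula)."""
    axis = rotation_axis(R)
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    a = t * angle
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)
```

Applying `rotate_about_same_axis(R, 0.5)` twice composes back to R, which is exactly the "smaller angle" interpolation the reply describes.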
Graphics educators: any experience teaching slerp without using quaternions?
Bonus point: how about explaining dual quaternion skinning without dual quaternions?
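On the slerp question: the standard angle-interpolation formula works directly on unit vectors, no quaternions involved, which is one way to teach it. A small sketch (assumes unit-length, non-antiparallel inputs):

```python
import math

def slerp(v0, v1, t):
    """Spherical linear interpolation between two unit vectors:
    interpolate the angle between them along the great circle."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(v0, v1))))
    theta = math.acos(dot)
    if theta < 1e-8:  # nearly parallel: fall back to plain lerp
        return [a + t * (b - a) for a, b in zip(v0, v1)]
    s = math.sin(theta)
    w0 = math.sin((1.0 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return [w0 * a + w1 * b for a, b in zip(v0, v1)]
```

The same sin-weighted formula is what quaternion slerp evaluates in 4D, so this version can serve as a stepping stone before introducing quaternions at all.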
Link me your favorite song that meets the following criteria:
1 good to dance to
2 high-energy (tho doesn't have to be super fast)
3 Has clear beats and variety of sound (as opposed to blurry sounds and monotony)
4 Is 'dirty', but not necessarily lyrically
JUST ONE SONG PLEASE.
@Peter_shirley To be fair, they mention ray tracing and the z-buffer (but call them "brute-force object space" and "brute-force image space") and dismiss them as impractical, which they pretty much were in 1974. It's ironic that when it all shook out those were the only two left standing.