This repository has been archived by the owner on Jul 2, 2024. It is now read-only.
I believe this is detailed in Section A.3 of the supplementary material:
> We train NeRF for 2,000 epochs, which takes 6–8 h when distributed over four NVIDIA TITAN RTX GPUs. Prior to the final joint optimization, computing the initial surface normals and light visibility from the trained NeRF takes 30 min per view on one GPU for a 16 × 32 light probe (i.e., 512 light locations). This step can be trivially parallelized because each view is processed independently. Geometry pretraining is performed for 200 epochs, which takes around 20 min on a TITAN RTX. The final joint optimization is performed for 100 epochs, which takes only 30 min on one TITAN RTX.
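Since each view's precomputation is independent, the fan-out is embarrassingly parallel. A minimal sketch of that structure (note `precompute_view` is a hypothetical stub here; the real job would query the trained NeRF for surface normals and light visibility at each of the 16 × 32 = 512 light-probe locations):

```python
from concurrent.futures import ThreadPoolExecutor

N_LIGHTS = 16 * 32  # 512 light locations in the 16 x 32 light probe

def precompute_view(view_id):
    # Hypothetical stub: the real job queries the trained NeRF for
    # surface normals and light visibility at every light location
    # for this view (~30 min on one GPU per the supplementary).
    return view_id, N_LIGHTS

def precompute_all(view_ids, workers=4):
    # Views are mutually independent, so they can be fanned out
    # across workers (one GPU per worker in practice) with no
    # cross-view communication.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return dict(ex.map(precompute_view, view_ids))

results = precompute_all(range(8))
```

With one process per GPU instead of threads, wall-clock time scales roughly as (30 min × number of views) / number of GPUs.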