In my project, I can only obtain points on the surface of the object. I have already completed training with ShapeNet data.

During inference, I made the following changes: first, I pre-processed ShapeNet's test set (surface samples) to get .ply files. Because these points have an SDF value of 0, I simply wrote their coordinates together with the 0 SDF value into .npz files. I then used the newly generated .npz files to optimize the latent code at inference time. However, the reconstructed mesh did not closely resemble the ground truth; a sketch of my conversion step follows below.

Is there anything unreasonable about what I did?
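For reference, this is roughly what my conversion script does. It is only a minimal sketch: trimesh and the file paths are placeholders, and I am assuming the usual DeepSDF SdfSamples layout, i.e. an .npz with "pos" and "neg" arrays of shape (N, 4) holding (x, y, z, sdf).

```python
import numpy as np
import trimesh  # placeholder loader for the .ply surface samples

def surface_ply_to_npz(ply_path, npz_path):
    # Load the surface sample coordinates from the .ply file.
    points = np.asarray(trimesh.load(ply_path).vertices, dtype=np.float32)
    # Every surface point is assigned SDF = 0.
    sdf = np.zeros((points.shape[0], 1), dtype=np.float32)
    samples = np.hstack([points, sdf])  # (N, 4): x, y, z, sdf
    # With only zero-valued SDF samples there is no positive/negative
    # split, so both keys receive the same array (assumed layout).
    np.savez(npz_path, pos=samples, neg=samples)

surface_ply_to_npz("test_shape.ply", "test_shape.npz")
```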
This is a clever idea.

However, I think it discards information the function needs. Inverting the latent vector against zero-valued samples alone gives the network no supervision away from the surface, so the optimization cannot exploit what the function learned about how SDF values grow off the surface, and the recovered latent code is under-constrained.
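If you can estimate normals for your surface points, one thing you could try is synthesizing approximate off-surface samples by offsetting each point a small distance along its normal and using that offset as the signed distance. This is just a sketch of the idea, not part of the DeepSDF pipeline; the eps value and the normal estimation are up to you:

```python
import numpy as np

def augment_with_offsets(points, normals, eps=0.005):
    """Synthesize approximate off-surface SDF samples from surface
    points and (assumed known) unit normals. Offsetting a point by
    +eps along its normal gives SDF ~ +eps; by -eps gives ~ -eps.
    Only a local approximation, valid for small eps."""
    outside = np.hstack([points + eps * normals,
                         np.full((len(points), 1), eps, np.float32)])
    inside = np.hstack([points - eps * normals,
                        np.full((len(points), 1), -eps, np.float32)])
    return outside.astype(np.float32), inside.astype(np.float32)

# The two halves would then replace the zero-only arrays in the .npz
# used for latent-code optimization:
# np.savez(npz_path, pos=outside, neg=inside)
```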