bugfix in transmittance based early stop criteria #541
I think there's a bug in when rendering stops once the transmittance gets near zero.
Intuitively, the stopping criterion checks whether the transmittance left for the "next Gaussian" is very low.
Therefore, it should render the "current Gaussian" first and then break out of the loop.
However, the current implementation checks the criterion and stops immediately, without rendering the current one.
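To illustrate the difference, here is a minimal Python sketch of front-to-back alpha compositing with the early-termination check (the names `composite_buggy`, `composite_fixed`, and `EPS` are illustrative only, not the identifiers in the actual CUDA kernel):

```python
# Minimal sketch of front-to-back alpha compositing with transmittance-based
# early termination. Names are illustrative, not the actual kernel identifiers.
EPS = 1e-4  # transmittance threshold for early termination

def composite_buggy(alphas):
    """Breaks as soon as the *next* transmittance would be tiny,
    so the Gaussian that triggered the check is never blended."""
    T, accum_alpha = 1.0, 0.0
    for a in alphas:
        next_T = T * (1.0 - a)
        if next_T < EPS:
            break  # the current Gaussian is skipped
        accum_alpha += a * T
        T = next_T
    return accum_alpha

def composite_fixed(alphas):
    """Blends the current Gaussian first, then stops if transmittance is spent."""
    T, accum_alpha = 1.0, 0.0
    for a in alphas:
        accum_alpha += a * T  # always render the current Gaussian
        T *= 1.0 - a
        if T < EPS:
            break  # stop before moving on to the next Gaussian
    return accum_alpha
```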
To be more specific, say only two Gaussians are to be rendered for a pixel: the first one to be rendered has an alpha of 0.5 and the second an alpha of 1.0. Because the second Gaussian has an alpha of 1.0, the rendered alpha of the pixel should obviously be 1.0.
However, the current implementation renders only the first one: while processing the second, it detects that T would get near zero and skips it, leaving the pixel's rendered alpha at 0.5.
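Running the sketch above on exactly this case reproduces the numbers:

```python
alphas = [0.5, 1.0]
print(composite_buggy(alphas))  # 0.5 -- next_T hits 0 at the second Gaussian, so it is skipped
print(composite_fixed(alphas))  # 1.0 -- 0.5 * 1.0 + 1.0 * 0.5, fully opaque as expected
```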
This can be verified by manually rendering two very bright Gaussians (with the second one being very large):
The current code renders this weird gray ball, while after applying this bugfix it correctly renders a white image.
If you take a cross-section through the middle of the image above, it looks like this:
Here I trained randomly initialized Gaussians to fit an all-white image. This behavior compromises training stability and means 3DGS can never learn a truly saturated image; you can see this from the loss plateauing above a certain point. The output image has a slightly different color because the current implementation struggles to render a truly white image.
If you discard color values below 250 and remap [250, 255] to [0, 255] to make the effect more visible, you can clearly see the current implementation struggling, with weird gray blobs.
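Something along these lines (a rough numpy sketch assuming an 8-bit RGB image as an array, not the exact script I used) produces that visualization:

```python
import numpy as np

def stretch_highlights(img, lo=250, hi=255):
    """Discard values below `lo` and linearly remap [lo, hi] to [0, 255]."""
    out = (img.astype(np.float32) - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```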
Though this bug probably has minimal effect in practice, I still think it deserves a fix ;)