bugfix in transmittance based early stop criteria #541

Open
wants to merge 1 commit into base: main
Conversation


@wonjune23 wonjune23 commented Jan 17, 2025

I think there's a bug in when to stop rendering once the transmittance gets near zero.
Intuitively, the stopping criterion checks whether the transmittance that would remain for the "next Gaussian" is very low.
Therefore, it should render the "current Gaussian" and only then break out of the loop.
However, the current implementation checks the criterion and stops immediately, without rendering the current one.

To be more specific, say there are only two Gaussians to be rendered for a pixel, where the first one to be rendered has an alpha of 0.5 and the other one has an alpha of 1.0. Obviously, because the second Gaussian has an alpha of 1.0, the rendered alpha of the pixel should be 1.0.
However, the current implementation only renders the first one: while trying to render the second one, it detects that T will get near zero and skips it, leaving the pixel's rendered alpha at 0.5.
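
For illustration, here is a minimal Python sketch of a front-to-back alpha-compositing loop (the names and the 1e-4 threshold are illustrative, not the actual gsplat kernel code). The commented-out check reproduces the current behavior, which breaks before blending the Gaussian that pushes T below the threshold; the fix blends that Gaussian first and then breaks.

def composite(alphas, colors, T_eps=1e-4):
    # alphas, colors: per-Gaussian opacity and color for one pixel, sorted front to back.
    T = 1.0    # remaining transmittance
    out = 0.0  # accumulated color
    for alpha, color in zip(alphas, colors):
        next_T = T * (1.0 - alpha)
        # Current behavior: stop *before* blending this Gaussian.
        # if next_T <= T_eps:
        #     break
        out += T * alpha * color
        T = next_T
        # Proposed fix: blend this Gaussian, *then* stop.
        if next_T <= T_eps:
            break
    return out, 1.0 - T  # color, rendered alpha

# With alphas = [0.5, 1.0], the current check yields a rendered alpha of 0.5,
# while the fixed loop yields 1.0.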

This can be verified by manually rendering two very bright Gaussians (with the second one being very large):

# Two Gaussians: a small one in front and a very large one behind it.
self.means = torch.tensor([[0, 0, 1],
                            [0, 0, 1.5]], device=self.device)
self.scales = torch.tensor([[1.5e1, 1.5e1, 1.5e1],
                            [1e9, 1e9, 1e9]], device=self.device)
d = 3
# Very bright colors so any rendered pixel saturates to white.
self.rgbs = torch.ones(self.num_points, d, device=self.device) * 1e9

# With u = v = w = 1, the quaternions below reduce to (approximately) [0, 0, 0, 1].
u = torch.ones(self.num_points, 1, device=self.device)
v = torch.ones(self.num_points, 1, device=self.device)
w = torch.ones(self.num_points, 1, device=self.device)

self.quats = torch.cat(
    [
        torch.sqrt(1.0 - u) * torch.sin(2.0 * math.pi * v),
        torch.sqrt(1.0 - u) * torch.cos(2.0 * math.pi * v),
        torch.sqrt(u) * torch.sin(2.0 * math.pi * w),
        torch.sqrt(u) * torch.cos(2.0 * math.pi * w),
    ],
    -1,
)

# Huge opacities, so both Gaussians are as opaque as possible.
self.opacities = torch.tensor([1e9, 1e9], device=self.device)

The current implementation renders this weird gray ball, while after applying this bugfix it correctly renders a white image.

[Images: rendered ball — current vs. bugfix]

If you cut out the middle section of the images above, it looks like this:

[Images: middle cross-section — current_midsection vs. bugfix_midsection]

Here I trained randomly initialized Gaussians to fit an all-white image. This behavior compromises training stability, and it means 3DGS can never learn a truly saturated image. You can see this from the loss never dropping below a certain point. The output image has a slightly different color because the current implementation struggles to render a truly white image.

[Images: training output and loss curves — current_training/current_loss vs. training/bugfix_loss]

If you discard the color values below 250 and map [250, 255] to [0, 255] to see the effect better, you can clearly see that the current implementation struggles because of the weird gray blobs.

[Images: contrast-enhanced training output — current_training_enhanced vs. training_enhanced]
  • (Please ignore the crazy RGB at the beginning: I didn't actually discard the values below 250, so they went negative and repeatedly underflowed.)
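
For reference, a minimal sketch of that contrast stretch (assuming an image tensor with values in [0, 255]; the helper name is made up). Clamping before rescaling avoids the underflow artifacts mentioned above:

import torch

def stretch_highlights(img: torch.Tensor, lo: float = 250.0, hi: float = 255.0) -> torch.Tensor:
    # Clamp to [lo, hi] so values below `lo` don't wrap around, then
    # linearly remap [lo, hi] onto [0, 255].
    return (img.clamp(lo, hi) - lo) / (hi - lo) * 255.0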

Though I think this bug has minimal effect in practice, it still needs a fix ;)
