
How to avoid NaN in training #149

@frickyinn

Description


d = torch.log(depth_est[mask]) - torch.log(depth_gt[mask])

NaN can occur during training because the final depth estimate sometimes contains zero values, and the loss function does nothing to guard against log(0).
The fix is simply to add a small epsilon before taking the log:

d = torch.log(depth_est[mask] + 1e-4) - torch.log(depth_gt[mask]) 
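For context, here is a minimal sketch of the failure and the fix. The names depth_est, depth_gt, and mask follow the snippet above; the toy tensor values and the clamp-based variant at the end are my own illustration, not part of the original suggestion:

import torch

# Reproduce the failure: a zero in the predicted depth makes log() return -inf,
# and downstream reductions (mean, backward) then produce NaN.
depth_est = torch.tensor([0.0, 1.5, 2.0], requires_grad=True)
depth_gt = torch.tensor([0.8, 1.4, 2.1])
mask = depth_gt > 0

d = torch.log(depth_est[mask]) - torch.log(depth_gt[mask])
print(d)  # first element is -inf -> the loss becomes NaN

# Proposed fix: offset the prediction by a small epsilon before the log.
eps = 1e-4
d = torch.log(depth_est[mask] + eps) - torch.log(depth_gt[mask])
print(d)  # all values finite

# Alternative (an assumption, not from the issue): clamp instead of offsetting,
# which leaves predictions already above eps unchanged.
d = torch.log(depth_est[mask].clamp(min=eps)) - torch.log(depth_gt[mask])

One trade-off to note: clamp zeroes the gradient for predictions below eps, while the + eps offset keeps a finite gradient everywhere, so the offset suggested in the issue still pushes near-zero predictions upward during training.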
