Classification metrics for multiclass image segmentation with unlabeled pixels #861
Unanswered
perlmutter asked this question in Classification
Replies: 0 comments
I'm doing multi-class semantic segmentation of images with only partial labeling. For example, a pixel label of 0/1/2/3 indicates background/class1/class2/unlabeled. Unlabeled and background pixels far outnumber either foreground class. During training I use `torch.nn.CrossEntropyLoss` with `ignore_index=3`, so the predicted pixel probabilities only have 3 channels, not 4.

I'd like to use torchmetrics to calculate pixel classification statistics (accuracy, recall, precision, Jaccard, etc.), but the metrics won't accept the shape of my predictions. Passing my 3-channel predictions with the 4-valued target produces
ValueError: The highest label in 'target' should be smaller than the size of the 'C' dimension of 'preds'.
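The check that fails can be sketched in plain numpy (the shapes and values here are assumptions for illustration, not the original code):

```python
import numpy as np

# Toy shapes mirroring the setup: preds has only C = 3 channels, while the
# target mask still contains the unlabeled value 3.
preds = np.zeros((1, 3, 4, 4))     # (N, C, H, W) with C = 3
target = np.full((1, 4, 4), 3)     # start with every pixel "unlabeled"
target[0, 0, :] = [0, 1, 2, 3]     # some labeled pixels, plus an unlabeled one

# torchmetrics requires every target label to index a channel of preds;
# max(target) == 3 is not < C == 3, hence the ValueError above.
print(target.max() < preds.shape[1])  # False
```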
As a workaround I could add a 4th "dummy" probability channel, set to zero for all pixels. However, in that case I'd like to ignore both the unlabeled "class" (3) and the background class (0), since background recall is not interesting and would overwhelm the foreground statistics, but `ignore_index` only accepts a single value. Is there a more elegant way to compute foreground class statistics in this scenario?