Hi, I wanted to try your tool to normalize a large set of HPLC data run over multiple batches, but I ran into the error shown below. I also tested it on a small subset, and the error consistently occurs right when training reaches iteration 1000/1710. Do you have an idea what the issue could be?
```
Traceback (most recent call last): | 1/19 [00:07<02:15, 7.55s/it]
  File "/PATHTO/software/NormAE-release/main.py", line 83, in <module>
    main()
  File "/PATHTO/software/NormAE-release/main.py", line 49, in main
    best_models, hist, early_stop_objs = trainer.fit(datas)
  File "/PATHTO/software/NormAE-release/train.py", line 111, in fit
    self._forward_discriminate(batch_x, batch_y)
  File "/PATHTO/software/NormAE-release/train.py", line 380, in _forward_discriminate
    batch_y[:, 1].long())
  File "/PATHTOCONDA/anaconda3/envs/NormAE/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/PATHTOCONDA/anaconda3/envs/NormAE/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 916, in forward
    ignore_index=self.ignore_index, reduction=self.reduction)
  File "/PATHTOCONDA/anaconda3/envs/NormAE/lib/python3.6/site-packages/torch/nn/functional.py", line 1995, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "/PATHTOCONDA/anaconda3/envs/NormAE/lib/python3.6/site-packages/torch/nn/functional.py", line 1824, in nll_loss
    ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /tmp/pip-req-build-p5q91txh/aten/src/THNN/generic/ClassNLLCriterion.c:94
disc_pretrain: 58%|███████████████████████████████████████████████▎ | 1000/1710 [7:50:53<5:34:19, 28.25s/it]
```
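For context: this PyTorch assertion fires whenever a class label in the target tensor passed to `cross_entropy` is negative or greater than or equal to the number of classes the model outputs. A common cause with batch labels is IDs that are not zero-based contiguous integers (e.g. batches numbered 1..N, or a batch column with gaps). A minimal pre-flight check, assuming a plain list of labels (`labels` and `check_class_labels` are hypothetical names, not part of NormAE):

```python
def check_class_labels(labels, n_classes):
    """Return any labels that would trip PyTorch's NLL-loss assertion
    (`cur_target >= 0 && cur_target < n_classes`)."""
    return sorted({int(y) for y in labels if not (0 <= int(y) < n_classes)})

# Example: batch IDs numbered 1..5 fed to a 5-class discriminator head.
# Valid targets are 0..4, so the label 5 is out of range.
print(check_class_labels([1, 2, 3, 4, 5], n_classes=5))  # → [5]
```

If this check reports out-of-range values, remapping the batch labels to 0..N-1 before training should avoid the assertion.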