Why does accuracy need argmax and not loss function when there are multiple class output probabilities? #907
sohumgautam17
started this conversation in
General
Replies: 1 comment
-
In the background, if you add print statements like this you can see what goes on:
print(f'Loss calculation: {loss_fn(y_eval, y)}')
loss += loss_fn(y_eval, y)
print('Actual label:', y)
print('Predicted label:', y_eval.argmax(dim=1))
acc = accuracy_fn(y, y_eval.argmax(dim=1))
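The difference can be sketched in plain Python (illustrative values, no PyTorch): cross-entropy reads the probability the model assigns to the true class straight from the full output vector, so it never needs argmax, while accuracy compares discrete predicted class indices to the labels, which is exactly what argmax produces.

```python
import math

# Hypothetical logits for a batch of 2 samples, 3 classes (illustrative values).
logits = [[2.0, 0.5, 0.1],
          [0.2, 0.3, 3.0]]
labels = [0, 2]  # true class indices

def softmax(row):
    exps = [math.exp(v) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

# Cross-entropy uses the FULL probability distribution: it looks up the
# probability assigned to the true class, so no argmax is needed.
loss = -sum(math.log(softmax(row)[y]) for row, y in zip(logits, labels)) / len(labels)

# Accuracy compares DISCRETE predictions to labels, so each output vector
# must first be collapsed to a single class index -- that is the argmax.
preds = [row.index(max(row)) for row in logits]  # argmax along the class dim
acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)

print(preds)  # [0, 2]
print(acc)    # 1.0
```

This mirrors what happens in the course code: `loss_fn(y_eval, y)` consumes the raw logits directly, while `accuracy_fn` gets `y_eval.argmax(dim=1)` because it needs class indices, not probabilities.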
-
Why does accuracy get argmax but loss doesn't? This is in the computer vision section of the course @ 16:48, but it's also throughout that section. Here's the code snippet:
y_eval = model(X)
loss += loss_fn(y_eval, y)
acc = accuracy_fn(y, y_eval.argmax(dim=1))
This is the rest of the evaluation loop:
"""
torch.manual_seed(42)
def evalmodel(model:torch.nn.Module, dataloader:torch.utils.data, loss_fn:torch.nn.Module, accuracy_fn):
loss, acc = 0, 0
torch.eval()
with torch.inference_mode():
for X, y in dataloader:
"""