Accuracy metric does not work on Model predictions. #587
Hi! Thanks for your contribution, great first issue!
I meet the same problem with pytorch-lightning 1.2.0; with 1.1.6 it runs successfully.
I think this is because the previous PL classification metrics did not check whether float preds are between 0 and 1, but just assumed they are probabilities, which could give you incorrect results without any warning. Now PL actually checks the values of float preds and enforces the range [0, 1], so it looks like you need to add a sigmoid layer after your network output to make sure the values are probabilities, not logits.
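As a concrete illustration of that range check: raw logits can be any real number, while a sigmoid squashes them into [0, 1]. A minimal pure-Python sketch of the idea (not the actual torchmetrics validation code):

```python
import math

def sigmoid(x):
    """Map a raw logit to a probability in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-x))

logits = [-2.0, 0.5, 3.0]             # raw network outputs, outside [0, 1]
probs = [sigmoid(z) for z in logits]

# a [0, 1] range check would reject the logits but accept the probabilities
assert any(z < 0.0 or z > 1.0 for z in logits)
assert all(0.0 <= p <= 1.0 for p in probs)
```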
I solved this by adding softmax to the logits. PyTorch's CrossEntropyLoss includes softmax, so the model output does not apply it, but the metric needs it.
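Adding softmax only for the metric is safe because softmax is monotonic per row: it rescales the values into probabilities without changing which class wins. A small pure-Python sketch:

```python
import math

def softmax(logits):
    """Convert a row of logits into probabilities that sum to 1."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, -1.0, 0.5]
probs = softmax(logits)

assert abs(sum(probs) - 1.0) < 1e-9
# softmax preserves the argmax, so accuracy on probs matches the raw logits
assert probs.index(max(probs)) == logits.index(max(logits))
```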
@SkafteNicki Could you have a look at this issue?
Our metrics interface does not currently support non-normalized inputs (e.g. logits); here is an open issue for it in the torchmetrics repo: Lightning-AI/torchmetrics#60. As @mlinxiang mentions, the solution for now is to apply a softmax transformation to your input.
@ajsanjoaquin I'm curious whether just adding a softmax layer to your model did the trick to make Lightning's metrics work and led to a well-trained model? It seems like you shouldn't use … What ended up working here?
@dleve123 Yup. I added the softmax layer and used negative log-likelihood loss instead.
Hi all,
sorry for the confusion, and thanks for your help. Great library, really helpful for beginners like me.
Can you specify the error you are getting? |
Hi @rchen19, thanks for the comment. The error I am getting is the following:
The code it is supposed to be running is:
in the
And the forward pass as:
Just like in the Masterclass video. Cheers
@NicoMandel As the error message suggests, you need to apply either sigmoid (for binary classification) or softmax (for multiclass classification) to the logits you get from the model output before you pass them to the metric.
Hi, thank you for that.
or in the
I thought softmax and cross-entropy should not be used together? (Nielsen)
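The reason you shouldn't stack them is that `CrossEntropyLoss` already applies log-softmax internally: on raw logits it is equivalent to log-softmax followed by negative log-likelihood, so feeding it softmaxed outputs would apply softmax twice. A pure-Python sketch of that identity for a single sample with an integer class target:

```python
import math

def softmax(logits):
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, target):
    """What CrossEntropyLoss computes: NLL of the log-softmax of raw logits."""
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[target]

logits = [2.0, -1.0, 0.5]
target = 0

# the "two-step" route: softmax, then negative log-likelihood
nll_of_softmax = -math.log(softmax(logits)[target])
assert abs(cross_entropy(logits, target) - nll_of_softmax) < 1e-9
```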
@NicoMandel That depends on the loss you use; I see you are using
Hello @NicoMandel, the only part you need to change is the accuracy.
🐛 Bug
Error when using pytorch_lightning.metrics.Accuracy on logits, even though that is exactly the use case stated in the docs.
To Reproduce
Steps to reproduce the behavior:
Using the accuracy metric in either the training or validation step with the expected logit inputs (shape 64 × 10, where 10 = number of classes) and target vector (shape 64) throws the error
ValueError: The `preds` should be probabilities, but values were detected outside of [0,1] range.
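The check fires because raw logits fall outside [0, 1]; normalizing each row first and then taking the per-row argmax yields the same accuracy. A hypothetical pure-Python sketch of that workaround, with plain lists standing in for the (64 × 10) tensor:

```python
import math

def softmax(row):
    m = max(row)                          # subtract max for numerical stability
    exps = [math.exp(z - m) for z in row]
    total = sum(exps)
    return [e / total for e in exps]

def accuracy(prob_rows, targets):
    """Fraction of rows whose highest-probability class matches the target."""
    preds = [row.index(max(row)) for row in prob_rows]
    return sum(p == t for p, t in zip(preds, targets)) / len(targets)

logits_batch = [[2.0, -1.0, 0.5], [0.1, 3.0, -2.0], [0.0, 0.2, 4.0]]
targets = [0, 1, 0]                       # last prediction is wrong on purpose

probs = [softmax(row) for row in logits_batch]
assert abs(accuracy(probs, targets) - 2 / 3) < 1e-9
```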
similar to #551.
Code sample
Passing
torch.tensor(64, 3, 32, 32)
as training data should trigger the error.
Expected behavior
self.train_acc(predictions, targets)
should compute the accuracy.
Environment
I used Google Colab
How you installed PyTorch (conda, pip, source): pip
Additional context
Workaround: I followed the advice in #551 and downgraded pytorch-lightning to 1.1.8.
Since this Lightning version triggers #6210 in Colab, I also had to install a different version of torch: