
Test accuracy changes based on batch size #9

Open
jonkoi opened this issue Aug 1, 2018 · 1 comment
jonkoi commented Aug 1, 2018

Hi,

I trained BNN_cifar10 with batch size 256 and got an accuracy of 91.2% (higher than the CIFAR-10 baseline). However, when I use the evaluation script alone to test the model, I noticed that both the confidence and the test accuracy worsen as the test batch size gets smaller, which should not be the case. The extreme case is a batch size of 1, where the model produces only a single output. When I look at the output for each image (batch size 1), the outputs are barely different from one another. I checked whether the input changes with batch size, but it is the same.

Do you know what can cause this?

@mikuhatsune

I think BatchNorm is handled incorrectly during evaluation...
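This diagnosis can be illustrated with a minimal PyTorch sketch (assuming the repo is PyTorch-based; the module and the stat values below are hypothetical, not taken from the BNN code). In train mode, BatchNorm normalizes each sample with the current batch's mean and variance, so a sample's output depends on its batch-mates and degrades as the batch shrinks; in eval mode it uses the accumulated running statistics, so the output is batch-size independent:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(4)          # 4 features; affine weight=1, bias=0 at init
bn.running_mean.fill_(1.0)      # pretend these stats were accumulated in training
bn.running_var.fill_(4.0)

x = torch.tensor([[2.0, 2.0, 2.0, 2.0],
                  [4.0, 4.0, 4.0, 4.0]])

# eval mode: every sample is normalized with the *running* stats,
# so each row's output is independent of the rest of the batch
bn.eval()
with torch.no_grad():
    y_eval = bn(x)              # (x - 1) / sqrt(4 + eps) -> rows ~0.5 and ~1.5

# train mode: normalization uses *this batch's* mean/var, so one image's
# output depends on the other images in the batch
bn.train()
with torch.no_grad():
    y_train = bn(x)             # batch mean 3, var 1 -> rows ~ -1 and ~ +1
```

If the evaluation script forgets to switch the network to inference mode (`model.eval()` in PyTorch), test images are normalized against their batch-mates, which matches the reported symptom: accuracy and confidence degrade as the test batch gets smaller, with batch size 1 as the degenerate extreme.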
