PixelSNAIL overfitting issue #66
Comments
The large gap between validation and test is somewhat strange. Anyway, I think there are not many methods that can be applied to reduce such large overfitting.
Rosinality, thanks for the reply.
I have tried to train on FFHQ.
Rosinality,
No, the model in the paper is too large to use in my environment. In my case I got 45% training accuracy for the top-level codes.
Hi, thank you for your VQ-VAE-2 PyTorch version!
@wwlCape If you want to try the 1024 model, then you need to use bottom + middle + top models and a larger PixelSNAIL model. But I don't know whether this repository can replicate the results in the paper.
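For anyone attempting the 1024 setup, here is a minimal sketch of how the three hierarchical priors would be chained at sampling time. The latent-map sizes (32/64/128) follow the VQ-VAE-2 paper's 1024×1024 configuration; the `sample_*` functions and `N_EMBED` are hypothetical placeholders, not this repository's API.

```python
# Minimal sketch of chaining three hierarchical priors (top -> middle -> bottom)
# for 1024x1024 images. The latent-map sizes (32/64/128) follow the VQ-VAE-2 paper;
# the sample_* functions and N_EMBED are hypothetical placeholders, not this repo's API.
import torch

N_EMBED = 512  # assumed codebook size


def sample_top(batch: int) -> torch.Tensor:
    # placeholder: unconditional top prior over a 32x32 code map
    return torch.randint(N_EMBED, (batch, 32, 32))


def sample_middle(top_code: torch.Tensor) -> torch.Tensor:
    # placeholder: middle prior over 64x64 codes, conditioned on the top codes
    return torch.randint(N_EMBED, (top_code.shape[0], 64, 64))


def sample_bottom(top_code: torch.Tensor, middle_code: torch.Tensor) -> torch.Tensor:
    # placeholder: bottom prior over 128x128 codes, conditioned on both upper levels
    return torch.randint(N_EMBED, (top_code.shape[0], 128, 128))


top = sample_top(batch=4)
middle = sample_middle(top)
bottom = sample_bottom(top, middle)
# The three code maps would then go through the VQ-VAE-2 decoder to produce 1024px samples.
```

In a real setup each `sample_*` call would wrap an autoregressive PixelSNAIL sampling loop, with the lower-level priors taking the upper-level code maps as conditioning input.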
OK, thanks for your reply!
Hi, first of all, thanks for the implementation.
I have tried to train the PixelSNAIL bottom/top priors for 256 (ImageNet) and 512 (gaming) resolution images, but I found that both models suffer from overfitting.
Bottom prior (average train accuracy = 0.77, validation accuracy = 0.67, test accuracy = 0.37), where the train and validation sets are a 9:1 split of the same dataset of 5k images at 512×512 and the test data is a separate dataset of the same class.
Top prior (average train accuracy = 0.97180, validation accuracy = 0.88, test accuracy = 0.4); the rest of the settings are the same as for the bottom prior.
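For reference, a minimal sketch of how such a 9:1 split can be produced in PyTorch; the dataset path, transform, and seed are placeholders, not taken from this issue.

```python
# Hypothetical 9:1 train/validation split of a folder of 512x512 images.
# Path, transform, and seed are illustrative only.
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

dataset = datasets.ImageFolder(
    "path/to/512px_images",                       # placeholder path
    transform=transforms.Compose(
        [transforms.Resize(512), transforms.CenterCrop(512), transforms.ToTensor()]
    ),
)
n_val = len(dataset) // 10                        # ~10% held out for validation
n_train = len(dataset) - n_val
train_set, val_set = random_split(
    dataset,
    [n_train, n_val],
    generator=torch.Generator().manual_seed(0),   # fixed seed for a reproducible split
)
```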
I have tried L2 regularization and dataset augmentation along with the existing dropout, but with no success. Any lead would be helpful.
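For concreteness, here is a minimal sketch of those regularization knobs in plain PyTorch; the tiny stand-in model and all hyperparameter values are illustrative, not this repository's code.

```python
# Minimal sketch of the regularization knobs mentioned above:
# L2 regularization via (decoupled) weight decay plus extra dropout.
# The stand-in model and all values are illustrative only.
import torch
from torch import nn, optim

model = nn.Sequential(                       # stand-in for a PixelSNAIL-like prior
    nn.Conv2d(256, 256, 3, padding=1),
    nn.Dropout(p=0.3),                       # heavier dropout than the default
    nn.Conv2d(256, 256, 3, padding=1),
)

# AdamW applies decoupled weight decay, which behaves more predictably than
# an L2 term folded into Adam's adaptive updates.
optimizer = optim.AdamW(model.parameters(), lr=3e-4, weight_decay=1e-4)
```

Beyond that, early stopping on validation loss and shrinking the prior's capacity (fewer residual blocks or channels) are the other obvious levers when the dataset is only 5k images.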
Thanks in advance.