If I follow the default settings and run python main/train.py --cfg configs/cifar10.yaml, I cannot reproduce the results reported in your paper. At first the training loss decreases and the validation accuracy increases, but after a while the training loss starts to rise and the validation accuracy drops and fluctuates heavily. What is going on? #19

Open
ghost opened this issue Dec 12, 2020 · 3 comments

Comments

@ghost

ghost commented Dec 12, 2020

The same problems also occur when I apply it to my own dataset.
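
A minimal sketch of the kind of per-epoch logging that makes the onset of the divergence easy to locate (the model, loaders, and loop below are placeholders, not this repo's code):

```python
# Placeholder sketch, not this repo's train.py: log train loss and
# validation loss/accuracy every epoch so the divergence point is visible.
import torch

def train_and_log(model, train_loader, val_loader, optimizer, criterion, epochs):
    for epoch in range(epochs):
        model.train()
        train_loss, seen = 0.0, 0
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            train_loss += loss.item() * x.size(0)
            seen += x.size(0)

        model.eval()
        val_loss, correct, n = 0.0, 0, 0
        with torch.no_grad():
            for x, y in val_loader:
                out = model(x)
                val_loss += criterion(out, y).item() * x.size(0)
                correct += (out.argmax(1) == y).sum().item()
                n += x.size(0)

        # Both curves side by side each epoch: the epoch where train_loss
        # turns back up while val_acc drops is the point to investigate.
        print(f"epoch {epoch}: train_loss={train_loss / seen:.4f} "
              f"val_loss={val_loss / n:.4f} val_acc={correct / n:.4f}")
```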

@ghost ghost changed the title I use it on my own dataset. Why do the training accuracy and loss improve steadily, while the validation loss and accuracy fluctuate wildly, sometimes high and sometimes low? If I follow the default settings and run python main/train.py --cfg configs/cifar10.yaml, I cannot reproduce the results reported in your paper. At first the training loss decreases and the validation accuracy increases, but after a while the training loss starts to rise and the validation accuracy drops and fluctuates heavily. What is going on? Dec 14, 2020
@liangzz1991

Same question here. I also find that this phenomenon occurs at about half of the total epochs.
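
One hedged guess, since this kicks in near the midpoint of the run: if the cfg uses a restart-style learning-rate schedule (for example PyTorch's CosineAnnealingWarmRestarts with T_0 set to half the total epochs), the LR jumps back to its peak mid-training, which can push the loss back up. A purely illustrative sketch with assumed numbers, not this repo's actual config:

```python
# Illustration only: show how a warm-restart LR schedule spikes mid-run.
# T_0=100 over 200 epochs is an assumption, not taken from cifar10.yaml.
import torch

model = torch.nn.Linear(8, 2)  # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=100)

for epoch in range(200):
    optimizer.step()   # placeholder for a full training epoch
    scheduler.step()
    if epoch in (98, 99, 100, 101):
        # around epoch 100 the LR resets from near 0 back toward 0.1
        print(epoch, scheduler.get_last_lr()[0])
```

If the schedule in the config looks like this, the mid-run loss increase would line up with the LR reset rather than with the data or the model.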

@costantine20

Did you solve the problem?

@sunjiaqi401

I also have this problem. The test-set accuracy does not rise for a long time, and then after a while it begins to rise again.
