Training Loss increases #24
Comments
Does it increase consistently, or only for 1-2 epochs?
For the first 4 epochs it's like:
That does not seem right. Try reducing your learning rate and see what you get. You might also be interested in trying a bunch of different learning rates in parallel and seeing which one converges the fastest.
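A minimal sketch of such a learning-rate sweep, assuming a Keras-style model; `build_model`, the layer sizes, and the dummy data are placeholders for illustration and are not taken from trainLSTM_1.py:

```python
# Hedged sketch: try several learning rates and compare loss curves.
import numpy as np
import tensorflow as tf

def build_model(lr):
    # Placeholder model standing in for whatever the real script builds.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='tanh', input_shape=(300,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr),
                  loss='categorical_crossentropy')
    return model

# Dummy data standing in for the real question/answer features.
X_train = np.random.rand(512, 300).astype('float32')
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 10, 512), 10)

# Train briefly with each candidate rate and inspect how the loss evolves.
for lr in [1e-1, 1e-2, 1e-3, 1e-4]:
    history = build_model(lr).fit(X_train, y_train, epochs=5, verbose=0)
    print(f"lr={lr:.0e}  loss per epoch: "
          + ", ".join(f"{l:.4f}" for l in history.history['loss']))
```

A rate whose loss climbs from the first epochs is usually too large; picking the fastest consistently decreasing curve is the idea behind running several rates in parallel.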
I tried reducing the learning rate, and yes, now the loss generally decreases.
I did this project a long time ago, and unfortunately I don't have access to most of the data associated with it anymore (I graduated undergrad and no longer have access to my machine there). I remember getting competitive performance on the benchmark around 50-60 epochs, so you might want to try that.
Ok, thanks :) Let me run a couple more epochs and I will let you know.
Is the loss being output by trainLSTM_1.py the real training loss, or just the loss of some random epoch?
Because my training loss seems to increase after 2 epochs...
FYI: I have used the GloVe vectors as the word vectors.
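One way to check whether the number being printed is an epoch-level average or only the last batch's loss is to accumulate the per-batch losses yourself. The sketch below assumes a Keras-style `train_on_batch` API; `model` and `batches` are hypothetical placeholders, not the repo's actual objects:

```python
# Hedged sketch: compare the epoch-average training loss with the last
# batch's loss (what a per-batch progress bar would typically display).
import numpy as np

def run_epoch(model, batches):
    """Train on every batch and return (mean epoch loss, last batch loss)."""
    batch_losses = []
    for X_batch, y_batch in batches:
        loss = model.train_on_batch(X_batch, y_batch)  # Keras-style call
        batch_losses.append(float(loss))
    return np.mean(batch_losses), batch_losses[-1]
```

If the mean over the whole epoch decreases while the last-batch value bounces around, the printed number is just batch noise rather than a genuinely rising training loss.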