Unable to resume CoCondenser pretraining #9
Please elaborate on the issue. Include what you did, what worked and what did not work, error messages, etc.
Here is how to reproduce the exception. I first start training from the model downloaded from Hugging Face.
I then tried to load the checkpoint from the first step.
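The exact commands are clipped above, so the following is only a rough sketch of the two runs using the plain transformers Trainer API; the paths, arguments, and the commented-out model/dataset names are placeholders rather than the repo's actual script invocation.

```python
from transformers import TrainingArguments

# Placeholder sketch of the two runs; the real runs go through the repo's
# CoCondenser pretraining script, so every name and path below is illustrative.
args = TrainingArguments(
    output_dir="output/cocondenser",  # checkpoints are written to output_dir/checkpoint-*
    save_steps=1,                     # save right after the first step to reproduce quickly
)

# First run: start from the model downloaded from the Hugging Face Hub.
# trainer = Trainer(model=cocondenser_model, args=args, train_dataset=train_set)
# trainer.train()

# Second run: try to resume from the checkpoint saved after the first step.
# trainer.train(resume_from_checkpoint="output/cocondenser/checkpoint-1")
```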
And here is the exception.
We can get past this exception by adding the two flags here. After executing the same command above again, these are the warnings (clipped, but they cover basically all the layers).
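Warnings like these usually mean the keys in the saved state dict do not line up with the keys the class expects, so every tensor gets reported as newly initialized. A small diagnostic sketch (the checkpoint path is a placeholder) that diffs the two key sets:

```python
import torch
from transformers import BertForMaskedLM

# Diagnostic sketch: compare the keys stored in the trainer checkpoint with the
# keys BertForMaskedLM expects. The checkpoint path below is a placeholder.
ckpt = torch.load("output/cocondenser/checkpoint-1/pytorch_model.bin", map_location="cpu")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

saved_keys = set(ckpt.keys())
expected_keys = set(model.state_dict().keys())

print("in checkpoint but not in the model:", sorted(saved_keys - expected_keys)[:5])
print("in the model but not in the checkpoint:", sorted(expected_keys - saved_keys)[:5])
```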
The attribute ...
Thank you for the reply!
Would it make more sense to put the path of the checkpoint we want to resume from here (like ...)?
Yes, and the ... This is more or less a WAR (a workaround) for now. Eventually, I probably need to patch the ...
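Until that patch lands, one possible manual workaround is to restore the trainer checkpoint's full state dict directly into the in-memory pretraining model, bypassing from_pretrained. This is only a sketch: restore_full_state is a made-up helper, model stands in for the CoCondenser pretraining module, and the checkpoint path is a placeholder.

```python
import torch
from torch import nn

def restore_full_state(model: nn.Module, checkpoint_path: str) -> None:
    """Load a trainer checkpoint's state dict straight into the pretraining model.

    Sketch of a manual workaround: `model` stands in for the CoCondenser
    pretraining module, `checkpoint_path` for output_dir/checkpoint-*/pytorch_model.bin.
    """
    state_dict = torch.load(checkpoint_path, map_location="cpu")
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    # Anything reported here was not restored; ideally both lists are empty.
    if missing:
        print("missing keys (not found in the checkpoint):", missing[:5], "...")
    if unexpected:
        print("unexpected keys (present only in the checkpoint):", unexpected[:5], "...")
```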
Maybe I am missing something, but from what I can read, using ...
The model checkpoints seem to be hard-coded as BertForMaskedLM and cannot be loaded back into the CoCondenser class.
Adding the following attributes in the initialization gets past the exceptions, but none of the weights are actually loaded.
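A quick way to confirm that nothing was actually restored is to snapshot a weight before loading and compare it afterwards; the model class, parameter path, and checkpoint path below are placeholders standing in for whatever is being resumed.

```python
import torch
from transformers import BertForMaskedLM

# Sanity-check sketch: if a weight tensor is unchanged after load_state_dict,
# the checkpoint keys did not match and nothing was actually restored.
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
before = model.bert.encoder.layer[0].attention.self.query.weight.clone()

# Placeholder path to a checkpoint produced during CoCondenser pretraining.
state_dict = torch.load("output/cocondenser/checkpoint-1/pytorch_model.bin", map_location="cpu")
model.load_state_dict(state_dict, strict=False)

after = model.bert.encoder.layer[0].attention.self.query.weight
print("weights restored from checkpoint:", not torch.equal(before, after))
```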
Is there a way to resume training after interruptions?
Thanks!