Can I finetune CPTForMaskedLM? #67
Sure! If you calculate the loss on the G-Dec logits, you can fine-tune both the CPT Encoder and the G-Decoder; in this case, the U-Decoder is not used. If you want to tune only the G-Dec and leave the Encoder unchanged, you can freeze the Encoder's parameters by not passing them to the optimizer, so that only the G-Dec parameters are updated.
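In code, that could look roughly like the following sketch (the `fnlp/cpt-base` checkpoint name, the `model.model.encoder` attribute path, and the manual loss on the logits are assumptions based on a standard Hugging Face-style setup rather than the repo's exact API):

```python
import torch
import torch.nn.functional as F
from transformers import BertTokenizer
from modeling_cpt import CPTForMaskedLM  # model class shipped with this repo

tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-base")
model = CPTForMaskedLM.from_pretrained("fnlp/cpt-base")

# Freeze the encoder so its weights stay unchanged (assumed attribute path;
# check modeling_cpt.py for the exact name).
for p in model.model.encoder.parameters():
    p.requires_grad = False

# Give the optimizer only the parameters that remain trainable (G-Dec + LM head).
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=2e-5
)

# Toy masked-LM step: in practice, mask tokens properly and compute the loss
# only on the masked positions.
inputs = tokenizer("北京是[MASK]的首都。", return_tensors="pt")
labels = inputs["input_ids"].clone()

logits = model(input_ids=inputs["input_ids"],
               attention_mask=inputs["attention_mask"]).logits  # G-Dec logits
loss = F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))
loss.backward()
optimizer.step()
optimizer.zero_grad()
```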
Thanks a lot for your reply. During the fine-tuning of CPTForMaskedLM, I need to add tokens to the tokenizer. Code:

Returns:

I brute-force the fix by tallying the dimension of …
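For reference, the usual way to keep the tokenizer and the embedding matrix in sync after adding tokens looks roughly like this (a sketch assuming CPT exposes the standard Hugging Face `add_tokens` / `resize_token_embeddings` methods; the added tokens are placeholders):

```python
from transformers import BertTokenizer
from modeling_cpt import CPTForMaskedLM

tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-base")
model = CPTForMaskedLM.from_pretrained("fnlp/cpt-base")

# Register the new tokens with the tokenizer (placeholder examples).
tokenizer.add_tokens(["[ENT]", "[/ENT]"])

# Grow the input embedding matrix so its first dimension matches len(tokenizer).
# If the model keeps a separate vocabulary-sized output layer or bias, its
# dimension has to be brought in line with the new vocabulary size as well.
model.resize_token_embeddings(len(tokenizer))
```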
This fix is ok, since the …
Thanks. May I keep this issue open for a while? I'm pursuing the fine-tuning and may encounter issues very soon...
First, I would like to thank you for the great work. Appreciated.
As stated in the title, I would like to try fine-tuning CPTForMaskedLM, and I am not sure whether I can simply fine-tune the decoder by training on the output logits. Sorry for this naive question, as I'm new to this field. Thank you.