
Training Steps #8

Open
AmingWu opened this issue Aug 29, 2021 · 1 comment

AmingWu commented Aug 29, 2021

Thanks for your code. I have studied it. The training process contains two stages: first, the teacher and student networks are trained separately on the same dataset; then, distillation is performed using the pre-trained teacher and student networks.

Could you tell me if this is right?

Thank you.

ggjy (Owner) commented Aug 31, 2021

First, the teacher network is trained on the dataset. Then we freeze the teacher's weights. Finally, the randomly initialized student is trained using both the ground truth (GT) and the teacher.
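The pipeline described above (train the teacher, freeze it, then train a randomly initialized student against both the GT and the teacher's outputs) can be sketched roughly as follows. This is a minimal, hedged illustration with stand-in linear models and a standard KL-divergence distillation loss; the model classes, temperature, and loss weighting are assumptions for illustration, not the repository's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stage 1: the teacher is assumed to be already trained on the dataset.
# Here we just build a stand-in model and freeze its weights.
teacher = nn.Linear(4, 3)
teacher.eval()
for p in teacher.parameters():
    p.requires_grad = False  # freeze the teacher

# Stage 2: the student starts from random initialization and is trained
# with both the ground-truth (GT) loss and a distillation loss.
student = nn.Linear(4, 3)
opt = torch.optim.SGD(student.parameters(), lr=0.1)

x = torch.randn(32, 4)            # toy inputs
y = torch.randint(0, 3, (32,))    # toy ground-truth labels
T = 2.0                           # softmax temperature (illustrative choice)

for _ in range(20):
    with torch.no_grad():
        t_logits = teacher(x)     # teacher supervision, no gradients
    s_logits = student(x)
    gt_loss = F.cross_entropy(s_logits, y)
    kd_loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                   # rescale gradients for the temperature
    loss = gt_loss + kd_loss      # equal weighting, also an assumption
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key point matching the answer above is that only `student`'s parameters are passed to the optimizer, so the frozen teacher is used purely as a supervision signal.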
