According to the article, in the second step we calculate the CrossEntropy loss of S' on samples from a separate quiz set, and then compute the gradients of that CE loss with respect to the teacher's parameters.

The problem is that the teacher's parameters appear to be unrelated to the student's loss, so when I run the code in `meta_loops.py`, line 115:

`t_grads = torch.autograd.grad(s_prime_loss, t_model.parameters())`

I only get `None` for all of the teacher's parameter gradients.
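For reference, here is a minimal sketch of how I understand the gradient is supposed to flow from the quiz loss back to the teacher: the student's inner update has to be built differentiably (with `create_graph=True` and a functional parameter update, rather than an in-place `optimizer.step()`, which would detach S' from the teacher). The linear models, `inner_lr`, and shapes below are illustrative placeholders, not the repo's actual code:

```python
import torch
import torch.nn.functional as F

# Illustrative setup (not the repo's models): t_model is the teacher,
# s_model is the student.
t_model = torch.nn.Linear(8, 4)
s_model = torch.nn.Linear(8, 4)
x_train = torch.randn(16, 8)
x_quiz, y_quiz = torch.randn(16, 8), torch.randint(0, 4, (16,))
inner_lr = 0.1  # illustrative inner-loop learning rate

# Step 1: student trains on the teacher's soft labels. The teacher's
# output must stay in the graph (no .detach()), and the student update
# must be differentiable, so we form S' parameters functionally with
# create_graph=True instead of an in-place optimizer.step().
soft_labels = F.softmax(t_model(x_train), dim=-1)
s_loss = F.kl_div(F.log_softmax(s_model(x_train), dim=-1),
                  soft_labels, reduction="batchmean")
s_grads = torch.autograd.grad(s_loss, list(s_model.parameters()),
                              create_graph=True)
s_prime_params = [p - inner_lr * g
                  for p, g in zip(s_model.parameters(), s_grads)]

# Step 2: evaluate S' on the quiz set, doing a functional forward with
# the updated parameters so s_prime_loss still depends on the teacher.
w, b = s_prime_params
s_prime_loss = F.cross_entropy(F.linear(x_quiz, w, b), y_quiz)

# The teacher's parameters are now in s_prime_loss's graph, so this
# no longer returns None:
t_grads = torch.autograd.grad(s_prime_loss, list(t_model.parameters()))
```

If the inner step in `meta_loops.py` updates the student in place, or detaches the teacher's labels, that would explain why the graph is cut and the teacher's gradients come back as `None`.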