Detach in Lab3-2 & 3-3 #20
Hi @pandasfang,
Now you may have another question: why do we call detach in this line? Consider a very simple auto-encoder with two linear layers, fc1 and fc2 (a sketch is given below).
If you inspect the output, you can find that there is no influence on the gradients of fc2 even though we detach the result of fc1. Once we know the gradients won't be affected, we can simply use detach() there. Thanks
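A minimal sketch of such a toy auto-encoder, assuming two linear layers named fc1 and fc2 as in the comment above; the sizes, data, and loss are made up for illustration and this is not the original snippet:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

fc1 = nn.Linear(4, 2)        # "encoder"
fc2 = nn.Linear(2, 4)        # "decoder"
x = torch.randn(3, 4)

# Case 1: normal backward pass through both layers.
out = fc2(fc1(x))
loss = ((out - x) ** 2).mean()
loss.backward()
grad_fc2_plain = fc2.weight.grad.clone()
fc1.zero_grad()
fc2.zero_grad()

# Case 2: detach fc1's output before feeding it to fc2.
out = fc2(fc1(x).detach())
loss = ((out - x) ** 2).mean()
loss.backward()
grad_fc2_detached = fc2.weight.grad.clone()

# fc2's gradient is identical in both cases: detach only stops the
# gradient from flowing further back into fc1.
print(torch.allclose(grad_fc2_plain, grad_fc2_detached))  # True
print(fc1.weight.grad)  # None or all zeros: nothing flowed back past the detach
```

The point carries over to the GAN labs: detaching the upstream module's output does not change the gradients the downstream module receives; it only prevents gradients from reaching the upstream module.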
I think it's a good question, and you can verify whether what I said is right (maybe I am wrong, because I am still learning too :) ). If possible, please keep this thread open; I think it would be helpful for people who want to know more about detach. You are also very welcome to discuss it with me. Thanks
Soumith's reply in this thread might also clarify things a little bit...
Hi @yyrkoon27, in this case it's right. In VAE-GAN, the detach function may still be needed for correctness if you use, for example, ...
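To make the point concrete, here is a hedged sketch of a typical discriminator update where the generator's output is detached; the names (G, D, opt_D, real, noise) and the BCE loss are illustrative assumptions, not the actual lab or VAE-GAN code:

```python
import torch
import torch.nn.functional as F

def discriminator_step(G, D, opt_D, real, noise):
    # Hypothetical discriminator update; all names are stand-ins.
    fake = G(noise).detach()          # cut the graph: no gradients will reach G
    d_real = D(real)
    d_fake = D(fake)
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad()
    loss_d.backward()                 # only D's parameters receive .grad
    opt_D.step()
    return loss_d.item()
```

Here detach both saves the unnecessary backward pass through G and, in coupled setups like VAE-GAN, keeps the discriminator loss from pushing gradients into the generator/encoder by mistake.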
Dear TA:
In Lab3-2, why don't we need to detach the Discriminator when we back-propagate the Generator?
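For reference, a hedged sketch of the usual generator step (again with illustrative names, not the actual Lab3-2 code): gradients do flow back through D, but only the generator's optimizer takes a step, so D's weights are never modified; detaching here would actually cut off the gradient that G needs.

```python
import torch
import torch.nn.functional as F

def generator_step(G, D, opt_G, noise):
    # Hypothetical generator update; all names are stand-ins.
    fake = G(noise)                   # no detach: G needs gradients flowing through D
    d_fake = D(fake)
    loss_g = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    opt_G.zero_grad()
    loss_g.backward()                 # D's parameters also receive .grad here ...
    opt_G.step()                      # ... but only G's parameters are updated
    # Any leftover .grad on D is cleared by opt_D.zero_grad() before the
    # next discriminator step, so correctness is not affected.
    return loss_g.item()
```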