Hi @rosinality, thank you very much for your contribution.
I have a question about the upsampling of the top latent representation.
During encoding, the top decoder upsamples the quantized top representation by a factor of 2.
During decoding, the sampled codes are quantized and then upsampled with a separate ConvTranspose layer (_upsample_t).
My question is: why can't we reuse the top decoder here? Why do we need a separate layer that learns the same mapping?
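For reference, here is a minimal sketch of the two paths I mean, assuming the structure of `vqvae.py` in this repository. The class name `VQVAE2Sketch` and the single-layer stand-ins are my own simplifications for illustration; only the roles of `dec_t` (used in encoding) versus `upsample_t` (used in decoding) matter:

```python
import torch
from torch import nn


class VQVAE2Sketch(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        # Top decoder: used during ENCODING to bring the quantized top
        # representation up by a factor of 2 so it can be concatenated with
        # the bottom encoder's features before bottom quantization.
        # (In the actual repo this is a full Decoder module, not one layer.)
        self.dec_t = nn.ConvTranspose2d(embed_dim, embed_dim, 4, stride=2, padding=1)
        # Separate upsampler: used during DECODING to upsample the quantized
        # top codes before concatenating with the bottom codes. This is the
        # layer the question refers to (upsample_t in the repo).
        self.upsample_t = nn.ConvTranspose2d(embed_dim, embed_dim, 4, stride=2, padding=1)
        # Final decoder maps the concatenated (top + bottom) codes to pixels.
        self.dec = nn.ConvTranspose2d(embed_dim * 2, 3, 4, stride=2, padding=1)

    def encode_path(self, quant_t, enc_b):
        # Encoding path: dec_t upsamples quant_t (x2) to match enc_b.
        dec_t = self.dec_t(quant_t)
        return torch.cat([dec_t, enc_b], 1)

    def decode(self, quant_t, quant_b):
        # Decoding path: a *different* layer, upsample_t, does the x2
        # upsampling, even though dec_t learned an upsampling of the same
        # representation.
        upsample_t = self.upsample_t(quant_t)
        quant = torch.cat([upsample_t, quant_b], 1)
        return self.dec(quant)


# Example shapes (e.g. for a 256x256 input): quant_t is 32x32, quant_b and
# enc_b are 64x64.
model = VQVAE2Sketch()
quant_t = torch.randn(1, 64, 32, 32)
quant_b = torch.randn(1, 64, 64, 64)
enc_b = torch.randn(1, 64, 64, 64)
print(model.encode_path(quant_t, enc_b).shape)  # torch.Size([1, 128, 64, 64])
print(model.decode(quant_t, quant_b).shape)     # torch.Size([1, 3, 128, 128])
```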