Thanks for your great work. It is known that an image can be converted to its corresponding spectrum and vice versa, so it can be assumed that the image pixel and spectrum distributions are equivalent to each other. I have noticed that there is no MSE loss in your G loss, since a random noise vector is used as the input. Is the spectral loss still necessary and helpful when doing supervised training where an MSE loss is included in the G loss, for example in super-resolution tasks?
Thank you for the interesting question! To be honest, we haven't done any experiments on super-resolution tasks. Nevertheless, I can imagine that they might suffer from artifacts in their spectrum as well, so our approach could be a solution to that problem.
As you mentioned, there is no MSE in our experiments with GANs; however, in the paper we present an AE toy example (with MSE) where the spectral loss shows beneficial effects.
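As a rough illustration only (not the exact formulation from the paper), a spectral term can be combined with a plain MSE objective along these lines; `spectral_weight` is a placeholder hyperparameter:

```python
import torch
import torch.nn.functional as F

def spectral_mse_loss(pred, target, spectral_weight=1.0):
    """Plain MSE plus a penalty on the difference of log-magnitude spectra.

    Illustrative sketch of the general idea, not the paper's exact loss;
    `spectral_weight` is a placeholder hyperparameter.
    """
    # Pixel-space reconstruction term.
    pixel_loss = F.mse_loss(pred, target)

    # Compare the 2D Fourier magnitudes of prediction and target.
    pred_spec = torch.fft.fft2(pred, norm="ortho").abs()
    target_spec = torch.fft.fft2(target, norm="ortho").abs()
    spec_loss = F.mse_loss(torch.log1p(pred_spec), torch.log1p(target_spec))

    return pixel_loss + spectral_weight * spec_loss
```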
In my experiment,
(1) I trained an SR model (model1, transposed convolution) with only the MSE loss.
(2) I trained another SR model (model2, nearest-neighbour upsampling + 3 convolutions) with the spectral loss added to the MSE loss (a rough sketch of the two upsampling heads is at the end of this comment).
I think this is similar to your AE toy model, but I have found that
(1) only a small fraction of the training samples produce fake high-frequency components at inference time with model1, and
(2) the fake high-frequency components are only slightly reduced when using model2.
I don't know why this happens. What upscale factor and dataset do you use in your AE? Or do you think there are other reasons that could cause it?
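For reference, a rough sketch of the two upsampling heads described above; the channel counts, kernel sizes, and 2x scale factor are placeholders, not my actual models:

```python
import torch.nn as nn

# (1) model1-style head: a single transposed convolution for 2x upsampling.
transposed_head = nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1)

# (2) model2-style head: nearest-neighbour upsampling followed by 3 convolutions.
nearest_head = nn.Sequential(
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 3, kernel_size=3, padding=1),
)
```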