You would need to train the model from scratch, because a G-conv has a different number of parameters than a regular conv. To turn any CNN architecture into a G-CNN, you need to replace every layer with an equivariant counterpart. For VGG, I think it would just amount to replacing each conv layer with a G-conv layer and then re-training. But I always recommend checking whether the architecture you made is actually equivariant, e.g. by feeding in an input image and a 90-degree rotated copy of it, and seeing whether the output of the last layer transforms in the way you expect (e.g. by rotating and channel-shuffling).
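A minimal sketch of that equivariance check, assuming a hypothetical `model` function standing in for the network's forward pass (here a pointwise ReLU, which is exactly rotation-equivariant, so the check passes; a real p4 G-CNN output would additionally need its rotation channels cyclically shifted before comparing):

```python
import numpy as np

def model(x):
    # Hypothetical stand-in for a G-CNN forward pass.
    # A pointwise nonlinearity commutes with spatial rotation,
    # so it is trivially rotation-equivariant.
    return np.maximum(x, 0)

x = np.random.rand(32, 32, 3)            # H x W x C input image
x_rot = np.rot90(x, k=1, axes=(0, 1))    # 90-degree rotated copy

y = model(x)
y_rot = model(x_rot)

# Equivariance: rotating the input should rotate the output.
# For a p4 G-CNN you would also cyclically shift the four
# rotation channels of y before this comparison.
assert np.allclose(np.rot90(y, k=1, axes=(0, 1)), y_rot)
print("equivariance check passed")
```

If the assertion fails for your converted architecture, some layer (e.g. a pooling or normalization layer) is likely breaking the symmetry.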
Thank you for your explanations. I notice that you listed results on the CIFAR-10 dataset. Have you pretrained your own model on a large-scale dataset such as ImageNet? If so, would it be possible to share your pretrained model and weights via cloud storage, please?
Dear Dr. Cohen,
Can we simply use gconv2d as a drop-in replacement for conv2d in any standard architecture, such as VGG16?
Or do we have to train the model from scratch?
Best wishes