Yes, I tried to reuse the original weights of a conv net trained on the MNIST dataset. With just two layers and alpha set to 0.5, initializing the OctaveConv parameters from the original weights recovers an accuracy of 91%, versus 98% for the original network.
But with 3 convolutional layers, the accuracy drops to 17%.
I think it will get much worse as the network goes deeper.
So my answer is no: currently, it seems we cannot adapt the finetuning strategy to OctaveConv.
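For reference, here is a minimal PyTorch sketch of one plausible way to slice a pretrained `Conv2d` weight into the four OctaveConv paths (high→high, high→low, low→high, low→low). The helper name `split_conv_weight` and the channel-slicing scheme are assumptions for illustration; the issue does not specify the exact mapping used in the experiment above.

```python
import torch
import torch.nn as nn

def split_conv_weight(conv: nn.Conv2d, alpha: float = 0.5):
    """Slice a pretrained Conv2d weight into the four OctaveConv paths
    along the channel axes.

    NOTE: this is only one plausible slicing scheme, assumed for
    illustration; it is not necessarily the mapping used in the
    experiment described above.
    """
    w = conv.weight.data                  # [C_out, C_in, k, k]
    c_out, c_in = w.shape[:2]
    c_out_l = int(alpha * c_out)          # low-frequency output channels
    c_in_l = int(alpha * c_in)            # low-frequency input channels

    w_h2h = w[c_out_l:, c_in_l:].clone()  # high -> high
    w_h2l = w[:c_out_l, c_in_l:].clone()  # high -> low
    w_l2h = w[c_out_l:, :c_in_l].clone()  # low  -> high
    w_l2l = w[:c_out_l, :c_in_l].clone()  # low  -> low
    return w_h2h, w_h2l, w_l2h, w_l2l

# Example: split a pretrained 3x3 conv's weights with alpha = 0.5.
conv = nn.Conv2d(32, 64, kernel_size=3, padding=1)
parts = split_conv_weight(conv, alpha=0.5)
print([p.shape for p in parts])
```

In a finetuning setup, each slice would then be copied into the corresponding OctaveConv branch before training resumes.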
Or do we need to train from scratch every time?