
is it possible to finetune from original weights? #14

Open
yyfcc17 opened this issue May 11, 2020 · 3 comments

Comments

@yyfcc17

yyfcc17 commented May 11, 2020

Or do we need to train from scratch every time?

@yyfcc17 yyfcc17 changed the title is it possible to finetune fro original weights? is it possible to finetune from original weights? May 11, 2020
@Hexuanfang

Hi, did you figure it out? I have the same question.

@yyfcc17
Author

yyfcc17 commented May 15, 2020

Yes, I tried to reuse the original weights of a conv net trained on the MNIST dataset. With just two layers and alpha set to 0.5, using the original weights to initialize the octave conv parameters recovers an accuracy of 91%, versus 98% for the original network.
But with 3 convolution layers, the accuracy drops to 17%.
I think it will get much worse as the network goes deeper.
So my answer is no: currently, it seems we cannot apply the finetune strategy to OctaveConv.
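
For concreteness, here is a minimal sketch (PyTorch assumed) of this kind of weight transfer. The branch names `conv_hh`/`conv_lh`/`conv_hl`/`conv_ll`, the channel ordering (high-frequency channels first), and the interior-layer setting (alpha applied to both input and output channels) are all assumptions for illustration, not this repo's actual API:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def init_octave_from_conv(oct_conv, pretrained: nn.Conv2d, alpha: float = 0.5):
    """Copy slices of a pretrained kernel into the four octave branches.

    Assumes oct_conv holds four plain nn.Conv2d branches (hypothetical
    names) and that high-frequency channels come first in the layout.
    """
    W = pretrained.weight                       # shape: (C_out, C_in, k, k)
    c_out, c_in = W.shape[0], W.shape[1]
    lo_out, lo_in = int(alpha * c_out), int(alpha * c_in)
    hi_out, hi_in = c_out - lo_out, c_in - lo_in

    oct_conv.conv_hh.weight.copy_(W[:hi_out, :hi_in])   # high in -> high out
    oct_conv.conv_lh.weight.copy_(W[:hi_out, hi_in:])   # low  in -> high out
    oct_conv.conv_hl.weight.copy_(W[hi_out:, :hi_in])   # high in -> low  out
    oct_conv.conv_ll.weight.copy_(W[hi_out:, hi_in:])   # low  in -> low  out

    if pretrained.bias is not None:
        # Each output frequency sums two branches, so give the bias to only
        # one branch per output to avoid counting it twice.
        oct_conv.conv_hh.bias.copy_(pretrained.bias[:hi_out])
        oct_conv.conv_hl.bias.copy_(pretrained.bias[hi_out:])
```

One likely reason a straight copy degrades with depth: the low-frequency branch runs at half spatial resolution, so the copied kernels see downsampled inputs they were never trained on, and that mismatch would compound layer by layer, which seems consistent with the 91% vs 17% gap above.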

@Hexuanfang

Hexuanfang commented May 16, 2020 via email
