Feat: Add BackboneLambdaFinetunningCallback #5377
Conversation
Hello @tchaton! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2021-01-08 17:42:50 UTC
Codecov Report
@@            Coverage Diff             @@
##    release/1.2-dev    #5377    +/-   ##
================================================
- Coverage          93%      93%     -0%
================================================
  Files             150      151      +1
  Lines           10490    10592    +102
================================================
+ Hits             9719     9808     +89
- Misses            771      784     +13
Looks good to me, I would just also expose the …
Co-authored-by: Justus Schock <[email protected]>
def finetunning_function(self, pl_module: pl.LightningModule, epoch: int, optimizer: Optimizer, opt_idx: int):
    if epoch == self.milestones[0]:
        unfreeze_and_add_param_group(
            module=pl_module.feature_extractor[-5:],
Should we add some info about what is actually being unfrozen here?
Aside from small spelling nits, everything looks good to me! I also like the new hook; I was thinking we could simplify the name to on_before_accelerator_setup, but either way, the more hooks the better!
…hLightning/pytorch-lightning into feat/finetunning_callback
class FinetunningBoringModel(BoringModel):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(32, 32, bias=False), nn.BatchNorm1d(32), nn.ReLU())
Does it work without a backbone, with a pure BoringModel?
No, it expects a backbone module.
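For illustration, a minimal sketch of the kind of module this callback expects; the BackboneBoringModel name, layer sizes, and head are made up here, the only requirement discussed above being the backbone attribute:

import torch.nn as nn
import pytorch_lightning as pl


class BackboneBoringModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # The finetuning callback looks for this `backbone` submodule to freeze/unfreeze.
        self.backbone = nn.Sequential(nn.Linear(32, 32, bias=False), nn.BatchNorm1d(32), nn.ReLU())
        self.head = nn.Linear(32, 2)

    def forward(self, x):
        return self.head(self.backbone(x))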
@@ -27,6 +27,10 @@ class Callback(abc.ABC):
    Subclass this class and override any of the relevant hooks
    """

    def on_before_accelerator_backend_setup(self, trainer, pl_module):
Do we need to add a new callback hook? Can't it be done with the existing ones?
Great question!
We have on_init_start and on_init_end in the init function, but we don't have access to the model there.
Then we have on_fit_start in the fit function, but configure_optimizers has already been called during accelerator_backend.setup() by that point.
Therefore, I needed to introduce a new callback hook so I could freeze the model before the configure_optimizers call and then filter out the parameters that don't require gradients with
optimizer = optim.Adam(filter(lambda p: p.requires_grad, self.parameters()), lr=self.lr)
Hope this answers your question.
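For context, a minimal sketch of that flow. The FreezeBackboneCallback name and the backbone attribute are illustrative assumptions, not part of this PR:

from pytorch_lightning.callbacks import Callback


class FreezeBackboneCallback(Callback):
    # Hypothetical callback: freeze the backbone before the accelerator backend is set up,
    # i.e. before configure_optimizers runs.
    def on_before_accelerator_backend_setup(self, trainer, pl_module):
        for param in pl_module.backbone.parameters():
            param.requires_grad = False

The LightningModule's configure_optimizers can then keep only the parameters that still require gradients, exactly as in the optim.Adam line above.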
In that case, I think on_before_configure_optimizers would be a better name; it's easier for users to understand.
Can you also mention the added hook in the CHANGELOG?
Co-authored-by: Carlos Mocholí <[email protected]>
…hLightning/pytorch-lightning into feat/finetunning_callback
What does this PR do?
This PR adds a Fine-tuning Callback, a new callback hook on_before_accelerator_backend_setup, and an update to the fine-tuning example so it uses the new Callback. The example was broken and didn't converge.
Fixes # (issue) <- this links the related issue to this PR
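As a rough usage sketch, this is how the new callback might be wired into training. The import path and the unfreeze_backbone_at_epoch argument are assumptions for illustration only, not the confirmed signature:

import pytorch_lightning as pl
# Assumed export location; check the PR for the actual module path.
from pytorch_lightning.callbacks import BackboneLambdaFinetunningCallback

# Hypothetical argument name; the real constructor may differ.
finetuning_cb = BackboneLambdaFinetunningCallback(unfreeze_backbone_at_epoch=10)
trainer = pl.Trainer(max_epochs=20, callbacks=[finetuning_cb])
# trainer.fit(model) would then gradually unfreeze model.backbone during training.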
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines.
Did you have fun?
Make sure you had fun coding 🙃