Attach data refactor and tuner bugs [4/n] #7258
Conversation
Hello @carmocca! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻
Comment last updated at 2021-04-30 13:34:56 UTC
Codecov Report

@@          Coverage Diff          @@
##          master   #7258   +/-  ##
=====================================
- Coverage     91%     91%    -0%
=====================================
  Files        199     199
  Lines      12808   12807     -1
=====================================
- Hits       11688   11686     -2
- Misses      1120    1121     +1
Why is this one PR doing so many things?
All of those things are tied together. I just wrote the list of changes so it's easier for reviewers to follow. Edit: clarified the change list at the top and opened #7262 with parts of this PR.
Co-authored-by: Nicki Skafte <[email protected]>
# links data to the trainer
self.data_connector.attach_data(model, val_dataloaders=val_dataloaders, datamodule=datamodule)
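For context, here is a minimal illustrative sketch of what a data connector with an attach_data method could look like. This is not Lightning's actual implementation; the attribute names and the hook-patching approach below are assumptions.

class DataConnector:
    """Illustration only: remembers the data passed to a trainer entry point."""

    def __init__(self, trainer):
        self.trainer = trainer

    def attach_data(self, model, train_dataloaders=None, val_dataloaders=None, datamodule=None):
        # Store references so the loops can find the data once the trainer launches.
        if datamodule is not None:
            self.trainer.datamodule = datamodule  # assumed attribute name
        if train_dataloaders is not None:
            model.train_dataloader = lambda: train_dataloaders  # patch the hook to return the given loaders
        if val_dataloaders is not None:
            model.val_dataloader = lambda: val_dataloaders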
Side question: in trainer.fit we allow trainer.fit(datamodule), but not for test and validate. Do you know why? Did we decide that explicit naming by keyword is better?
Good eye. I actually plan to open a PR adding this.
Not doing it yet, as it would give me conflicts with this PR.
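For reference, the call patterns under discussion look roughly like this, assuming trainer, model, and datamodule already exist; the positional/keyword behavior is as described in the comments above, not a guaranteed API contract.

trainer.fit(model, datamodule)                  # fit accepts the datamodule positionally
trainer.validate(model, datamodule=datamodule)  # validate requires the keyword argument
trainer.test(model, datamodule=datamodule)      # test requires the keyword argument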
max_lr: maximum learning rate to investigate
num_training: number of learning rates to test
Not related to this PR: the num_training param doesn't seem intuitive.
cc: @SkafteNicki
I'm just moving the docstring around here
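For clarity, a small usage sketch of the two parameters documented above. It assumes trainer and model (a LightningModule) have already been constructed; lr_find returns an _LRFinder whose suggestion() gives the proposed learning rate.

lr_finder = trainer.tuner.lr_find(
    model,             # existing LightningModule (assumed)
    max_lr=1.0,        # maximum learning rate to investigate
    num_training=100,  # number of learning rates to test
)
print(lr_finder.suggestion())  # learning rate suggested by the sweep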
LGTM!
Co-authored-by: Kaushik B <[email protected]>
LGTM!
What does this PR do?
Attach data refactor:
- attach_data is used to register the DataLoaders/DataModule.
- Attaching the DataLoaders was moved to trainer._launch().
Tuner bugs:
- trainer.tuner.lr_find() and trainer.tuner.scale_batch_size() now call trainer.tune() internally. This solves a bug where the trainer state was not set properly, since it is set in trainer.tune().
- trainer.tune() now has arguments for the different tuning functions.
- trainer.tune() now returns the tuning result as a {'scale_batch_size': Optional[int], 'lr_find': Optional[_LRFinder]} dictionary (see the usage sketch below).
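A hedged usage sketch of the trainer.tune() behavior described above. Only the shape of the returned dictionary is stated in this PR; the names of the per-tuner argument dicts below are assumptions.

result = trainer.tune(
    model,
    scale_batch_size_kwargs={"init_val": 32},  # assumed argument name
    lr_find_kwargs={"num_training": 100},      # assumed argument name
)
# result matches {'scale_batch_size': Optional[int], 'lr_find': Optional[_LRFinder]};
# an entry may be None if the corresponding tuner did not run.
new_batch_size = result["scale_batch_size"]
lr_finder = result["lr_find"]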
The tuning changes are tied to the attach_data changes because the tuner relied on the setup_fit function, which has been removed in this PR.

Before submitting
PR review