[Feature request]: Allow weights transfer and/or restarting from earlier checkpoint with experiment CLI #263
Labels
- enhancement: New feature or request
- training: Issues related to model training
- ux: User experience, quality of life changes
Feature/behavior summary
To my knowledge, the experiment CLI doesn't support `load_from_checkpoint` or `from_pretrained_encoder` style restarts, which would be very useful either for restarting training (with more epochs, say) or for transferring an encoder to a new task.

Request attributes
Related issues
No response
Solution description
An example solution would be to include key/value pairs in a model/task YAML configuration that let the user set the type of reload they would like and where the weights are located.
For example, for continuing training:
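A sketch of what this could look like; the `weights` block, its key names, and the path are illustrative, not an existing schema:

```yaml
# Hypothetical keys inside the model/task YAML configuration.
weights:
  method: load_from_checkpoint   # resume full training state from an earlier run
  type: local                    # checkpoint is a file on the local filesystem
  path: checkpoints/last.ckpt    # placeholder path to the checkpoint
```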
To transfer the weights from a pretrained encoder and load weights from a `wandb` artifact:
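A corresponding sketch, where the key names and the artifact reference are placeholders:

```yaml
# Hypothetical keys; the artifact reference is a placeholder.
weights:
  method: from_pretrained_encoder              # transfer only the encoder weights
  type: wandb                                  # checkpoint stored as a wandb artifact
  artifact: entity/project/model-weights:v0    # placeholder artifact reference
```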
The `method` key would determine whether you would use the `Task.load_from_checkpoint` or `Task.from_pretrained_encoder` method. The `type` key can be used to indicate the kind of checkpoint, i.e. a local file or a `wandb` artifact. In the latter case it would be a little more complicated, as you would have to use the `wandb` API to initialize a run that doesn't synchronize, download the weight artifact, and then map it to a load method.

An alternative interface could be to just pass those method names directly:
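Both sketches below use placeholder key names and paths:

```yaml
# Hypothetical: the load method name itself becomes the key.
load_from_checkpoint:
  type: local
  path: checkpoints/last.ckpt   # placeholder path
```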
and
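```yaml
# Hypothetical: the load method name itself becomes the key.
from_pretrained_encoder:
  type: wandb
  artifact: entity/project/model-weights:v0   # placeholder artifact reference
```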
Additional notes
No response