Basic Tutorial based on PyTorch MNIST example #21
Conversation
Also, lint.
examples/01_mnist_template.py
linear1: LinearConf = LinearConf(in_features=9216, out_features=128)
linear2: LinearConf = LinearConf(in_features=128, out_features=10)

@dataclass
I have some configs for torchvision in NeMo:
https://github.com/NVIDIA/NeMo/blob/main-vis-res/nemo/collections/cv/datasets/configs.py
for transforms:
I can add/port them to this repo next. Does that make sense?
Definitely! Since you started working on the dataset portion of the project, I figured I'd work on all the other components first and give you some time =].
We should be able to generate these dataset configs using configen as well. Shall we set that up and create a PR? At least we can mirror the vision datasets in torchvision. I guess for now `transform` and `target_transform` have to be passthrough args.
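A rough sketch of what such a generated dataset config might look like (this is illustrative only, not actual configen output; the class name is hypothetical and the fields are patterned on the `torchvision.datasets.MNIST` constructor):

```python
from dataclasses import dataclass
from typing import Any

from omegaconf import MISSING


# Illustrative sketch: a config mirroring torchvision.datasets.MNIST.
# Callables like transforms are not trivially serializable, so they stay
# as passthrough args to be supplied in code at instantiation time.
@dataclass
class MNISTDatasetConf:
    _target_: str = "torchvision.datasets.MNIST"
    root: str = MISSING
    train: bool = True
    download: bool = False
    transform: Any = None  # passthrough: set in code, not in the config
    target_transform: Any = None  # passthrough: set in code, not in the config
```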
Once this is done, I will add them to the Intermediate Tutorial (there will be a separate PR for that soon) 🙂
Leaving configuration of the …
Nice.
Added some inline comments for your consideration.
examples/mnist_00.md
##### Config Store
*where we store our configs*

Briefly, the concept behind the `ConfigStore` is to create a singleton object of this class and register all config objects to it. This tutorial demonstrates the simplest approach to using the `ConfigStore`.
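For concreteness, the registration pattern described here looks roughly like this (the config name and fields are illustrative, not the tutorial's exact values):

```python
from dataclasses import dataclass

from hydra.core.config_store import ConfigStore


@dataclass
class MNISTConf:
    batch_size: int = 64
    epochs: int = 14
    lr: float = 1.0


# ConfigStore.instance() returns the singleton; each config object is
# registered under a name that @hydra.main can later reference.
cs = ConfigStore.instance()
cs.store(name="mnistconf", node=MNISTConf)
```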
Complexity here has multiple dimensions:
- Config style:
  - File based
  - Dataclass based
  - Dataclass as schema for files
- Config modeling:
  - Single config
  - Config groups

While the pure dataclass approach is self-contained and easy to explain (and is also more appropriate for this tutorial), I think file based + single config is probably simpler.
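A minimal sketch of the file based + single config style, for contrast (the file name, fields, and values are assumptions, not from the tutorial):

```python
import hydra
from omegaconf import DictConfig

# Assumed config.yaml sitting next to this script:
#
#   batch_size: 64
#   epochs: 14
#   lr: 1.0


@hydra.main(config_path=".", config_name="config")
def main(cfg: DictConfig) -> None:
    # All values come from the single YAML file; no dataclasses involved.
    print(cfg.batch_size, cfg.lr)


if __name__ == "__main__":
    main()
```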
I guess since we're committing to the "dataclass based" style, we should remove the term "config schema" from the tutorial? We aren't really using the dataclasses as config schemas. I suppose the proper term would be "structured config templates"? @omry
Just Structured Configs.
We can have an early paragraph that explains that Hydra supports config files and Structured Configs, which are configs defined by dataclasses. You can look at the Structured Configs tutorial in Hydra for definitions.
I think a more advanced version of the tutorial can get into dataclasses as schema for config files.
This in general will be much more elegant once Hydra has support for recursive default lists, so we should probably wait for Hydra 1.1 for it.
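A rough sketch of that schema pattern, assuming Hydra's name-based schema matching (class and file names here are illustrative):

```python
from dataclasses import dataclass

from hydra.core.config_store import ConfigStore


@dataclass
class MNISTSchema:
    batch_size: int = 64
    lr: float = 1.0


# Storing the schema under the same name as config.yaml asks Hydra to
# validate the file's contents against it, so a typo like `btach_size: 32`
# or a string where a float belongs fails at load time, not at runtime.
cs = ConfigStore.instance()
cs.store(name="config", node=MNISTSchema)
```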
I like that plan.
I'm going to remove a little more content from the Basic example to streamline it a bit more and migrate some to the WIP Intermediate.
Will do another read-through tomorrow.
4f148ed to 585fd50
dataset1 = datasets.MNIST("../data", train=True, download=True, transform=transform)
dataset2 = datasets.MNIST("../data", train=False, transform=transform)
The path here takes some attention: since Hydra changes the cwd, you should probably use `to_absolute_path()`. See the basic tutorial for details.
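Applied to the snippet above, that suggestion might look like this (assuming `transform` is defined as in the example):

```python
from hydra.utils import to_absolute_path
from torchvision import datasets

# Hydra changes the working directory per run, so resolve the data path
# against the original launch directory before handing it to torchvision.
data_root = to_absolute_path("../data")
dataset1 = datasets.MNIST(data_root, train=True, download=True, transform=transform)
dataset2 = datasets.MNIST(data_root, train=False, transform=transform)
```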
adadelta: AdadeltaConf = AdadeltaConf()
steplr: StepLRConf = StepLRConf(
    step_size=1
)  # we pass a default for step_size since it is required, but missing a default in PyTorch (and consequently in hydra-torch)
black probably moved this comment.
scheduler.step()

if cfg.save_model:  # DIFF
    torch.save(model.state_dict(), cfg.checkpoint_name)  # DIFF
Worth commenting that the checkpoint is saved to an automatically generated working directory here.
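One possible way to surface that in the example (a sketch, reusing `cfg`, `model`, and `torch` from the surrounding code; the outputs/&lt;date&gt;/&lt;time&gt; layout is Hydra's default run directory):

```python
import os

if cfg.save_model:  # DIFF
    # Hydra has already chdir'd into its auto-generated run directory
    # (by default something like outputs/2020-11-30/12-00-00/), so this
    # relative path lands there rather than next to the script.
    print(f"checkpoint path: {os.path.join(os.getcwd(), cfg.checkpoint_name)}")
    torch.save(model.state_dict(), cfg.checkpoint_name)  # DIFF
```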
Fixes #13