Currently, configuration for `train.py` is handled with OmegaConf. This made more sense when the tasks (and accompanying trainer code) were fragmented, as we could easily define per-task configuration. Now that the trainer code we would like to include in base TorchGeo is being generalized into things like `ClassificationTask` and `SemanticSegmentationTask`, and it is clear that more complicated training configurations won't be supported by torchgeo proper, it might make sense to pull out the OmegaConf part and go with a simpler `argparse`-based approach. Bonus: this would also let us drop a dependency. I'm not sure exactly how the argparse approach would work in all cases, but it's worth more thought!
Lightning has a few pieces of docs that can help with this:
Whatever we settle on here should definitely still allow passing arguments via a YAML config file. This allows reproducible benchmark experiment configurations to be saved in source control.
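To make the idea concrete, here is a minimal sketch of how an `argparse`-based `train.py` could still accept a YAML config file while letting explicit CLI flags win. The `--config` flag and the specific argument names here are illustrative assumptions, not the actual torchgeo CLI:

```python
# Hypothetical sketch: argparse-based train.py that still accepts a YAML
# config file for reproducible experiments. Flag names are illustrative.
import argparse

import yaml


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="torchgeo training script")
    parser.add_argument("--config", type=str, help="path to a YAML config file")
    parser.add_argument("--task", type=str, help="e.g. classification, segmentation")
    parser.add_argument("--lr", type=float, default=1e-3)
    parser.add_argument("--batch-size", type=int, default=32)
    args = parser.parse_args()

    # Values from the YAML file act as defaults; explicit CLI flags override
    # them. YAML keys must match the argparse dests (e.g. "batch_size").
    if args.config:
        with open(args.config) as f:
            defaults = yaml.safe_load(f)
        parser.set_defaults(**defaults)
        args = parser.parse_args()
    return args


if __name__ == "__main__":
    print(parse_args())
```

The YAML file stays the source of truth that gets checked into source control, while one-off overrides happen on the command line.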
In #226 I did a pretty impressive job of abusing the configs for the sake of testing. Once the release is finished, we'll want to go back through this and clean things up:

1. Create a `tests/conf` directory to hold testing-specific config settings so that we don't need to abuse `conf/task_defaults` anymore.
2. Reduce some of the duplication between `conf/*.yaml` and `conf/task_defaults/*.yaml`.
3. Improve the documentation/CLI so it's easier to tell what settings are available.

I also want to see if there's some kind of validation we could use to ensure that all settings in a config file are actually valid (see the sketch below).
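One candidate for that validation, if we keep OmegaConf around, is its structured configs: merging a YAML file against a dataclass schema rejects unknown keys and type-checks values. A minimal sketch, where the schema fields and the config path are hypothetical examples:

```python
# Sketch of config validation via OmegaConf structured configs. The fields
# on ExperimentSchema and the config path are illustrative assumptions.
from dataclasses import dataclass

from omegaconf import OmegaConf


@dataclass
class ExperimentSchema:
    task: str = "classification"
    learning_rate: float = 1e-3
    num_classes: int = 10


schema = OmegaConf.structured(ExperimentSchema)
conf = OmegaConf.load("conf/experiment.yaml")

# merge() type-checks each value against the dataclass and raises on keys
# that are not declared in the schema, so typos in a config file fail fast.
merged = OmegaConf.merge(schema, conf)
```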
In #286 we changed the number of classes in the `task_defaults` config files to make the tests pass. As a side effect, this breaks training runs because the config files no longer specify the correct number of classes for the actual datasets. Users shouldn't need to figure out and enter the correct number of classes for a dataset in a config file just to run `train.py`; one way to avoid this is sketched below.
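For example, the datamodule could report its own class count and `train.py` could forward it to the task. The `num_classes` attribute on the datamodule is an assumption for illustration, not the current torchgeo API:

```python
# Hypothetical sketch: derive num_classes from the datamodule instead of a
# config file, so users never have to look it up. The datamodule class and
# its num_classes attribute are assumptions, not the current torchgeo API.
class UCMercedDataModule:
    """Stand-in datamodule that knows its own class count."""

    num_classes = 21  # UC Merced has 21 land-use classes


datamodule = UCMercedDataModule()

# train.py would pass the dataset-derived value through to the task, e.g.
# when constructing a ClassificationTask:
task_kwargs = {
    "classification_model": "resnet18",
    "num_classes": datamodule.num_classes,  # dataset-derived, not user-entered
    "learning_rate": 1e-3,
}
```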