Structured Configs with URL for torch.optim and torch.utils.data #18

Open · wants to merge 9 commits into base: main
2 changes: 2 additions & 0 deletions config/torch/optim/adadelta.py
@@ -1,4 +1,5 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT
Contributor: Please do not change the headers until we resolve the discussion about it.

#
# Generated by configen, do not edit.
# See https://github.com/facebookresearch/hydra/tree/master/tools/configen
@@ -13,6 +14,7 @@

@dataclass
class AdadeltaConf:

Contributor: Why are you changing the formatting of generated code?

_target_: str = "torch.optim.adadelta.Adadelta"
params: Any = MISSING
lr: Any = 1.0
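For context, a minimal usage sketch (not part of this diff) of how a generated config can be consumed with `hydra.utils.instantiate`; the toy `Linear` model and the `lr` value are illustrative assumptions, and the import path mirrors the `config/torch/optim` layout above:

```python
# Illustrative sketch only. `params` is MISSING in the generated config,
# so it is supplied as a passthrough argument at instantiation time.
import torch
from hydra.utils import instantiate
from omegaconf import OmegaConf

from config.torch.optim.adadelta import AdadeltaConf

model = torch.nn.Linear(10, 2)  # placeholder model (assumption)

cfg = OmegaConf.structured(AdadeltaConf(lr=0.5))
optimizer = instantiate(cfg, params=model.parameters())
print(type(optimizer))  # <class 'torch.optim.adadelta.Adadelta'>
```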
4 changes: 4 additions & 0 deletions config/torch/optim/adagrad.py
@@ -1,4 +1,5 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT
#
# Generated by configen, do not edit.
# See https://github.com/facebookresearch/hydra/tree/master/tools/configen
@@ -13,6 +14,9 @@

@dataclass
class AdagradConf:
"""For more details on parameteres please refer to the original documentation:
https://pytorch.org/docs/stable/optim.html#torch.optim.Adagrad
"""
Comment on lines +17 to +19
Contributor: Let's hold on with this change until we finalize the change to configen. This is not urgent and we can do it with configen once it's ready.

_target_: str = "torch.optim.adagrad.Adagrad"
params: Any = MISSING
lr: Any = 0.01
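A related sketch (again, not part of the diff): to make these configs selectable from the command line, they would typically be registered with Hydra's ConfigStore. The group and name below are illustrative choices, not something this PR mandates:

```python
# Hypothetical registration sketch; group/name are assumptions.
from hydra.core.config_store import ConfigStore

from config.torch.optim.adagrad import AdagradConf

cs = ConfigStore.instance()
cs.store(group="optimizer", name="adagrad", node=AdagradConf)
```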
4 changes: 4 additions & 0 deletions config/torch/optim/adam.py
@@ -1,4 +1,5 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT
#
# Generated by configen, do not edit.
# See https://github.com/facebookresearch/hydra/tree/master/tools/configen
@@ -13,6 +14,9 @@

@dataclass
class AdamConf:
"""For more details on parameteres please refer to the original documentation:
https://pytorch.org/docs/stable/optim.html#torch.optim.Adam
"""
_target_: str = "torch.optim.adam.Adam"
params: Any = MISSING
lr: Any = 0.001
4 changes: 4 additions & 0 deletions config/torch/optim/adamax.py
@@ -1,4 +1,5 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT
#
# Generated by configen, do not edit.
# See https://github.com/facebookresearch/hydra/tree/master/tools/configen
@@ -13,6 +14,9 @@

@dataclass
class AdamaxConf:
"""For more details on parameteres please refer to the original documentation:
https://pytorch.org/docs/stable/optim.html#torch.optim.Adamax
"""
_target_: str = "torch.optim.adamax.Adamax"
params: Any = MISSING
lr: Any = 0.002
4 changes: 4 additions & 0 deletions config/torch/optim/adamw.py
@@ -1,4 +1,5 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT
#
# Generated by configen, do not edit.
# See https://github.com/facebookresearch/hydra/tree/master/tools/configen
@@ -13,6 +14,9 @@

@dataclass
class AdamWConf:
"""For more details on parameteres please refer to the original documentation:
https://pytorch.org/docs/stable/optim.html#torch.optim.AdamW
"""
_target_: str = "torch.optim.adamw.AdamW"
params: Any = MISSING
lr: Any = 0.001
4 changes: 4 additions & 0 deletions config/torch/optim/asgd.py
@@ -1,4 +1,5 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT
#
# Generated by configen, do not edit.
# See https://github.com/facebookresearch/hydra/tree/master/tools/configen
@@ -13,6 +14,9 @@

@dataclass
class ASGDConf:
"""For more details on parameteres please refer to the original documentation:
https://pytorch.org/docs/stable/optim.html#torch.optim.ASGD
"""
_target_: str = "torch.optim.asgd.ASGD"
params: Any = MISSING
lr: Any = 0.01
4 changes: 4 additions & 0 deletions config/torch/optim/lbfgs.py
@@ -1,4 +1,5 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT
#
# Generated by configen, do not edit.
# See https://github.com/facebookresearch/hydra/tree/master/tools/configen
@@ -13,6 +14,9 @@

@dataclass
class LBFGSConf:
"""For more details on parameteres please refer to the original documentation:
https://pytorch.org/docs/stable/optim.html#torch.optim.LBFGS
"""
_target_: str = "torch.optim.lbfgs.LBFGS"
params: Any = MISSING
lr: Any = 1
11 changes: 11 additions & 0 deletions config/torch/optim/lr_scheduler.py
@@ -1,4 +1,5 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT
#
# Generated by configen, do not edit.
# See https://github.com/facebookresearch/hydra/tree/master/tools/configen
@@ -13,6 +14,7 @@

@dataclass
class LambdaLRConf:

_target_: str = "torch.optim.lr_scheduler.LambdaLR"
optimizer: Any = MISSING
lr_lambda: Any = MISSING
@@ -21,6 +23,7 @@ class LambdaLRConf:

@dataclass
class MultiplicativeLRConf:

_target_: str = "torch.optim.lr_scheduler.MultiplicativeLR"
optimizer: Any = MISSING
lr_lambda: Any = MISSING
@@ -29,6 +32,7 @@ class MultiplicativeLRConf:

@dataclass
class StepLRConf:

_target_: str = "torch.optim.lr_scheduler.StepLR"
optimizer: Any = MISSING
step_size: Any = MISSING
@@ -38,6 +42,7 @@ class StepLRConf:

@dataclass
class MultiStepLRConf:

_target_: str = "torch.optim.lr_scheduler.MultiStepLR"
optimizer: Any = MISSING
milestones: Any = MISSING
@@ -47,6 +52,7 @@ class MultiStepLRConf:

@dataclass
class ExponentialLRConf:

_target_: str = "torch.optim.lr_scheduler.ExponentialLR"
optimizer: Any = MISSING
gamma: Any = MISSING
@@ -55,6 +61,7 @@ class ExponentialLRConf:

@dataclass
class CosineAnnealingLRConf:

_target_: str = "torch.optim.lr_scheduler.CosineAnnealingLR"
optimizer: Any = MISSING
T_max: Any = MISSING
@@ -64,6 +71,7 @@ class CosineAnnealingLRConf:

@dataclass
class ReduceLROnPlateauConf:

_target_: str = "torch.optim.lr_scheduler.ReduceLROnPlateau"
optimizer: Any = MISSING
mode: Any = "min"
tkornuta-nvidia marked this conversation as resolved.
@@ -79,6 +87,7 @@ class ReduceLROnPlateauConf:

@dataclass
class CyclicLRConf:

_target_: str = "torch.optim.lr_scheduler.CyclicLR"
optimizer: Any = MISSING
base_lr: Any = MISSING
@@ -97,6 +106,7 @@ class CyclicLRConf:

@dataclass
class CosineAnnealingWarmRestartsConf:

_target_: str = "torch.optim.lr_scheduler.CosineAnnealingWarmRestarts"
optimizer: Any = MISSING
T_0: Any = MISSING
@@ -107,6 +117,7 @@ class CosineAnnealingWarmRestartsConf:

@dataclass
class OneCycleLRConf:

_target_: str = "torch.optim.lr_scheduler.OneCycleLR"
optimizer: Any = MISSING
max_lr: Any = MISSING
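Since every scheduler config above leaves `optimizer` as MISSING, a scheduler is naturally built in two steps: instantiate the optimizer first, then pass it through. A hedged sketch, assuming the same instantiate passthrough pattern as for the optimizers (the model and `step_size` value are illustrative):

```python
# Illustrative two-step instantiation; values are assumptions.
import torch
from hydra.utils import instantiate
from omegaconf import OmegaConf

from config.torch.optim.adadelta import AdadeltaConf
from config.torch.optim.lr_scheduler import StepLRConf

model = torch.nn.Linear(4, 1)  # placeholder
optimizer = instantiate(OmegaConf.structured(AdadeltaConf()),
                        params=model.parameters())
scheduler = instantiate(OmegaConf.structured(StepLRConf(step_size=30)),
                        optimizer=optimizer)
```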
4 changes: 4 additions & 0 deletions config/torch/optim/rmsprop.py
@@ -1,4 +1,5 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT
#
# Generated by configen, do not edit.
# See https://github.com/facebookresearch/hydra/tree/master/tools/configen
@@ -13,6 +14,9 @@

@dataclass
class RMSpropConf:
"""For more details on parameteres please refer to the original documentation:
https://pytorch.org/docs/stable/optim.html#torch.optim.RMSprop
"""
_target_: str = "torch.optim.rmsprop.RMSprop"
params: Any = MISSING
lr: Any = 0.01
4 changes: 4 additions & 0 deletions config/torch/optim/rprop.py
@@ -1,4 +1,5 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT
#
# Generated by configen, do not edit.
# See https://github.com/facebookresearch/hydra/tree/master/tools/configen
@@ -13,6 +14,9 @@

@dataclass
class RpropConf:
"""For more details on parameteres please refer to the original documentation:
https://pytorch.org/docs/stable/optim.html#torch.optim.Rprop
"""
_target_: str = "torch.optim.rprop.Rprop"
params: Any = MISSING
lr: Any = 0.01
4 changes: 4 additions & 0 deletions config/torch/optim/sgd.py
@@ -1,4 +1,5 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT
#
# Generated by configen, do not edit.
# See https://github.com/facebookresearch/hydra/tree/master/tools/configen
@@ -13,6 +14,9 @@

@dataclass
class SGDConf:
"""For more details on parameteres please refer to the original documentation:
https://pytorch.org/docs/stable/optim.html#torch.optim.SGD
"""
_target_: str = "torch.optim.sgd.SGD"
params: Any = MISSING
lr: Any = MISSING # _RequiredParameter
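Note the `lr: Any = MISSING  # _RequiredParameter` line above: unlike the other optimizers, torch.optim.SGD has no default learning rate, so the generated config leaves it mandatory. A small sketch of what that means in practice:

```python
# lr stays MISSING until explicitly set; accessing it before then raises.
from omegaconf import OmegaConf

from config.torch.optim.sgd import SGDConf

cfg = OmegaConf.structured(SGDConf)
assert OmegaConf.is_missing(cfg, "lr")
cfg.lr = 0.1  # e.g. set in code, or via a command-line override
```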
4 changes: 4 additions & 0 deletions config/torch/optim/sparse_adam.py
@@ -1,4 +1,5 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT
#
# Generated by configen, do not edit.
# See https://github.com/facebookresearch/hydra/tree/master/tools/configen
@@ -13,6 +14,9 @@

@dataclass
class SparseAdamConf:
"""For more details on parameteres please refer to the original documentation:
https://pytorch.org/docs/stable/optim.html#torch.optim.SparseAdam
"""
_target_: str = "torch.optim.sparse_adam.SparseAdam"
params: Any = MISSING
lr: Any = 0.001
33 changes: 33 additions & 0 deletions config/torch/utils/data.py
@@ -0,0 +1,33 @@
# Copyright (c) 2020 Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT
#
# Generated by configen, do not edit.
# See https://github.com/facebookresearch/hydra/tree/master/tools/configen
# fmt: off
# isort:skip_file
# flake8: noqa

from dataclasses import dataclass, field
from omegaconf import MISSING
from typing import Any


@dataclass
class DataLoaderConf:
"""For more details on parameteres please refer to the original documentation:
https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader
"""
_target_: str = "torch.utils.data.DataLoader"
dataset: Any = MISSING
batch_size: Any = 1
shuffle: Any = False
sampler: Any = None
batch_sampler: Any = None
num_workers: Any = 0
collate_fn: Any = None
pin_memory: Any = False
drop_last: Any = False
timeout: Any = 0
worker_init_fn: Any = None
multiprocessing_context: Any = None
generator: Any = None
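As with the optimizer configs, `dataset` is the one MISSING field here, so it is passed through at instantiation. A usage sketch (not part of the diff; the TensorDataset is a stand-in assumption):

```python
# Illustrative sketch: building a DataLoader from the generated config.
import torch
from hydra.utils import instantiate
from omegaconf import OmegaConf

from config.torch.utils.data import DataLoaderConf

dataset = torch.utils.data.TensorDataset(torch.randn(8, 3))  # stand-in
cfg = OmegaConf.structured(DataLoaderConf(batch_size=4, shuffle=True))
loader = instantiate(cfg, dataset=dataset)
for (batch,) in loader:
    print(batch.shape)  # torch.Size([4, 3])
```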
15 changes: 15 additions & 0 deletions conf/configen.yaml → sources/torch.optim/configen.yaml
@@ -1,9 +1,13 @@
# Copyright (c) 2020 Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT

configen:
# output directory
output_dir: ${hydra:runtime.cwd}

header: |
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# SPDX-License-Identifier: MIT
#
# Generated by configen, do not edit.
# See https://github.com/facebookresearch/hydra/tree/master/tools/configen
@@ -21,56 +25,67 @@ configen:
- Adadelta

- name: torch.optim.adagrad
url: https://pytorch.org/docs/stable/optim.html#torch.optim.Adagrad
# for each module, a list of classes
classes:
- Adagrad

- name: torch.optim.adam
url: https://pytorch.org/docs/stable/optim.html#torch.optim.Adam
# for each module, a list of classes
classes:
- Adam

- name: torch.optim.adamw
url: https://pytorch.org/docs/stable/optim.html#torch.optim.AdamW
# for each module, a list of classes
classes:
- AdamW

- name: torch.optim.sparse_adam
url: https://pytorch.org/docs/stable/optim.html#torch.optim.SparseAdam
# for each module, a list of classes
classes:
- SparseAdam

- name: torch.optim.adamax
url: https://pytorch.org/docs/stable/optim.html#torch.optim.Adamax
# for each module, a list of classes
classes:
- Adamax

- name: torch.optim.asgd
url: https://pytorch.org/docs/stable/optim.html#torch.optim.ASGD
# for each module, a list of classes
classes:
- ASGD

- name: torch.optim.sgd
url: https://pytorch.org/docs/stable/optim.html#torch.optim.SGD
# for each module, a list of classes
classes:
- SGD

- name: torch.optim.rprop
url: https://pytorch.org/docs/stable/optim.html#torch.optim.Rprop
# for each module, a list of classes
classes:
- Rprop

- name: torch.optim.rmsprop
url: https://pytorch.org/docs/stable/optim.html#torch.optim.RMSprop
# for each module, a list of classes
classes:
- RMSprop

- name: torch.optim.lbfgs
url: https://pytorch.org/docs/stable/optim.html#torch.optim.LBFGS
# for each module, a list of classes
classes:
- LBFGS

- name: torch.optim.lr_scheduler
url:
classes:
# Schedulers
- LambdaLR
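The `url` entries added above are what feed the docstrings seen in the generated files earlier in this diff, so the documentation link is also available at runtime. A quick illustrative check:

```python
# The generated docstring carries the documentation URL.
from config.torch.optim.adagrad import AdagradConf

print(AdagradConf.__doc__)
# For more details on parameters, please refer to the original documentation:
# https://pytorch.org/docs/stable/optim.html#torch.optim.Adagrad
```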