Upgrade template 1.4.0->2.0.2 #157

Draft: wants to merge 24 commits into base branch main.

Commits (24)
38313bc  Update torch requirement from ~=1.13.1 to ~=2.0.1 (dependabot[bot], May 9, 2023)
4faa318  Update to a newer torchvision that matches torch 2.0.1. (mzweilin, Jun 2, 2023)
9df5f52  Upgrade pytorch-lightning and torchmetrics. (mzweilin, Jun 2, 2023)
ccd5406  Merge branch 'main' into dependabot/pip/torch-approx-eq-2.0.1 (mzweilin, Jun 2, 2023)
098daf8  Update pre-commit. (mzweilin, Jun 5, 2023)
05bbf0d  datamodule->data (mzweilin, Jun 5, 2023)
395692d  Add data.num_classes for the Accuracy metric of classification. (mzweilin, Jun 5, 2023)
4c7c04b  datamodule->data extra. (mzweilin, Jun 5, 2023)
a0ba7ba  pytorch_lightning->lightning.pytorch (mzweilin, Jun 5, 2023)
ff114a1  Hydra 1.2->1.3 (mzweilin, Jun 5, 2023)
b5bdf31  Split utils. (mzweilin, Jun 5, 2023)
31145b7  Update to PL's new API. (mzweilin, Jun 5, 2023)
03d4c47  Add aim logger. (mzweilin, Jun 6, 2023)
a29a317  Add cpu trainer. (mzweilin, Jun 6, 2023)
27cbe96  Fix paths of lightning.pytorch and mart.data. (mzweilin, Jun 6, 2023)
5597103  Update comment. (mzweilin, Jun 6, 2023)
5eec5e3  Dependency: pytorch_lightning -> lightning (mzweilin, Jun 6, 2023)
1c83e34  Update to new API of mAP in torchmetrics. (mzweilin, Jun 6, 2023)
cbd2fcf  Fix path of pytorch_lightning -> lightning.pytorch. (mzweilin, Jun 6, 2023)
fbf1f50  Merge branch 'main' into upgrade_template_2.0.2 (mzweilin, Jul 10, 2023)
1b3cda3  Pin pydantic==1.10.11 (mzweilin, Jul 10, 2023)
ba7aab3  Hide some folders in examples. (mzweilin, Jul 10, 2023)
eb8e9c6  Fix test_resume. (mzweilin, Jul 10, 2023)
5b5a91f  Fix torchmetrics configs. (mzweilin, Jul 10, 2023)
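The bulk of these commits is the mechanical part of the Lightning 2.0 migration visible in the diffs below: the pytorch_lightning package becomes the lightning.pytorch namespace (pulled in through the lightning dependency), and the template's datamodule naming becomes data. A minimal sketch of what the import change looks like in user code, assuming only the Trainer and a Callback are needed; the class and function names here are illustrative and not part of this PR:

```python
# Old imports (template 1.4.0, pytorch-lightning 1.x):
# import pytorch_lightning as pl
# from pytorch_lightning.callbacks import Callback

# New imports (template 2.0.2, lightning 2.x), matching the diffs below:
from lightning import pytorch as pl
from lightning.pytorch.callbacks import Callback


class NoOpCallback(Callback):
    """Illustrative callback; it exists only to exercise the new import path."""


def build_cpu_trainer() -> pl.Trainer:
    # Lightning 2.x expects an integer devices count even on CPU, hence devices=1.
    return pl.Trainer(accelerator="cpu", devices=1, max_epochs=1, callbacks=[NoOpCallback()])
```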
Files changed
8 changes: 5 additions & 3 deletions .gitignore
@@ -146,7 +146,9 @@ dmypy.json

# Lightning-Hydra-Template
configs/local/default.yaml
data/
logs/
/data/
/logs/
.env
.autoenv

# Aim logging
.aim
54 changes: 38 additions & 16 deletions .pre-commit-config.yaml
@@ -3,7 +3,7 @@ default_language_version:

repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.3.0
rev: v4.4.0
hooks:
# list of supported hooks: https://pre-commit.com/hooks.html
- id: trailing-whitespace
@@ -19,7 +19,7 @@ repos:

# python code formatting
- repo: https://github.com/psf/black
rev: 22.6.0
rev: 23.1.0
hooks:
- id: black
args: [--line-length, "99"]
@@ -33,55 +33,56 @@

# python upgrading syntax to newer version
- repo: https://github.com/asottile/pyupgrade
rev: v2.32.1
rev: v3.3.1
hooks:
- id: pyupgrade
args: [--py38-plus]

# python docstring formatting
- repo: https://github.com/myint/docformatter
rev: v1.4
rev: v1.5.1
hooks:
- id: docformatter
args: [--in-place, --wrap-summaries=99, --wrap-descriptions=99]

# python check (PEP8), programming errors and code complexity
- repo: https://github.com/PyCQA/flake8
rev: 4.0.1
rev: 6.0.0
hooks:
- id: flake8
# ignore E203 because black is used for formatting.
args:
[
"--ignore",
"E203,E501,F401,F403,F841,W504",
"--extend-ignore",
"E203,E402,E501,F401,F403,F841",
"--exclude",
"logs/*,data/*",
]

# python security linter
- repo: https://github.com/PyCQA/bandit
rev: "1.7.1"
rev: "1.7.5"
hooks:
- id: bandit
args: ["-s", "B101"]

# yaml formatting
- repo: https://github.com/pre-commit/mirrors-prettier
rev: v2.7.1
rev: v3.0.0-alpha.6
hooks:
- id: prettier
types: [yaml]
exclude: "environment.yaml"

# jupyter notebook cell output clearing
- repo: https://github.com/kynan/nbstripout
rev: 0.5.0
# shell scripts linter
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: v0.9.0.2
hooks:
- id: nbstripout
- id: shellcheck

# md formatting
- repo: https://github.com/executablebooks/mdformat
rev: 0.7.14
rev: 0.7.16
hooks:
- id: mdformat
args: ["--number"]
@@ -94,9 +94,30 @@ repos:

# word spelling linter
- repo: https://github.com/codespell-project/codespell
rev: v2.1.0
rev: v2.2.4
hooks:
- id: codespell
args:
- --skip=logs/**,data/**
- --skip=logs/**,data/**,*.ipynb
- --ignore-words-list=abc,def,gard

# jupyter notebook cell output clearing
- repo: https://github.com/kynan/nbstripout
rev: 0.6.1
hooks:
- id: nbstripout

# jupyter notebook linting
- repo: https://github.com/nbQA-dev/nbQA
rev: 1.6.3
hooks:
- id: nbqa-black
args: ["--line-length=99"]
- id: nbqa-isort
args: ["--profile=black"]
- id: nbqa-flake8
args:
[
"--extend-ignore=E203,E402,E501,F401,F841",
"--exclude=logs/*,data/*",
]
4 changes: 4 additions & 0 deletions examples/carla_overhead_object_detection/.gitignore
@@ -0,0 +1,4 @@
# Default folder for downloading dataset
data

logs
@@ -3,7 +3,7 @@
defaults:
- COCO_TorchvisionFasterRCNN
- override /model/[email protected]: tuple_tensorizer_normalizer
- override /datamodule: armory_carla_over_objdet_perturbable_mask
- override /data: armory_carla_over_objdet_perturbable_mask

task_name: "ArmoryCarlaOverObjDet_TorchvisionFasterRCNN"
tags: ["regular_training"]
4 changes: 4 additions & 0 deletions examples/robust_bench/.gitignore
@@ -0,0 +1,4 @@
# Default folder for downloading dataset
data

logs
2 changes: 1 addition & 1 deletion mart/__init__.py
@@ -1,7 +1,7 @@
import importlib

from mart import attack as attack
from mart import datamodules as datamodules
from mart import data as data
from mart import models as models
from mart import nn as nn
from mart import optim as optim
3 changes: 1 addition & 2 deletions mart/__main__.py
@@ -31,8 +31,7 @@

@hydra.main(version_base="1.2", config_path=config_path, config_name="lightning.yaml")
def main(cfg: DictConfig) -> float:

if cfg.resume is None and ("datamodule" not in cfg or "model" not in cfg):
if cfg.resume is None and ("data" not in cfg or "model" not in cfg):
log.fatal("")
log.fatal("Please specify an experiment to run, e.g.")
log.fatal(
10 changes: 4 additions & 6 deletions mart/attack/adversary.py
@@ -10,8 +10,8 @@
from itertools import cycle
from typing import TYPE_CHECKING, Any, Callable

import pytorch_lightning as pl
import torch
from lightning import pytorch as pl

from mart.utils import silent

@@ -140,12 +140,10 @@ def training_step(self, batch, batch_idx):
return gain

def configure_gradient_clipping(
self, optimizer, optimizer_idx, gradient_clip_val=None, gradient_clip_algorithm=None
self, optimizer, gradient_clip_val=None, gradient_clip_algorithm=None
):
# Configuring gradient clipping in pl.Trainer is still useful, so use it.
super().configure_gradient_clipping(
optimizer, optimizer_idx, gradient_clip_val, gradient_clip_algorithm
)
super().configure_gradient_clipping(optimizer, gradient_clip_val, gradient_clip_algorithm)

if self.gradient_modifier:
for group in optimizer.param_groups:
@@ -195,7 +193,7 @@ def attacker(self):

elif self.device.type == "cpu":
accelerator = "cpu"
devices = None
devices = 1

else:
raise NotImplementedError
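The configure_gradient_clipping change above tracks an API break in Lightning 2.0: the hook no longer receives an optimizer_idx argument. A rough sketch of the new hook signature in a LightningModule, with the gradient-modifier step reduced to a comment; everything except the hook name and its arguments is illustrative:

```python
from lightning import pytorch as pl


class ClippedModule(pl.LightningModule):
    def configure_gradient_clipping(
        self, optimizer, gradient_clip_val=None, gradient_clip_algorithm=None
    ):
        # Lightning 2.x removed optimizer_idx from this hook, so the override
        # and the super() call both drop that argument.
        super().configure_gradient_clipping(optimizer, gradient_clip_val, gradient_clip_algorithm)
        # A gradient modifier, if configured, would iterate optimizer.param_groups here.
```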
2 changes: 1 addition & 1 deletion mart/attack/perturber.py
@@ -9,7 +9,7 @@
from typing import TYPE_CHECKING, Iterable

import torch
from pytorch_lightning.utilities.exceptions import MisconfigurationException
from lightning.pytorch.utilities.exceptions import MisconfigurationException

from .projector import Projector

2 changes: 1 addition & 1 deletion mart/callbacks/eval_mode.py
@@ -4,7 +4,7 @@
# SPDX-License-Identifier: BSD-3-Clause
#

from pytorch_lightning.callbacks import Callback
from lightning.pytorch.callbacks import Callback

__all__ = ["AttackInEvalMode"]

2 changes: 1 addition & 1 deletion mart/callbacks/no_grad_mode.py
@@ -4,7 +4,7 @@
# SPDX-License-Identifier: BSD-3-Clause
#

from pytorch_lightning.callbacks import Callback
from lightning.pytorch.callbacks import Callback

__all__ = ["ModelParamsNoGrad"]

8 changes: 3 additions & 5 deletions mart/callbacks/progress_bar.py
@@ -6,9 +6,9 @@

from typing import Any

import pytorch_lightning as pl
from pytorch_lightning.callbacks import TQDMProgressBar
from pytorch_lightning.utilities.rank_zero import rank_zero_only
from lightning import pytorch as pl
from lightning.pytorch.callbacks import TQDMProgressBar
from lightning.pytorch.utilities.rank_zero import rank_zero_only

__all__ = ["ProgressBar"]

@@ -41,8 +41,6 @@ def init_train_tqdm(self):
def on_train_epoch_start(self, trainer: pl.Trainer, *_: Any) -> None:
super().on_train_epoch_start(trainer)

# So that it does not display negative rate.
self.main_progress_bar.initial = 0
# So that it does not display Epoch n.
rank_id = rank_zero_only.rank
self.main_progress_bar.set_description(f"Attack@rank{rank_id}")
2 changes: 1 addition & 1 deletion mart/callbacks/visualizer.py
@@ -6,7 +6,7 @@

import os

from pytorch_lightning.callbacks import Callback
from lightning.pytorch.callbacks import Callback
from torchvision.transforms import ToPILImage

__all__ = ["PerturbedImageVisualizer"]
6 changes: 2 additions & 4 deletions mart/configs/callbacks/early_stopping.yaml
@@ -1,9 +1,7 @@
# https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.EarlyStopping.html
# https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.callbacks.EarlyStopping.html

# Monitor a metric and stop training when it stops improving.
# Look at the above link for more detailed information.
early_stopping:
_target_: pytorch_lightning.callbacks.EarlyStopping
_target_: lightning.pytorch.callbacks.EarlyStopping
monitor: ??? # quantity to be monitored, must be specified !!!
min_delta: 0. # minimum change in the monitored quantity to qualify as an improvement
patience: 3 # number of checks with no improvement after which training will be stopped
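These callback configs are consumed through Hydra's _target_ mechanism. A small sketch, assuming Hydra and OmegaConf are installed, of how a config like the one above becomes a Lightning callback object; the val/loss metric name is a placeholder, since the template deliberately leaves monitor as a mandatory ???:

```python
from hydra.utils import instantiate
from omegaconf import OmegaConf

cfg = OmegaConf.create(
    {
        "_target_": "lightning.pytorch.callbacks.EarlyStopping",
        "monitor": "val/loss",  # placeholder; the template requires the user to set this
        "min_delta": 0.0,
        "patience": 3,
    }
)
early_stopping = instantiate(cfg)  # an instance of lightning.pytorch.callbacks.EarlyStopping
```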
2 changes: 1 addition & 1 deletion mart/configs/callbacks/lr_monitor.yaml
@@ -1,3 +1,3 @@
lr_monitor:
_target_: pytorch_lightning.callbacks.LearningRateMonitor
_target_: lightning.pytorch.callbacks.LearningRateMonitor
logging_interval: "step"
6 changes: 2 additions & 4 deletions mart/configs/callbacks/model_checkpoint.yaml
@@ -1,9 +1,7 @@
# https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.ModelCheckpoint.html
# https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.callbacks.ModelCheckpoint.html

# Save the model periodically by monitoring a quantity.
# Look at the above link for more detailed information.
model_checkpoint:
_target_: pytorch_lightning.callbacks.ModelCheckpoint
_target_: lightning.pytorch.callbacks.ModelCheckpoint
dirpath: "${paths.output_dir}/checkpoints/" # directory to save the model file
filename: "epoch_{epoch:03d}" # checkpoint filename
monitor: ??? # name of the logged metric which determines when model is improving
6 changes: 2 additions & 4 deletions mart/configs/callbacks/model_summary.yaml
@@ -1,7 +1,5 @@
# https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.RichModelSummary.html
# https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.callbacks.RichModelSummary.html

# Generates a summary of all layers in a LightningModule with rich text formatting.
# Look at the above link for more detailed information.
model_summary:
_target_: pytorch_lightning.callbacks.RichModelSummary
_target_: lightning.pytorch.callbacks.RichModelSummary
max_depth: 1 # the maximum depth of layer nesting that the summary will include
6 changes: 2 additions & 4 deletions mart/configs/callbacks/rich_progress_bar.yaml
@@ -1,6 +1,4 @@
# https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.RichProgressBar.html
# https://lightning.ai/docs/pytorch/latest/api/lightning.pytorch.callbacks.RichProgressBar.html

# Create a progress bar with rich text formatting.
# Look at the above link for more detailed information.
rich_progress_bar:
_target_: pytorch_lightning.callbacks.RichProgressBar
_target_: lightning.pytorch.callbacks.RichProgressBar
@@ -55,3 +55,6 @@ test_dataset: ${.val_dataset}

num_workers: 4
collate_fn: null

# The accuracy metric may require this value.
num_classes: 10
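The new num_classes entry exists because recent torchmetrics releases require the task type and class count when a classification metric is constructed (see the commit "Add data.num_classes for the Accuracy metric of classification"). A hedged example of the Accuracy metric this value would feed, assuming the 10-class dataset configured above; the tensors are fabricated for illustration:

```python
import torch
from torchmetrics import Accuracy

# torchmetrics >= 0.11 needs an explicit task; num_classes comes from the data config.
accuracy = Accuracy(task="multiclass", num_classes=10)

preds = torch.randn(4, 10).softmax(dim=-1)  # fake prediction scores for 4 samples
target = torch.tensor([0, 3, 9, 1])         # fake labels
print(accuracy(preds, target))
```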
@@ -2,7 +2,7 @@ defaults:
- default.yaml

train_dataset:
_target_: mart.datamodules.coco.CocoDetection
_target_: mart.data.coco.CocoDetection
root: ${paths.data_dir}/coco/train2017
annFile: ${paths.data_dir}/coco/annotations/instances_train2017.json
transforms:
@@ -24,7 +24,7 @@ train_dataset:
quant_max: 255

val_dataset:
_target_: mart.datamodules.coco.CocoDetection
_target_: mart.data.coco.CocoDetection
root: ${paths.data_dir}/coco/val2017
annFile: ${paths.data_dir}/coco/annotations/instances_val2017.json
transforms:
@@ -44,7 +44,7 @@ val_dataset:
quant_max: 255

test_dataset:
_target_: mart.datamodules.coco.CocoDetection
_target_: mart.data.coco.CocoDetection
root: ${paths.data_dir}/coco/val2017
annFile: ${paths.data_dir}/coco/annotations/instances_val2017.json
transforms:
@@ -66,4 +66,4 @@
num_workers: 2
collate_fn:
_target_: hydra.utils.get_method
path: mart.datamodules.coco.collate_fn
path: mart.data.coco.collate_fn
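The collate_fn block above uses Hydra's get_method resolver, so the config points at a plain function by dotted path instead of instantiating an object. Roughly, resolving it amounts to the following sketch (it assumes the mart package is importable; the DataLoader line is only a comment because the surrounding datamodule wiring is not shown in this diff):

```python
from hydra.utils import get_method

# Resolve the dotted path from the config into the actual function object.
collate_fn = get_method("mart.data.coco.collate_fn")

# The data module would then pass this callable to its DataLoaders, e.g.:
# DataLoader(dataset, batch_size=2, num_workers=2, collate_fn=collate_fn)
```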
@@ -1,4 +1,4 @@
_target_: mart.datamodules.LitDataModule
_target_: mart.data.LitDataModule
# _convert_: all

train_dataset: ???
@@ -46,3 +46,6 @@ val_dataset:
quant_max: 255

test_dataset: ${.val_dataset}

# The accuracy metric may require this value.
num_classes: 1000
2 changes: 1 addition & 1 deletion mart/configs/debug/default.yaml
@@ -30,6 +30,6 @@ trainer:
devices: 1 # debuggers don't like multiprocessing
detect_anomaly: true # raise exception if NaN or +/-inf is detected in any tensor

datamodule:
data:
num_workers: 0 # debuggers don't like multiprocessing
pin_memory: False # disable gpu memory pin
6 changes: 3 additions & 3 deletions mart/configs/debug/profiler.yaml
@@ -8,8 +8,8 @@ defaults:
trainer:
max_epochs: 1
profiler:
_target_: pytorch_lightning.profiler.SimpleProfiler
# _target_: pytorch_lightning.profiler.AdvancedProfiler
# _target_: pytorch_lightning.profiler.PyTorchProfiler
_target_: lightning.pytorch.profiler.SimpleProfiler
# _target_: lightning.pytorch.profiler.AdvancedProfiler
# _target_: lightning.pytorch.profiler.PyTorchProfiler
dirpath: ${paths.output_dir}
filename: profiler_log