
EfficientAD #1073

Merged · 121 commits · Jun 7, 2023
Changes from 109 commits
74902ba
reading augmentations from config
alexriedel1 Mar 29, 2023
222c9d8
missing if
alexriedel1 Mar 29, 2023
c7f35be
remove debug
alexriedel1 Mar 29, 2023
710a99c
pre-commit hooks
alexriedel1 Mar 29, 2023
e27f36b
solve memory cumulation
alexriedel1 Mar 31, 2023
c6510bf
Merge branch 'openvinotoolkit:main' into Augmentations-From-Config
alexriedel1 Apr 4, 2023
eeb956b
fixing imports
alexriedel1 Apr 4, 2023
4653e9a
uncomment pre commits
alexriedel1 Apr 4, 2023
ec368a7
EfficientAD first commit
alexriedel1 May 3, 2023
c3ed86d
fix pre commit
alexriedel1 May 3, 2023
2577133
Merge branch 'openvinotoolkit:main' into efficientad
alexriedel1 May 3, 2023
2b7366a
download imagenet subset
alexriedel1 May 3, 2023
8f87562
image size and channel out
alexriedel1 May 3, 2023
d535046
image size and channel out
alexriedel1 May 3, 2023
9a8d0d7
remove saved statistics
alexriedel1 May 3, 2023
dbdbe8d
autoencoder out channels
alexriedel1 May 4, 2023
2fd6c75
config image size
alexriedel1 May 4, 2023
c93d295
gdrive download path to string
alexriedel1 May 4, 2023
d414785
hash for imagenet subset
alexriedel1 May 4, 2023
21ec2ff
efficientad config
alexriedel1 May 4, 2023
e77898c
reload dataloader every epoch
alexriedel1 May 4, 2023
5a03372
calculate teacher channel mean only on train start
alexriedel1 May 4, 2023
8d5ca46
set channel and quantiles as parameter
alexriedel1 May 4, 2023
8c3f267
teacher normalization in forward
alexriedel1 May 4, 2023
657b967
quantiles tensor 0.0
alexriedel1 May 4, 2023
6f2fc47
on train end quantiles
alexriedel1 May 4, 2023
6b4bfa1
new mode name
alexriedel1 May 4, 2023
106ac22
on train end quantiles
alexriedel1 May 4, 2023
b86141e
eval after train
alexriedel1 May 4, 2023
8e0996d
70k steps
alexriedel1 May 4, 2023
feecb5b
save before validation
alexriedel1 May 4, 2023
6723533
manual get last epoch
alexriedel1 May 4, 2023
03029aa
set to train mode
alexriedel1 May 4, 2023
f6a3310
remove quantiles from step
alexriedel1 May 4, 2023
02c1167
on validation start calculate quantiles
alexriedel1 May 4, 2023
b6ed5a3
remove eval
alexriedel1 May 4, 2023
855464e
logger info
alexriedel1 May 4, 2023
5103a06
logger and no desc
alexriedel1 May 4, 2023
97f8d71
set to last epoch
alexriedel1 May 4, 2023
cffaaa3
training epoch end
alexriedel1 May 4, 2023
9f9e653
tqdm position
alexriedel1 May 4, 2023
2c915d4
training epoch end args
alexriedel1 May 4, 2023
92e1538
eval model
alexriedel1 May 4, 2023
57ca4e8
delete outputs
alexriedel1 May 4, 2023
ad84804
on train end
alexriedel1 May 5, 2023
e1ee023
on training epoch end
alexriedel1 May 5, 2023
9941240
check teacher model path
alexriedel1 May 5, 2023
f9141c8
other maps calculation?
alexriedel1 May 5, 2023
bc6fa3f
on train epoch end
alexriedel1 May 5, 2023
da4ae2b
remove print in tools
alexriedel1 May 5, 2023
4b5b821
on train epoch end
alexriedel1 May 5, 2023
30ebb29
doctstrings
alexriedel1 May 5, 2023
9d09d55
quantiles on validation start
alexriedel1 May 5, 2023
acc48a3
quantiles only on last epoch
alexriedel1 May 5, 2023
7a135a1
fix private
alexriedel1 May 5, 2023
99c065e
pre-commit
alexriedel1 May 5, 2023
2eb33bf
docstrings
alexriedel1 May 5, 2023
331ce8f
grad on teacher model, pin memory
alexriedel1 May 5, 2023
1642642
alternative teacher
alexriedel1 May 5, 2023
0283593
loss calculation
alexriedel1 May 5, 2023
8cde291
loss calculation
alexriedel1 May 5, 2023
a25f25a
moved imagenet normalition to models
alexriedel1 May 7, 2023
23d3391
pre commit hooks
alexriedel1 May 7, 2023
612d045
transform
alexriedel1 May 7, 2023
dc4a85e
transform line
alexriedel1 May 7, 2023
4b29b40
pre commit hooks
alexriedel1 May 7, 2023
3385413
new teacher, simpler architecture
alexriedel1 May 8, 2023
ae77bb0
map normalization with oinly good images
alexriedel1 May 8, 2023
a2ba9d6
pre-commit
alexriedel1 May 8, 2023
7c65061
loss weighting
alexriedel1 May 8, 2023
b27779b
all pretrained models
alexriedel1 May 8, 2023
50671c5
added small pretrained
alexriedel1 May 8, 2023
80728d2
lr scheduler
alexriedel1 May 8, 2023
9fc7055
pre commit
alexriedel1 May 8, 2023
f911e6d
remove loss weighting
alexriedel1 May 8, 2023
4e25795
200 epochs
alexriedel1 May 8, 2023
fdc7a8f
num steps
alexriedel1 May 8, 2023
e39138e
get training steps in on train start
alexriedel1 May 8, 2023
ba9ceb1
get nuim steps in optimizer
alexriedel1 May 8, 2023
e6afab8
loss weighting
alexriedel1 May 8, 2023
6f63ace
no loss weighting
alexriedel1 May 8, 2023
4bfae7d
remove normalization epsilon
alexriedel1 May 8, 2023
d71d871
torchvision transforms for imagenet
alexriedel1 May 8, 2023
3d4e667
albumentations imagenet
alexriedel1 May 8, 2023
17515bb
adamw
alexriedel1 May 9, 2023
949d27c
add classification label to segmentation task
alexriedel1 May 9, 2023
662c283
undo visualizer refactor
alexriedel1 May 9, 2023
5171ea9
refactor inference
alexriedel1 May 9, 2023
5fb10a7
pre commit hooks
alexriedel1 May 9, 2023
37584e0
compute distance_st only once
alexriedel1 May 9, 2023
5d4c820
torchvision imagenet
alexriedel1 May 9, 2023
f2375c3
albumentation resize method
alexriedel1 May 10, 2023
a0df3f9
best settings
alexriedel1 May 10, 2023
3eb9f81
access dict
alexriedel1 May 10, 2023
41acce5
ordering...
alexriedel1 May 10, 2023
b49df2c
model and padding true
alexriedel1 May 12, 2023
5e405ae
pre-commit
alexriedel1 May 12, 2023
3c6379c
add gdown
alexriedel1 May 12, 2023
d15764a
solve conflict
alexriedel1 May 12, 2023
974dd24
Merge branch 'main' into efficientad
alexriedel1 May 12, 2023
3a50f9b
list kwarg and random choice
alexriedel1 May 12, 2023
0652b8f
Merge branch 'efficientad' of https://github.com/alexriedel1/anomalib…
alexriedel1 May 12, 2023
8a79d0e
imagenette
alexriedel1 May 13, 2023
bb07250
imagenette hash
alexriedel1 May 13, 2023
0ce2c1b
validation set for feature map normalizatioN
alexriedel1 May 13, 2023
0a8fc27
remove gdown
alexriedel1 May 13, 2023
ce8c763
num epochs
alexriedel1 May 13, 2023
332462b
feature maps from train images
alexriedel1 May 14, 2023
3217048
minor fixes
alexriedel1 May 17, 2023
81789dd
Merge branch 'main' into efficientad
alexriedel1 May 17, 2023
c02cbb0
Merge branch 'main' into efficientad
alexriedel1 May 23, 2023
60483b2
new pretrained teachers
alexriedel1 May 25, 2023
042a92c
no teachers
alexriedel1 May 25, 2023
0d549ca
new teachers
alexriedel1 May 25, 2023
9326bf9
Merge branch 'efficientad' of https://github.com/alexriedel1/anomalib…
alexriedel1 May 25, 2023
b1f595d
Merge branch 'main' into efficientad
alexriedel1 May 29, 2023
44f0a7a
Merge branch 'main' into efficientad
alexriedel1 May 31, 2023
7772140
Merge branch 'efficientad' of https://github.com/alexriedel1/anomalib…
alexriedel1 Jun 1, 2023
5017e7c
remove weights, add weight download, add support for arbitrary image …
alexriedel1 Jun 1, 2023
b2e0e83
remove pretrained folder from git
alexriedel1 Jun 1, 2023
3025723
remove pretrained folder from git
alexriedel1 Jun 1, 2023
35 changes: 23 additions & 12 deletions src/anomalib/data/utils/download.py
@@ -218,6 +218,28 @@ def hash_check(file_path: Path, expected_hash: str) -> None:
), f"Downloaded file {file_path} does not match the required hash."


def extract(file_name: Path, root: Path) -> None:
"""Extract a dataset.

Args:
file_name (Path): Path of the file to be extracted.
root (Path): Root directory where the dataset will be stored.

"""
logger.info("Extracting dataset into root folder.")
if file_name.suffix == ".zip":
with ZipFile(file_name, "r") as zip_file:
zip_file.extractall(root)
elif file_name.suffix in (".tar", ".gz", ".xz", ".tgz"):
with tarfile.open(file_name) as tar_file:
safe_extract(tar_file, root)
else:
raise ValueError(f"Unrecognized file format: {file_name}")

logger.info("Cleaning up files.")
(file_name).unlink()


def download_and_extract(root: Path, info: DownloadInfo) -> None:
"""Download and extract a dataset.

@@ -246,18 +268,7 @@ def download_and_extract(root: Path, info: DownloadInfo) -> None:
logger.info("Checking the hash of the downloaded file.")
hash_check(downloaded_file_path, info.hash)

logger.info("Extracting dataset into root folder.")
if downloaded_file_path.suffix == ".zip":
with ZipFile(downloaded_file_path, "r") as zip_file:
zip_file.extractall(root)
elif downloaded_file_path.suffix in (".tar", ".gz", ".xz"):
with tarfile.open(downloaded_file_path) as tar_file:
safe_extract(tar_file, root)
else:
raise ValueError(f"Unrecognized file format: {downloaded_file_path}")

logger.info("Cleaning up files.")
(downloaded_file_path).unlink()
extract(downloaded_file_path, root)


def is_within_directory(directory: Path, target: Path):
3 changes: 3 additions & 0 deletions src/anomalib/models/__init__.py
@@ -20,6 +20,7 @@
from anomalib.models.dfkde import Dfkde
from anomalib.models.dfm import Dfm
from anomalib.models.draem import Draem
from anomalib.models.efficientad import EfficientAD
from anomalib.models.fastflow import Fastflow
from anomalib.models.ganomaly import Ganomaly
from anomalib.models.padim import Padim
@@ -43,6 +44,7 @@
"Rkde",
"Stfpm",
"AiVad",
"EfficientAD",
]

logger = logging.getLogger(__name__)
@@ -95,6 +97,7 @@ def get_model(config: DictConfig | ListConfig) -> AnomalyModule:
"rkde",
"stfpm",
"ai_vad",
"efficientad",
]
model: AnomalyModule

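The diff registers the new model both in `__all__` and in the list of names that `get_model` accepts. A toy sketch of that name-lookup dispatch pattern follows; the registry dict and stand-in classes are illustrative only, not anomalib's actual implementation, which imports the class from its module:

```python
# Illustrative sketch of get_model-style dispatch: the config's model.name
# is looked up in a registry of known model names and mapped to a class.

class EfficientAD:  # stand-in for anomalib.models.efficientad.EfficientAD
    pass

MODEL_REGISTRY = {
    "padim": object,
    "stfpm": object,
    "efficientad": EfficientAD,  # the name added by this PR
}

def get_model(config: dict):
    name = config["model"]["name"]
    if name not in MODEL_REGISTRY:
        raise ValueError(f"Unknown model {name}!")
    return MODEL_REGISTRY[name]

model_cls = get_model({"model": {"name": "efficientad"}})
print(model_cls.__name__)  # -> EfficientAD
```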
37 changes: 37 additions & 0 deletions src/anomalib/models/efficientad/README.md
@@ -0,0 +1,37 @@
# EfficientAD
This is the implementation of the [EfficientAD](https://arxiv.org/pdf/2303.14535.pdf) paper. It is based on https://github.com/rximg/EfficientAD and https://github.com/nelson1425/EfficientAD/.

Model Type: Segmentation

## Description

Fast anomaly segmentation algorithm that consists of a distilled pre-trained teacher model, a student model and an autoencoder. It detects local anomalies via the teacher-student discrepancy and global anomalies via the student-autoencoder discrepancy.

### Feature Extraction
Features are extracted from a pre-trained teacher model and used to train a student model and an autoencoder. To prevent the student from simply imitating the teacher on out-of-distribution inputs, ImageNet images are included as a penalty term in the loss function.


### Anomaly Detection
Anomalies are detected as discrepancies between output feature maps: the teacher-student difference captures local anomalies, the autoencoder-student difference captures global anomalies, and the two maps are combined into the final anomaly map.
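Per the paper, the combined map averages a local (teacher-student) and a global (autoencoder-student) squared-difference map. A minimal numpy sketch under that reading, with synthetic feature maps and an illustrative spatial size (only the channel count, 384, comes from the PR's config):

```python
import numpy as np

rng = np.random.default_rng(42)
C, H, W = 384, 64, 64  # teacher_out_channels from the config; spatial size illustrative

teacher_out = rng.normal(size=(C, H, W))
student_out_st = teacher_out + rng.normal(scale=0.01, size=(C, H, W))  # student head trained against the teacher
ae_out = rng.normal(size=(C, H, W))
student_out_ae = ae_out + rng.normal(scale=0.01, size=(C, H, W))       # student head trained against the autoencoder

# Local anomalies: where the student fails to reproduce the teacher.
map_st = ((teacher_out - student_out_st) ** 2).mean(axis=0)
# Global anomalies: where the student fails to reproduce the autoencoder.
map_ae = ((ae_out - student_out_ae) ** 2).mean(axis=0)

# Combined anomaly map (the paper averages the two maps after normalization).
anomaly_map = 0.5 * map_st + 0.5 * map_ae
print(anomaly_map.shape)  # -> (64, 64)
```

On anomaly-free inputs like these synthetic ones, both per-pixel maps stay near zero; real anomalies inflate one or both terms.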

## Usage

`python tools/train.py --model efficientad`

## Benchmark

All results gathered with seed `42`.

## [MVTec AD Dataset](https://www.mvtec.com/company/research/datasets/mvtec-ad)

### Image-Level AUC

| | Avg | Carpet | Grid | Leather | Tile | Wood | Bottle | Cable | Capsule | Hazelnut | Metal Nut | Pill | Screw | Toothbrush | Transistor | Zipper |
| ------------------------ | :---: | :----: | :---: | :-----: | :---: | :---: | :----: | :---: | :-----: | :------: | :-------: | :---: | :---: | :--------: | :--------: | :----: |
| Distilled Teacher Medium | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |

### Image F1 Score

| | Avg | Carpet | Grid | Leather | Tile | Wood | Bottle | Cable | Capsule | Hazelnut | Metal Nut | Pill | Screw | Toothbrush | Transistor | Zipper |
| ------------------------ | :---: | :----: | :---: | :-----: | :---: | :---: | :----: | :---: | :-----: | :------: | :-------: | :---: | :---: | :--------: | :--------: | :----: |
| Distilled Teacher Medium | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
10 changes: 10 additions & 0 deletions src/anomalib/models/efficientad/__init__.py
@@ -0,0 +1,10 @@
"""EfficientAD: Accurate Visual Anomaly Detection at Millisecond-Level Latencies.
https://arxiv.org/pdf/2303.14535.pdf
"""

# Copyright (C) 2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from .lightning_model import EfficientAD, EfficientadLightning

__all__ = ["EfficientAD", "EfficientadLightning"]
101 changes: 101 additions & 0 deletions src/anomalib/models/efficientad/config.yaml
@@ -0,0 +1,101 @@
dataset:
name: mvtec
format: mvtec
path: ./datasets/MVTec
category: bottle
task: segmentation
train_batch_size: 1
eval_batch_size: 16
num_workers: 8
image_size: 256 # dimensions to which images are resized (mandatory)
center_crop: null # dimensions to which images are center-cropped after resizing (optional)
normalization: none # data distribution to which the images will be normalized: [none, imagenet]
transform_config:
train: null
eval: null
test_split_mode: from_dir # options: [from_dir, synthetic]
test_split_ratio: 0.2 # fraction of train images held out for testing (usage depends on test_split_mode)
val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)

model:
name: efficientad
teacher_out_channels: 384
model_size: medium # options: [small, medium]
lr: 0.0001
weight_decay: 0.00001
padding: true
# generic params
normalization_method: min_max # options: [null, min_max, cdf]
> **Contributor:** `normalization_method` null because anomaly maps are already normalized by quantile normalization?
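The quantile normalization this question refers to can be sketched as follows. Per the EfficientAD paper, two quantiles (0.9 and 0.995) of anomaly-map values collected from anomaly-free validation images are mapped linearly to 0 and 0.1; the gamma-distributed values below are synthetic stand-ins for those validation scores:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for anomaly-map values from anomaly-free validation images.
val_map_values = rng.gamma(shape=2.0, scale=0.5, size=100_000)

# Quantile normalization as described in the paper: the 0.9 quantile maps to 0
# and the 0.995 quantile maps to 0.1, so normal scores land roughly in [0, 0.1].
q_a = np.quantile(val_map_values, 0.9)
q_b = np.quantile(val_map_values, 0.995)

def normalize(anomaly_map: np.ndarray) -> np.ndarray:
    return 0.1 * (anomaly_map - q_a) / (q_b - q_a)

normal_scores = normalize(val_map_values)
print(np.quantile(normal_scores, 0.9))    # ≈ 0.0 by construction
print(np.quantile(normal_scores, 0.995))  # ≈ 0.1 by construction
```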


metrics:
image:
- F1Score
- AUROC
pixel:
- F1Score
- AUROC
threshold:
method: adaptive #options: [adaptive, manual]
manual_image: null
manual_pixel: null

visualization:
show_images: False # show images on the screen
save_images: True # save images to the file system
log_images: False # log images to the available loggers (if any)
image_save_path: null # path to which images will be saved
mode: full # options: ["full", "simple"]

project:
seed: 42
path: ./results

logging:
logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
log_graph: false # Logs the model graph to respective logger.

optimization:
export_mode: null # options: torch, onnx, openvino
# PL Trainer Args. Don't add extra parameter here.
trainer:
enable_checkpointing: true
default_root_dir: null
gradient_clip_val: 0
gradient_clip_algorithm: norm
num_nodes: 1
devices: 1
enable_progress_bar: true
overfit_batches: 0.0
track_grad_norm: -1
check_val_every_n_epoch: 1 # Don't validate before extracting features.
> **Contributor:** Training of EfficientAD is steps instead of epochs, so validate every 1000 steps?

  check_val_every_n_epoch: null
  val_check_interval: 1000 # Validate every 1000 steps

fast_dev_run: false
accumulate_grad_batches: 1
max_epochs: 200
min_epochs: null
max_steps: -1
min_steps: null
max_time: null
limit_train_batches: 1.0
limit_val_batches: 1.0
limit_test_batches: 1.0
limit_predict_batches: 1.0
val_check_interval: 1.0 # Don't validate before extracting features.
log_every_n_steps: 50
accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
strategy: null
sync_batchnorm: false
precision: 32
enable_model_summary: true
num_sanity_val_steps: 0
profiler: null
benchmark: false
deterministic: false
reload_dataloaders_every_n_epochs: 0
auto_lr_find: false
replace_sampler_ddp: true
detect_anomaly: false
auto_scale_batch_size: false
plugins: null
move_metrics_to_cpu: false
multiple_trainloader_mode: max_size_cycle