Address minor WinCLIP issues (#1889)
* fix minor winclip issues

* update changelog
djdameln authored Mar 21, 2024
1 parent 686cf84 commit a07d47e
Showing 3 changed files with 5 additions and 6 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -14,6 +14,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

### Fixed

+- Use right interpolation method in WinCLIP resize (<https://github.com/openvinotoolkit/anomalib/pull/1889>)
- 🐞 Fix the error if the device in masks_to_boxes is not both CPU and CUDA by @danylo-boiko in https://github.com/openvinotoolkit/anomalib/pull/1839

## [v1.0.0] - 2024-02-29
6 changes: 2 additions & 4 deletions src/anomalib/models/image/winclip/README.md
@@ -22,13 +22,11 @@ WinCLIP is a zero-shot model, which means that we can directly evaluate the mode

### 0-Shot

-`anomalib test --model WinClip --data MVTec --data.image_size 240 --data.normalization clip`
+`anomalib test --model WinClip --data MVTec`

### 1-Shot

-`anomalib test --model WinClip --model.k_shot 1 --data MVTec --data.image_size 240 --data.normalization clip`
-
-> **Note:** The `data.image_size` and `data.normalization` parameters must be set to the above values to match the configuration in which the pre-trained CLIP model weights were obtained.
+`anomalib test --model WinClip --model.k_shot 1 --data MVTec`

## Parameters

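The simplified commands above have a Python API counterpart. The snippet below is a minimal sketch, assuming anomalib v1.0's `Engine`, `MVTec`, and `WinClip` entry points; these names follow the project's documented conventions but are not part of this diff:

```python
# Hypothetical API equivalent of the CLI commands above (assumption: the
# WinClip, MVTec, and Engine imports follow anomalib v1.0 conventions).
from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import WinClip

datamodule = MVTec()  # MVTec AD datamodule with default settings
model = WinClip()     # zero-shot; WinClip(k_shot=1) for the one-shot setting
engine = Engine()
engine.test(model=model, datamodule=datamodule)
```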
4 changes: 2 additions & 2 deletions src/anomalib/models/image/winclip/lightning_model.py
@@ -13,7 +13,7 @@

import torch
from torch.utils.data import DataLoader
-from torchvision.transforms.v2 import Compose, Normalize, Resize, Transform
+from torchvision.transforms.v2 import Compose, InterpolationMode, Normalize, Resize, Transform

from anomalib import LearningType
from anomalib.data.predict import PredictDataset
@@ -174,7 +174,7 @@ def configure_transforms(self, image_size: tuple[int, int] | None = None) -> Transform:
            logger.warning("Image size is not used in WinCLIP. The input image size is determined by the model.")
        return Compose(
            [
-                Resize((240, 240), antialias=True),
+                Resize((240, 240), antialias=True, interpolation=InterpolationMode.BICUBIC),
                Normalize(mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711)),
            ],
        )
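The corrected pipeline can be exercised on its own. torchvision's `Resize` defaults to `InterpolationMode.BILINEAR`, while OpenAI's CLIP preprocessing resizes with bicubic interpolation, which is why the mode must be set explicitly. A minimal standalone sketch; the transform is taken verbatim from the diff, and the dummy input tensor is an assumption for illustration:

```python
import torch
from torchvision.transforms.v2 import Compose, InterpolationMode, Normalize, Resize

transform = Compose(
    [
        # Bicubic matches the preprocessing used when the CLIP weights were
        # trained; Resize would otherwise default to bilinear interpolation.
        Resize((240, 240), antialias=True, interpolation=InterpolationMode.BICUBIC),
        Normalize(mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711)),
    ],
)

image = torch.rand(3, 480, 512)   # dummy CHW float image in [0, 1]
print(transform(image).shape)     # torch.Size([3, 240, 240])
```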
