Hi, first of all, thanks for your great work. I have an issue: I cannot reproduce the results reported in the paper when training on MVTec and testing on VisA. Below are the training hyperparameters that were supposedly used to produce the paper's results.
| Argument | Default Value |
| --- | --- |
| depth | 9 |
| n_ctx | 12 |
| t_n_ctx | 4 |
| feature_map_layer | [0, 1, 2, 3] |
| features_list | [6, 12, 18, 24] |
| epoch | 15 |
| learning_rate | 0.001 |
| batch_size | 8 |
| image_size | 518 |
| seed | 111 |
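For reference, here is the same configuration written out as a plain Python dict. This is only a sketch of how I keep the settings on my side; the key names mirror the argument names above, and how the training script actually consumes them is my assumption.

```python
# Training hyperparameters as listed in the table above.
# This dict is only a convenience sketch; the actual training script may
# expect these values as command-line flags or a config file instead.
train_config = {
    "depth": 9,
    "n_ctx": 12,
    "t_n_ctx": 4,
    "feature_map_layer": [0, 1, 2, 3],
    "features_list": [6, 12, 18, 24],
    "epoch": 15,
    "learning_rate": 0.001,
    "batch_size": 8,
    "image_size": 518,
    "seed": 111,
}
```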
The table below compares the mean-over-class performance on VisA of my trained model against the results reported in the paper.
| Object Name | Pixel AUROC (%) Me | Pixel AUROC (%) Paper | Pixel AUPRO (%) Me | Pixel AUPRO (%) Paper | Image AUROC (%) Me | Image AUROC (%) Paper | Image AP (%) Me | Image AP (%) Paper |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mean | 95.4 | 95.5 | 85.8 | 87.0 | 81.2 | 82.1 | 84.6 | 85.4 |
Is there something that I am missing?
Thanks a lot in advance!
I have kept trying over the past few days, but I still cannot match the performance of the checkpoints stored in the repository. Could you please explain in more detail which hyperparameters need to be adjusted to reach that performance?
I appreciate your time.