Add tutorial performing anomaly detection based on likelihoods from generative models #298
Conversation
Signed-off-by: Walter Hugo Lopez Pinaya <[email protected]>
…n-based-on-likelihoods-from-generative-models
Signed-off-by: Walter Hugo Lopez Pinaya <[email protected]>
Signed-off-by: Walter Hugo Lopez Pinaya <[email protected]>
Signed-off-by: Walter Hugo Lopez Pinaya <[email protected]>
Signed-off-by: Walter Hugo Lopez Pinaya <[email protected]>
Nice tutorial :) Two minor comments
# %% [markdown]
# ### Transformer Training
# We will train the Transformer for 100 epochs.
It's set to 5 in the training loop.
# %% [markdown]
# ## Image-wise anomaly detection
#
# To verify the performance of the VQ-VAE + Transformerperforming unsupervised anomaly detection, we will use the images from the test set of the MedNIST dataset. We will consider images from the `HeadCT` class as in-distribution images.
Missing space in "VQ-VAE + Transformer performing"
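For reference, the image-wise scoring described in the quoted cell can be sketched roughly as follows: encode each test image into a sequence of VQ-VAE codebook indices, sum the Transformer's log-probabilities over that sequence to get a per-image log-likelihood, and flag low-likelihood images as out-of-distribution. This is a minimal sketch, not the tutorial's exact code: `vqvae`, `transformer`, `bos_token`, and the `index_quantize` call are placeholders standing in for the objects trained earlier in the tutorial, and the assumed logits shape is (batch, sequence length, vocabulary size).

# Hedged sketch of image-wise likelihood scoring with a VQ-VAE + Transformer.
# `vqvae`, `transformer`, and `bos_token` are hypothetical stand-ins for the
# objects trained in the tutorial; the actual helper names may differ.
import torch
import torch.nn.functional as F

@torch.no_grad()
def image_log_likelihood(images, vqvae, transformer, bos_token, device="cpu"):
    images = images.to(device)
    # Encode images into discrete codebook indices and flatten to a token sequence
    # (assumed encoder -> token indices; method name is an assumption).
    indices = vqvae.index_quantize(images)
    seq = indices.reshape(indices.shape[0], -1).long()       # (batch, seq_len)
    # Prepend a begin-of-sequence token so the Transformer predicts every position.
    bos = torch.full((seq.shape[0], 1), bos_token, dtype=seq.dtype, device=device)
    inp = torch.cat([bos, seq[:, :-1]], dim=1)
    logits = transformer(inp)                                 # (batch, seq_len, vocab)
    log_probs = F.log_softmax(logits, dim=-1)
    # Gather the log-probability of each observed token and sum over the sequence.
    token_ll = log_probs.gather(-1, seq.unsqueeze(-1)).squeeze(-1)
    return token_ll.sum(dim=1)                                # higher = more in-distribution

# Usage sketch: images whose log-likelihood falls below a chosen threshold
# (e.g. a percentile of the in-distribution HeadCT scores) are flagged as anomalous.
# scores = image_log_likelihood(test_batch, vqvae, transformer, bos_token=0)
# is_anomalous = scores < threshold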
Signed-off-by: Walter Hugo Lopez Pinaya <[email protected]>
Implements #112