150 - Diff-scm #306
Conversation
Needs some of the new flash transformers add-ons for running.
…maly-detection' into 150-diffSCM

Conflicts:
    tutorials/generative/classifier_free_guidance/2d_ddpm_classifier_free_guidance_tutorial.ipynb
    tutorials/generative/classifier_free_guidance/2d_ddpm_classifier_free_guidance_tutorial.py
Thanks for the PR @SANCHES-Pedro! I added a few comments regarding the reversed_step and the tutorial. Please consider the suggested method to simplify the sampling of 2D slices in the tutorial (using MONAI transforms); it will require training for more epochs, but I hope it will make the code more concise. Besides that, please consider using f-strings in the tutorial to be consistent along the code (in the print command and other parts), and please run the ./runtests.sh -f and ./runtests.sh --autofix commands to check format and flake8.
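To illustrate the f-string style requested above, a minimal sketch (the variable names and values here are hypothetical, not taken from the tutorial):

```python
epoch, val_loss = 10, 0.0321  # hypothetical values for illustration

# %-formatting, inconsistent with the rest of the code:
# print("epoch %d: val loss %.4f" % (epoch, val_loss))

# f-string style suggested in the review:
print(f"epoch {epoch}: val loss {val_loss:.4f}")
```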
@@ -1889,4 +1889,4 @@ def forward(
        # 7. output block
        h = self.out(h)

        return h
Please run ./runtests.sh --autofix to fix the formatting issues.
predict_epsilon: flag to use when model predicts the samples directly instead of the noise, epsilon.
generator: random number generator.
Arguments not declared
model_output: torch.Tensor,
timestep: int,
sample: torch.Tensor,
eta: float = 0.0,
Remove eta since it is not used in reversed_step
# - pred_sample_direction -> "direction pointing to x_t"
# - pred_post_sample -> "x_t+1"

assert eta == 0, "eta must be 0 for reversed_step"
Remove eta since it is not used in the reversed_step
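The reversed_step under discussion inverts the deterministic DDIM update, which only holds for eta = 0 — hence the suggestion to drop the unused parameter. A minimal self-contained sketch of such a reversed step, under the epsilon parameterisation (the function name and the precomputed alpha arguments are assumptions for illustration, not the MONAI signature):

```python
def reversed_step(model_output, sample, alpha_prod_t, alpha_prod_t_next):
    """Deterministic DDIM inversion (eta = 0): map x_t to x_{t+1}."""
    beta_prod_t = 1.0 - alpha_prod_t
    # predicted x_0, recovered from the noise prediction
    pred_original_sample = (sample - beta_prod_t ** 0.5 * model_output) / alpha_prod_t ** 0.5
    # "direction pointing to x_t", re-noised at the next noise level
    pred_sample_direction = (1.0 - alpha_prod_t_next) ** 0.5 * model_output
    # x_{t+1}
    return alpha_prod_t_next ** 0.5 * pred_original_sample + pred_sample_direction
```

With an exact noise prediction, this reproduces the forward DDIM trajectory: feeding in x_t = sqrt(alpha_t)·x_0 + sqrt(1 - alpha_t)·eps yields sqrt(alpha_next)·x_0 + sqrt(1 - alpha_next)·eps.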
if self.prediction_type == "epsilon":
    pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
elif self.prediction_type == "sample":
    pred_original_sample = model_output
elif self.prediction_type == "v_prediction":
    pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output
    # predict V
    model_output = (alpha_prod_t**0.5) * model_output + (beta_prod_t**0.5) * sample
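The branching above recovers the predicted x_0 under each of the three prediction types. A minimal standalone sketch of the same logic with scalar inputs (the helper name is hypothetical; the real code lives inside the scheduler class):

```python
def predict_original_sample(model_output, sample, alpha_prod_t, prediction_type="epsilon"):
    """Recover the predicted x_0 from the model output for each DDIM prediction type."""
    beta_prod_t = 1.0 - alpha_prod_t
    if prediction_type == "epsilon":
        # the model predicts the added noise, epsilon
        return (sample - beta_prod_t ** 0.5 * model_output) / alpha_prod_t ** 0.5
    if prediction_type == "sample":
        # the model predicts x_0 directly
        return model_output
    if prediction_type == "v_prediction":
        # the model predicts v = sqrt(alpha)*eps - sqrt(beta)*x_0
        return alpha_prod_t ** 0.5 * sample - beta_prod_t ** 0.5 * model_output
    raise ValueError(f"unknown prediction_type: {prediction_type}")
```

All three branches agree on a clean round trip: building x_t = sqrt(alpha)·x_0 + sqrt(beta)·eps and feeding in the matching model output recovers x_0.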
Adopt the new variable names used in DDIM.step (with pred_epsilon), and use the fix for when prediction_type == "epsilon".
T = 500  # 500
L = int(T * 0.35)  # 0.25
Remove comments
T = 500  # 500
L = int(T * 0.35)  # 0.25
Please consider following PEP 8 when naming variables (i.e. lowercase) and, if possible, choose names that make their usage easier to understand.
# %%

idx = 250
inputimg = total_val_slices[idx][0, ...]  # Pick an input slice of the validation set to be transformed
Typo: inputting
# ### The image-to-image translation has two steps
#
# 1. Encoding the input image into a latent space with the reversed DDIM sampling scheme
# 2. Sampling from the latent space using gradient guidance towards the desired class label y=1 (healthy)
Suggested change:
- # 2. Sampling from the latent space using gradient guidance towards the desired class label y=1 (healthy)
+ # 2. Sampling from the latent space using gradient guidance towards the desired class label `y=1` (healthy)
# 2. Sampling from the latent space using gradient guidance towards the desired class label y=1 (healthy)
#
# In order to sample using gradient guidance, we first need to encode the input image in noise by using the reversed DDIM sampling scheme.
# We define the number of steps in the noising and denoising process by L.
Suggested change:
- # We define the number of steps in the noising and denoising process by L.
+ # We define the number of steps in the noising and denoising process by `L`.
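The two-step scheme described in the tutorial (encode with the reversed DDIM scheme, then decode with gradient guidance towards y=1) can be sketched as follows; the model, scheduler, and guidance_fn interfaces here are assumptions for illustration, not the tutorial's exact API:

```python
def translate_to_healthy(image, model, scheduler, num_steps, guidance_fn):
    """Image-to-image translation sketch: reversed-DDIM encode, then guided decode."""
    x = image
    # Step 1: encode the input image into the latent (noise) space
    # with the reversed DDIM sampling scheme.
    for t in range(num_steps):
        eps = model(x, t)                      # predicted noise at step t
        x = scheduler.reversed_step(eps, t, x)
    # Step 2: sample back from the latent space, adding classifier
    # gradient guidance towards the desired class label y=1 (healthy).
    for t in reversed(range(num_steps)):
        eps = model(x, t) + guidance_fn(x, t)  # guided noise estimate
        x = scheduler.step(eps, t, x)
    return x
```

With zero guidance and a deterministic scheduler, the decode loop exactly undoes the encode loop and returns the input image, which is the sanity check the counterfactual approach relies on.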
Hi @Warvito, I made the modifications for this PR last week (just mentioning it because I'm not sure I used the right GitHub flagging to indicate that the changes are done...).
Hi @SANCHES-Pedro, I do not think it updated. Would you like to jump on a call to try to fix it?
Looks great! Thanks again, Pedro!
Implementation of anomaly detection / weakly supervised segmentation by doing image manipulation with diffusion models.
Sanchez et al "What is Healthy? Generative Counterfactual Diffusion for Lesion Localization"
https://arxiv.org/abs/2207.12268
Issue 150