A short summary of the paper #14
aSleepyTree started this conversation in General
-
@aSleepyTree Great summary!
-
Excellent work!
Here is my understanding of this paper; if there are any mistakes, I hope you can point them out. Thank you very much.
As shown in Figure 2, previous methods such as RDDM and ResShift learn the path from $\hat X_0$ to $X_0$: $X_t = X_0 + \bar \alpha_t X_{res} + \bar \beta_t \epsilon$, where $\bar \alpha_t$ acts additively. Formally, they do not follow DDPM, whose forward process is $X_t = \sqrt{\bar \alpha_t} X_0 + \bar \beta_t \epsilon$, where $\sqrt{\bar \alpha_t}$ acts multiplicatively on $X_0$.
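A small sketch may make the additive vs. multiplicative distinction concrete. Everything below (function names, schedule shapes, endpoints) is illustrative and assumed for this comparison, not taken from either paper:

```python
import torch

T = 1000  # number of diffusion steps (illustrative)

# Hypothetical schedules, chosen only so the endpoints are easy to see;
# the actual papers define their own schedules.
alpha_bar_add = torch.linspace(0.0, 1.0, T + 1)  # additive path: grows 0 -> 1
alpha_bar_mul = torch.linspace(1.0, 0.0, T + 1)  # DDPM-style: decays 1 -> 0
beta_bar = torch.linspace(0.0, 1.0, T + 1)       # noise scale, placeholder

def forward_residual_additive(x0, x_res, t, eps):
    # RDDM/ResShift-style path: X_t = X_0 + alpha_bar_t * X_res + beta_bar_t * eps.
    # alpha_bar_t enters additively: at t=0 this is X_0, at t=T it is
    # X_0 + X_res plus noise; X_0 itself is never rescaled.
    return x0 + alpha_bar_add[t] * x_res + beta_bar[t] * eps

def forward_ddpm(x0, t, eps):
    # Standard DDPM forward process: X_t = sqrt(alpha_bar_t) * X_0 + beta_bar_t * eps.
    # sqrt(alpha_bar_t) enters multiplicatively: X_0 is progressively scaled away.
    return alpha_bar_mul[t].sqrt() * x0 + beta_bar[t] * eps
```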
This paper instead learns the path from $R$ to $X_0$: $X_t = \sqrt{\bar \alpha_t} X_0 + (1-\sqrt{\bar \alpha_t})R + \bar \beta_t \epsilon$, where $\sqrt{\bar \alpha_t}$ again acts multiplicatively. Some difficulties arise, such as $X_0$ being unknown in the reverse process (solution: the smooth equivalence transformation).
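Under the same illustrative schedules, this paper's forward process can be sketched the same way (`forward_this_paper` is my own name for it, and the endpoint behavior is my reading of the formula above):

```python
def forward_this_paper(x0, r, t, eps):
    # X_t = sqrt(alpha_bar_t) * X_0 + (1 - sqrt(alpha_bar_t)) * R + beta_bar_t * eps.
    # At t=0 (alpha_bar = 1) this reduces to X_0; at t=T (alpha_bar ~ 0) it is
    # approximately R plus noise, so the chain interpolates from R to X_0.
    s = alpha_bar_mul[t].sqrt()
    return s * x0 + (1.0 - s) * r + beta_bar[t] * eps
```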
The most important point is that **the form is consistent with DDPM, and almost all of the benefits of the method are derived from it.**
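One way to see this consistency explicitly (my own algebra, not necessarily the paper's exact derivation): subtracting $R$ from both sides of the forward process puts it in exactly the DDPM form,

$$
X_t - R = \sqrt{\bar \alpha_t}\,(X_0 - R) + \bar \beta_t \epsilon
\quad\Longleftrightarrow\quad
Y_t = \sqrt{\bar \alpha_t}\, Y_0 + \bar \beta_t \epsilon, \qquad Y_t := X_t - R.
$$

That is, the process is plain DDPM on the shifted variable $Y_t$, which would explain why the standard DDPM training and sampling machinery carries over.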