Alternative implementation of TV norm #2
Conversation
Thanks @bwohlberg, I love the simplicity and generality of this implementation over arbitrary inputs. I was confused in two places where I've added comments; I'd appreciate further clarification there.
# single-level shift-invariant Haar transform
self.W = VerticalStack((C0, C1), jit=True)
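For context (the definitions of C0 and C1 are elided from the diff above), here is a hypothetical sketch of how such operators might be built: two circular convolutions with the Haar sum and difference kernels, stacked into a single operator. The kernel values, 1D input shape, and `CircularConvolve` usage are illustrative assumptions, not the PR's actual code.

```python
import numpy as np
from scico.linop import CircularConvolve, VerticalStack

# Illustrative only: 1D signal and normalized Haar kernels assumed.
input_shape = (256,)
h0 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # lowpass (sum) kernel
h1 = np.array([1.0, -1.0]) / np.sqrt(2.0)  # highpass (difference) kernel
C0 = CircularConvolve(h0, input_shape=input_shape)
C1 = CircularConvolve(h1, input_shape=input_shape)
# Stacking the two convolutions gives W @ v = (C0 @ v, C1 @ v)
W = VerticalStack((C0, C1), jit=True)
```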
That's a clever way of doing addition and difference operations over an arbitrary number of dimensions. But does it also handle the shift by 1 that we were doing with the np.roll operator in the earlier implementation?
We don't need it here because the shift-invariant version of the highest-resolution Haar decomposition is computed by convolution, without the downsampling that would follow for the standard (shift-variant) Haar decomposition.
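To illustrate the point, a minimal 1D sketch (not the PR's code): applying the Haar kernels at every sample position, with circular boundary handling and no downsampling, already accounts for the neighboring sample, so no separate shift step is required.

```python
import jax.numpy as jnp

def haar_shift_invariant(v):
    # Single-level undecimated Haar transform via circular convolution.
    # jnp.roll here implements the circular boundary of the convolution;
    # since every output sample is kept (no downsampling), the transform
    # is shift-invariant and no separate shift step is needed.
    s = jnp.sqrt(2.0)
    lo = (v + jnp.roll(v, -1)) / s  # lowpass: sums of adjacent samples
    hi = (v - jnp.roll(v, -1)) / s  # highpass: differences of adjacent samples
    return jnp.stack((lo, hi))
```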
Wv = self.W @ v
# Apply ℓ1 shrinkage to highpass component of shift-invariant Haar transform
Wv = Wv.at[1].set(self.l1norm.prox(Wv[1], snp.sqrt(2) * K * lam))
I don't see how the prox of the L1 norm is equivalent to the shrinkage operator we had defined in the earlier implementation, which took the maximum of the difference from the threshold and zero.
According to my reading of the paper, the recommended shrinkage function is the prox of the ℓ1 norm.
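For reference, the prox of `lam * ||.||_1` is elementwise soft thresholding; a minimal sketch, assuming real-valued input:

```python
import jax.numpy as jnp

def soft_threshold(v, lam):
    # prox of lam * ||.||_1 for real v: shrink each element toward zero
    # by lam, clipping at zero: sign(v) * max(|v| - lam, 0)
    return jnp.sign(v) * jnp.maximum(jnp.abs(v) - lam, 0.0)
```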
Yes, that's correct. My question was more about scico's implementation of the ℓ1 prox: how is `tmp = 0.5 * (tmp + snp.abs(tmp))` equivalent to `tmp = snp.max(tmp, 0)`? That's what I'm not seeing. Thanks!
Yes, that is rather opaque. It works because, for any real x, 0.5 * (x + |x|) = max(x, 0): when x >= 0 the two terms add to 2x, and when x < 0 they cancel. So `0.5 * (tmp + snp.abs(tmp))` is just an elementwise maximum with zero.
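A quick check of the identity (a standalone sketch, not scico code):

```python
import jax.numpy as jnp

x = jnp.linspace(-2.0, 2.0, 9)
a = 0.5 * (x + jnp.abs(x))   # formulation used in the prox implementation
b = jnp.maximum(x, 0.0)      # explicit elementwise max with zero
assert jnp.allclose(a, b)    # the two agree for all real inputs
```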
OK to merge?
Yes, just verified that it gives the same result on our test example. I still have a question about how the convolution is carried out so effectively using the VerticalStack operators in scico, but we're OK to merge. 😊