
Alternative implementation of TV norm #2

Merged: 9 commits from tv-norm-alt-ver into tv-norm on Nov 5, 2023

Conversation

bwohlberg (Collaborator)

Alternative implementation of TV norm.

bwohlberg requested a review from shnaqvi on November 3, 2023 17:55
@shnaqvi (Owner) left a comment

Thanks @bwohlberg, I love the simplicity and generality of this implementation over arbitrary inputs. I was confused in two places, where I've added comments; I'd appreciate further clarification there.

]
)
# single-level shift-invariant Haar transform
self.W = VerticalStack((C0, C1), jit=True)
shnaqvi (Owner)

That's a clever way of doing the addition and difference operations over an arbitrary number of dimensions. But does it also handle the shift by 1 that we were doing with the np.roll operator in the earlier implementation?

bwohlberg (Collaborator, Author)

We don't need it here because the shift-invariant version of the highest-resolution Haar decomposition is computed by convolution, without the downsampling that would follow for the standard (shift-variant) Haar decomposition.
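To make that concrete, here is a small stand-alone check (a sketch, not the scico code: I'm assuming the earlier version formed adjacent sums/differences with np.roll, and I'm using plain jax.numpy with the two-tap Haar filters; the actual C0/C1 sign and shift conventions may differ). Circular convolution with the two-tap filters reproduces the roll-based result directly, with no explicit shift and no downsampling:

import jax.numpy as jnp

x = jnp.arange(8.0)

# roll-based adjacent sum/difference (the explicit "shift by 1" in the earlier version)
lo_roll = (x + jnp.roll(x, 1)) / jnp.sqrt(2.0)
hi_roll = (x - jnp.roll(x, 1)) / jnp.sqrt(2.0)

def circconv(x, h):
    # circular convolution via FFT; h is zero-padded to the length of x
    hp = jnp.zeros_like(x).at[: h.size].set(h)
    return jnp.real(jnp.fft.ifft(jnp.fft.fft(x) * jnp.fft.fft(hp)))

# the same values from circular convolution with the two-tap Haar filters,
# with no roll and no downsampling (shift-invariant / undecimated transform)
lo_conv = circconv(x, jnp.array([1.0, 1.0]) / jnp.sqrt(2.0))
hi_conv = circconv(x, jnp.array([1.0, -1.0]) / jnp.sqrt(2.0))

assert jnp.allclose(lo_roll, lo_conv) and jnp.allclose(hi_roll, hi_conv)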


Wv = self.W @ v
# Apply ℓ1 shrinkage to highpass component of shift-invariant Haar transform
Wv = Wv.at[1].set(self.l1norm.prox(Wv[1], snp.sqrt(2) * K * lam))
shnaqvi (Owner)

I don't see how the prox of the $\ell_1$ norm is equivalent to the shrinkage operator we had defined in the earlier implementation, which took the maximum of zero and the difference from the threshold.

bwohlberg (Collaborator, Author)

According to my reading of the paper, the recommended shrinkage function is the prox of the $\ell_1$ norm. (A more standard form would be to replace $y / |y|$ with $\text{sign}(y)$.)
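For reference, this is the standard closed form (my summary, not a quote from the paper), applied elementwise:

$$\mathrm{prox}_{\lambda \|\cdot\|_1}(y) = \arg\min_x \tfrac{1}{2} \|x - y\|_2^2 + \lambda \|x\|_1 = \max(|y| - \lambda, 0)\, \mathrm{sign}(y),$$

which is exactly the shrinkage / soft-thresholding operator.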

shnaqvi (Owner), Nov 5, 2023

Yes, that's correct. My question was more about the implementation of the prox of the $\ell_1$ norm in scico. How is that equivalent to $\max \left( \left| y \right| - \lambda, 0 \right) \text{sign}(y)$? In particular, how is line 101 (tmp = 0.5 * (tmp + snp.abs(tmp))) equivalent to tmp = snp.maximum(tmp, 0)? That's what I'm not seeing. Thanks!

bwohlberg (Collaborator, Author)

Yes, that is rather opaque. It works because $(z + |z|)/2 = \text{max}(z, 0) = (z)_+$. If $z \geq 0$ then $|z| = z$ so $(z + |z|)/2 = z$, and if $z < 0$ then $|z| = -z$ so $(z + |z|)/2 = 0$.
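A quick numerical check of that identity (a sketch in plain jax.numpy, not the scico source; the function names here are just for illustration):

import jax.numpy as jnp

def shrink_via_abs(y, lam):
    # soft thresholding written with the (z + |z|)/2 trick
    tmp = jnp.abs(y) - lam
    tmp = 0.5 * (tmp + jnp.abs(tmp))  # same values as jnp.maximum(tmp, 0)
    return tmp * jnp.sign(y)

def shrink_via_max(y, lam):
    # the max(|y| - lam, 0) * sign(y) form from the earlier implementation
    return jnp.maximum(jnp.abs(y) - lam, 0.0) * jnp.sign(y)

y = jnp.linspace(-3.0, 3.0, 13)
assert jnp.allclose(shrink_via_abs(y, 1.0), shrink_via_max(y, 1.0))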

bwohlberg (Collaborator, Author)

OK to merge?

shnaqvi merged commit 2963523 into tv-norm on Nov 5, 2023
shnaqvi (Owner) commented Nov 5, 2023

Yes, I just verified that it gives the same result on our test example. I still have a question about how the convolution is carried out so effectively using the VerticalStack operator in scico, but we're OK to merge. 😊

bwohlberg deleted the tv-norm-alt-ver branch on November 5, 2023 17:09