default stepsizes primal-dual solver #41
Conversation
I'm not sure why that is the case, but one general comment about the PR (which may or may not be related to the re-opening issue): It looks like you're opening the PR from the master branch of your fork. This is not ideal in general, since you want your local master branch to be in sync with the main repository's master branch; also, the PR contains 22 commits right at the start, which looks weird. The workflow you may want to use starts with the following setup:
This way you can pull master freely, and you will always get the hottest stuff from the main repository. When you need to work on something:
You'll see that the PR shows your commits from item 3 only (ideally a single commit at the beginning of the PR).
Looks good to me! Never mind the indentation issues, I'll do a pass over the whole code base with JuliaFormatter.
If you're fine with the change, we can merge it.
    end
elseif theta == 0 && mu == 0.5 # PPDCA
    if beta_l == 0 || beta_f == 0
        if nmL > par * max(beta_l, beta_f)
Still off by one?
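For background on the kind of defaults this PR is about (this is not the rule implemented here), a minimal sketch of the classical Chambolle–Pock / Condat–Vũ stepsize conditions, reusing the diff's names nmL = ‖L‖, beta_f (the Lipschitz constant of ∇f) and a safety factor par < 1; the function name default_stepsizes is hypothetical:

```julia
# Hypothetical sketch, not this solver's actual default rule.
# Condat–Vũ condition: 1/gamma - sigma * nmL^2 >= beta_f / 2,
# which reduces to gamma * sigma * nmL^2 <= 1 when beta_f == 0 (Chambolle–Pock).
function default_stepsizes(nmL, beta_f; par = 0.99)
    @assert nmL > 0 "this sketch assumes a nonzero linear operator"
    if beta_f == 0
        gamma = sigma = par / nmL   # gamma * sigma * nmL^2 = par^2 <= 1
    else
        gamma = 1 / beta_f                              # from the Lipschitz constant of ∇f
        sigma = par * (1 / gamma - beta_f / 2) / nmL^2  # keeps the condition strict
    end
    return gamma, sigma
end
```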
@pylat Regarding testing the linearity of functions (or maybe affinity?) we could have a dedicated predicate in ProximalOperators that takes all calculus rules into account. The only issue is…
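To make the idea concrete, here is a minimal sketch of such a trait-style predicate using made-up wrapper types; none of the names (Linear, Scaled, SumOf, is_linear) are existing ProximalOperators API, they only illustrate how linearity could propagate through calculus rules:

```julia
# Hypothetical sketch: propagate linearity through calculus-rule wrappers.
# These types are illustrative only and do not exist in ProximalOperators.
struct Linear{T}; a::T; end               # x -> dot(a, x)
struct Scaled{F,T}; f::F; alpha::T; end   # x -> alpha * f(x)
struct SumOf{F,G}; f::F; g::G; end        # x -> f(x) + g(x)

is_linear(::Any) = false                                # conservative default
is_linear(::Linear) = true
is_linear(f::Scaled) = is_linear(f.f)                   # scaling preserves linearity
is_linear(f::SumOf) = is_linear(f.f) && is_linear(f.g)  # so does summation
```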
if isa(h, Zero) # case h(Lx) equiv 0
    L = R(0)*I
    y0 = zero(x0)
Indeed it won't affect the primal variable. When the relaxation parameter lambda is not equal to 1, we end up updating u^+ = (1 - lambda) u; that is why I set it to zero. Otherwise we can just let it scale the duals throughout the iterations.
So this whole check does not really fix a bug, does it, @pylat? Just trying to understand. It really just affects the dual variables, which one doesn't care much about whenever h == 0.
Without L = R(0)*I it was a bug (it was using the default L = identity). With the default L we had barx = prox_g(x - \gamma u - \gamma \nabla f(x)), where the dual variable was converging to zero. So it is incorrect without L = 0, but it just happened to converge often.
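Spelling this out (a sketch assuming a standard Condat–Vũ-type relaxed primal-dual update; the exact ordering in this solver may differ):

```math
\begin{aligned}
\bar{x} &= \operatorname{prox}_{\gamma g}\big(x - \gamma \nabla f(x) - \gamma L^{*} u\big),\\
\bar{u} &= \operatorname{prox}_{\sigma h^{*}}\big(u + \sigma L (2\bar{x} - x)\big),\\
x^{+} &= x + \lambda (\bar{x} - x), \qquad u^{+} = u + \lambda (\bar{u} - u).
\end{aligned}
```

When h ≡ 0, its conjugate h^* is the indicator of {0}, so prox_{\sigma h^*} maps everything to zero, \bar{u} = 0, and u^+ = (1 - \lambda) u as described above. With L = 0 the primal step reduces to prox_{\gamma g}(x - \gamma \nabla f(x)) and never sees the dual; with the default L = I it keeps the extra -\gamma u term, which only disappears as the dual decays.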
Got it! It’s just that h(1x) = h(0x) for all x whenever h == 0, so the problems are equivalent in that case (but apparently the iterations are not)
Thanks Lorenzo, I will keep this in mind. The PR closing was also probably automatic when I pushed to master.
It seems I have somehow closed the previous pull request. I couldn't find a way to reopen it, so I am recreating a pull request with the issues of the previous one fixed (hopefully).
Edit: replaces #40