Remove dirty flag #2797
I could definitely see other custom attributes with similar behavior, adding information on top of the solution without modifying the existing model.
Can you provide an example where this is a deal-breaker? There are also other work-arounds, like calling …

I'm not in favor of removing this flag. Setting it resolves a serious pain-point for users: it's too easy for them to end up with incorrect solutions otherwise.
With DiffOpt, you optimize once, and then you can compute derivatives by acting as a matrix product (or adjoint matrix product). You might compute products with several vectors, but you only want to solve once. Even if you want only one product, you don't want to force the user to specify the gradient before calling optimize.
Having to use these workarounds for DiffOpt and all other use cases wouldn't be ideal.
I agree that it's a pain point. I also agree that we may not want to handle it at the level of all MOI wrappers, as you argue in #2709 (comment), in case the user wants to experience the interaction with the solver in DIRECT mode.
I meant in code. What is the syntax? What are the alternatives?
What other use-cases are there?
Here are some examples. One could be a feasibility interface: you set constraint attributes that give a tolerance and which norm to use, then you can ask whether the solution is feasible. Then you can change the tolerance and ask again. Another example is conflicts: suppose you could set attributes to parametrize conflict resolution. Setting these attributes would not require calling `optimize!` again.
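For concreteness, here is a minimal sketch of what such a feasibility interface could look like. The attribute names `FeasibilityTolerance` and `IsFeasible` are hypothetical, not part of MOI; they only illustrate the set-then-query pattern that should not invalidate the stored solution:

```julia
import MathOptInterface as MOI

# Hypothetical attributes for a feasibility-checking layer (these names do
# not exist in MOI). Setting the tolerance is metadata only, so no re-solve
# should be required between the two queries.
struct FeasibilityTolerance <: MOI.AbstractConstraintAttribute end
struct IsFeasible <: MOI.AbstractConstraintAttribute end

# Intended usage:
#
#   MOI.set(model, FeasibilityTolerance(), ci, 1e-6)
#   MOI.get(model, IsFeasible(), ci)   # check at tolerance 1e-6
#   MOI.set(model, FeasibilityTolerance(), ci, 1e-4)
#   MOI.get(model, IsFeasible(), ci)   # check again, without re-optimizing
```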
DiffOpt already has a JuMP interface, so why can't it just provide different helper methods for setting these attributes? That is,

```julia
for Xi in 1:N
    dy[Xi] = 1.0  # set
    MOI.set(
        model,
        DiffOpt.ForwardInConstraint(),
        cons,
        MOI.Utilities.vectorize(dy .* b),
    )
    DiffOpt.forward(model)
    dw = MOI.get.(model, DiffOpt.ForwardOutVariablePrimal(), w)
    db = MOI.get(model, DiffOpt.ForwardOutVariablePrimal(), b)
    push!(∇, norm(dw) + norm(db))
    dy[Xi] = 0.0  # reset the change made above
end
```

becomes

```julia
function DiffOpt.set_forward_value(model, cons, dy, b)
    MOI.set(
        backend(model),
        DiffOpt.ForwardInConstraint(),
        index.(cons),
        MOI.Utilities.vectorize(dy .* index.(b)),
    )
    return
end

function DiffOpt.forward_value(model, x)
    return MOI.get(model, DiffOpt.ForwardOutVariablePrimal(), x)
end

for Xi in 1:N
    dy[Xi] = 1.0  # set
    set_forward_value(model, cons, dy, b)
    DiffOpt.forward(model)
    dw = forward_value(model, w)
    db = forward_value(model, b)
    push!(∇, norm(dw) + norm(db))
    dy[Xi] = 0.0  # reset the change made above
end
```
This is going to be solver-dependent, so the user should use direct mode. Then we wouldn't need to change this.
Only using direct mode would prevent adding a layer on top of DiffOpt. You would say that users still have to call …
This is a JuMP-only solution. It doesn't matter what the layers are. All we need to do is skip these flags (see lines 1273 to 1303 in f5a7a85).
The easiest solution is for DiffOpt to implement a different function to provide a nicer interface at the JuMP level. This doesn't affect users of MathOptInterface who don't use JuMP. |
This was fixed in DiffOpt: jump-dev/DiffOpt.jl#154 -- although not the way I suggested above. I'll reiterate that I don't think we need to change JuMP; this can be solved by other packages.
So can this be closed?
The suggestion from today's call was to consider moving this to MOI. The full list of options is:
For 3 in particular, we discussed how there is no good way to know which attributes may invalidate a solution in any solver. It's a question of "does there exist a solver for which setting this attribute may invalidate the solution?" The only way to answer that question is to be defensive and say "yes" to all attributes, which is the current implementation in JuMP. We also discussed how JuMP extensions that want different behavior (e.g., DiffOpt) could overload a function whose default is `JuMP.may_invalidate_solution(::MOI.AnyAttribute) = true`, and then DiffOpt can define methods that return `false` for its own attributes.
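A minimal sketch of that trait-based design, assuming a hypothetical `may_invalidate_solution` function and an `is_model_dirty` field on the model (neither is committed API):

```julia
using JuMP
import MathOptInterface as MOI

# Hypothetical trait (not existing JuMP API): be defensive by default and
# assume that setting any attribute invalidates the solution.
may_invalidate_solution(::MOI.AnyAttribute) = true

# An extension such as DiffOpt could then opt out for its own attributes:
#
#   may_invalidate_solution(::DiffOpt.ForwardInConstraint) = false

# JuMP's setter would consult the trait before flagging the model dirty;
# the `is_model_dirty` field is assumed here for illustration.
function set_attribute_sketch(model::Model, attr, value)
    MOI.set(backend(model), attr, value)
    if may_invalidate_solution(attr)
        model.is_model_dirty = true
    end
    return
end
```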
I'm in favor of 1.
> Why does DiffOpt require this ability?

This is perfectly compatible with DiffOpt because of the following. Solvers that want to still allow users to modify-then-query would have an optimizer attribute that enables it, but it wouldn't be the default. Since querying after modifying is ambiguous and error-prone, it seems reasonable to throw an error by default and require the user to explicitly set an option for it to work.
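A sketch of that opt-in switch, assuming a hypothetical `AllowModifyThenQuery` optimizer attribute (not existing MOI API):

```julia
import MathOptInterface as MOI

# Hypothetical opt-in attribute (not existing MOI API): solvers that can
# safely answer queries after a modification would support it; the
# defensive default remains to throw an error.
struct AllowModifyThenQuery <: MOI.AbstractOptimizerAttribute end

# A user who knows their solver supports it would opt in explicitly:
#
#   MOI.set(model, AllowModifyThenQuery(), true)
```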
I'm not in favor of 1. It's too much of a breaking change to introduce prior to 1.0, and that's a lot of work I want to avoid. Going with 1 means that we need a way of knowing which attributes invalidate the solution and when; that's going to lead to complexity. I'm in favor of 2 or 4.

Given that we have resolved the situation in DiffOpt, is there really a call to make changes in JuMP? The issue in PowerSimulations was a modify-then-query of getting and setting primal starts. That works for most solvers, but it isn't guaranteed to work for all. The current implementation is simple code-wise, and also simple for users to understand: do not modify-then-query in any situation.
@jd-lara says we should add a warning that the model has been modified, not just that the optimizer hasn't been called. That involves modifying this function (see lines 1237 to 1249 in b7a84e5).
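A sketch of the distinction @jd-lara is suggesting; the field names here are assumed for illustration and do not match JuMP's internals exactly:

```julia
# Sketch only: `has_been_optimized` and `is_model_dirty` are assumed fields,
# and the exact error text is illustrative, not current JuMP behavior.
function check_result_access(model)
    if !model.has_been_optimized      # never solved: the existing error
        error("optimize! has not been called.")
    elseif model.is_model_dirty       # solved, then modified: the new message
        error(
            "The model has been modified since optimize! was last called. " *
            "Call optimize! again before querying results.",
        )
    end
    return
end
```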
The dirty flag added in #2709 removes some flexibility for the MOI backend, as it assumes that setting an attribute always invalidates the solution. However, with DiffOpt, you can set attributes that are the gradient and ask for forward and backward differentiation, and you can then set a different gradient and differentiate again, without needing to reoptimize.

I'm tempted to say that we should revert #2709 and rely on MOI models to implement `MOI.TerminationStatus` consistently, the consistency being checked by MOI tests as we've done with the rest of the API. Another way to make this work is to add a function in MOI, `does_setting_this_attributes_requires_a_resolve`, so that DiffOpt can implement a method that returns `false`.
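A sketch of that second idea; the function name comes from this issue, while the signature and defensive default are assumptions, not existing MOI API:

```julia
import MathOptInterface as MOI

# Sketch only: the name is from this issue text; the fallback assumes the
# defensive default of requiring a re-solve for every attribute.
does_setting_this_attributes_requires_a_resolve(::MOI.AnyAttribute) = true

# DiffOpt could then declare that its differentiation inputs are safe:
#
#   does_setting_this_attributes_requires_a_resolve(
#       ::DiffOpt.ForwardInConstraint,
#   ) = false
```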