SolverTools revampt TODO list #3
Is there progress in implementing this? I can work on a PR if no one is working on it. Below is a draft:

```julia
"""
    AbstractNLPSolver

Base type for an optimization solver.
"""
abstract type AbstractNLPSolver{T, S} end

"""
    optimize!(solver)

Call the optimization solver to solve the problem.
"""
function optimize! end

"""
    get_execution_stats(solver)

Return the ExecutionStats of the solver.
"""
function get_execution_stats end

"""
    get_nlp(solver)

Return the NLPModel of the solver.
"""
function get_nlp end

"""
    get_elapsed_time(solver)

Return the elapsed time.
"""
function get_elapsed_time end

"""
    get_status(solver)

Return the solver status.
"""
function get_status end

"""
    get_objective(solver)

Return the objective value.
"""
function get_objective end

"""
    get_dual_feas(solver)

Return the dual feasibility.
"""
function get_dual_feas end

"""
    get_primal_feas(solver)

Return the primal feasibility.
"""
function get_primal_feas end

"""
    get_iter(solver)

Return the iteration count.
"""
function get_iter end

"""
    get_solution(solver)

Return the solution.
"""
function get_solution end

"""
    get_multipliers(solver)

Return the multipliers.
"""
function get_multipliers end

"""
    get_multipliers_L(solver)

Return the lower bound multipliers.
"""
function get_multipliers_L end

"""
    get_multipliers_U(solver)

Return the upper bound multipliers.
"""
function get_multipliers_U end

"""
    set_solution!(solver, x)

Set the initial guess of the solution.
"""
function set_solution!(solver::AbstractNLPSolver, x::AbstractVector)
  copyto!(get_solution(solver), x)
end

"""
    set_multipliers!(solver, mult)

Set the initial guess of the multipliers.
"""
function set_multipliers!(solver::AbstractNLPSolver, mult::AbstractVector)
  copyto!(get_multipliers(solver), mult)
end

"""
    set_multipliers_L!(solver, mult_L)

Set the initial guess of the lower bound multipliers.
"""
function set_multipliers_L!(solver::AbstractNLPSolver, mult_L::AbstractVector)
  copyto!(get_multipliers_L(solver), mult_L)
end

"""
    set_multipliers_U!(solver, mult_U)

Set the initial guess of the upper bound multipliers.
"""
function set_multipliers_U!(solver::AbstractNLPSolver, mult_U::AbstractVector)
  copyto!(get_multipliers_U(solver), mult_U)
end
```
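For concreteness, here is a minimal sketch (not part of the original comment) of how a hypothetical concrete solver could hook into this draft interface. The `DummySolver` name, its fields, and the trivial `optimize!` are assumptions made purely for illustration:

```julia
mutable struct DummySolver{T, S} <: AbstractNLPSolver{T, S}
  nlp             # the model being solved
  x::S            # current primal iterate, also used as the solution
  status::Symbol  # solver status, e.g. :unknown or :first_order
end

DummySolver(nlp, x0::S) where {S} = DummySolver{eltype(S), S}(nlp, copy(x0), :unknown)

# Methods of the draft interface for this toy type.
get_nlp(solver::DummySolver) = solver.nlp
get_solution(solver::DummySolver) = solver.x
get_status(solver::DummySolver) = solver.status

# Placeholder optimize!: a real solver would iterate on solver.x here.
function optimize!(solver::DummySolver)
  solver.status = :first_order
  return solver
end
```

A user could then write `solver = DummySolver(nlp, nlp.meta.x0); optimize!(solver); get_solution(solver)`, and `set_solution!` above would allow warm starting the next call.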
Thanks @sshin23! There's no [...]. Also, we separate the model from the solver and from the solver results. A solver object is created from an [...]. I would say [...]. cc-ing @tmigot @abelsiqueira @geoffroyleconte
Our plan was to start from the bottom up, creating the solver structures for JSOSolvers (e.g. trunk) and Percival (JuliaSmoothOptimizers/Percival.jl#80), and then create an [...].

Our main underlying idea is to easily benchmark solvers with SolverBenchmark, and for that we have been assuming JSO-compliance (input: [...]). The missing thing for more efficient reuse is storing the [...].

Let me know your opinions on the plan. The development has slowed down a lot for me since I changed jobs in late 2021, so help is appreciated.
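As a point of reference (not part of the original comment), a minimal sketch of what a JSO-compliant solver looks like, assuming the convention discussed in this thread: an `AbstractNLPModel` in, a `GenericExecutionStats` out. The solver body is a placeholder, and the constructor keywords follow the code quoted later in the thread:

```julia
using LinearAlgebra
using NLPModels, SolverCore

# Placeholder JSO-compliant solver: model in, stats out.
function dummy_solver(nlp::AbstractNLPModel; max_iter = 100)
  x = copy(nlp.meta.x0)   # start from the model's initial guess
  # ... the actual iterations would update x here ...
  return GenericExecutionStats(
    :first_order,
    nlp,
    solution = x,
    objective = obj(nlp, x),
    dual_feas = norm(grad(nlp, x)),
    iter = 0,
  )
end
```

Any set of functions with this shape can then be compared on a list of problems with SolverBenchmark.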
Thanks, @dpo and @abelsiqueira for the fast feedback! Apologies for the delay on my side.
MadNLP.jl currently has the exact same structure. And I believe wrapper packages like [...]. The reason that I proposed separate [...]. And we can make [...]. So, a typical structure of a high-level interface would be like:

```julia
function ipopt(nlp::AbstractNLPModel; kwargs...)
  solver = IpoptSolver(nlp, kwargs...)
  SolverCore.optimize!(solver)
  return SolverCore.GenericExecutionStats(solver)
end
```

where we have this in SolverCore:

```julia
function GenericExecutionStats(solver)
  GenericExecutionStats(
    get_status(solver),
    get_nlp(solver),
    solution = get_solution(solver),
    objective = get_objective(solver),
    dual_feas = get_dual_feas(solver),
    iter = get_iter(solver),
    primal_feas = get_primal_feas(solver),
    elapsed_time = get_elapsed_time(solver),
    multipliers = get_multipliers(solver),
    multipliers_L = get_multipliers_L(solver),
    multipliers_U = get_multipliers_U(solver),
    solver_specific = get_solver_specific(solver),
  )
end
```

Feedback is welcome. I can work on PRs on [...].
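To illustrate how such a split might be used from the caller's side (this snippet is not from the original comment; `new_x0` is a hypothetical vector, and the setter comes from the draft interface in the first comment):

```julia
# One-shot use, same shape as the current JSO-compliant interface:
stats = ipopt(nlp)

# Reusing the solver object across solves, e.g. for warm starts:
solver = IpoptSolver(nlp)
SolverCore.optimize!(solver)
set_solution!(solver, new_x0)   # hypothetical warm start from a different point
SolverCore.optimize!(solver)
stats = SolverCore.GenericExecutionStats(solver)
```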
Thanks @sshin23. Those are interesting suggestions. I see that there's room for improvement in our current system. In order to avoid storing data both in the solver and the stats object, I would suggest something along the lines of:

```julia
mutable struct MySolver <: AbstractSolver
  # ...
end

function solve!(solver::MySolver, model::AbstractNLPModel; kwargs...)
  stats = GenericExecutionStats(:unknown, model)
  solve!(solver, model, stats; kwargs...)
end

function solve!(solver::MySolver, model::AbstractNLPModel, stats::GenericExecutionStats; kwargs...)
  # actual solve happens
  # fill in stats object
end
```

You could then call [...].
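A usage sketch of this pattern (not from the original comment; the no-argument `MySolver()` constructor and reusing a single `stats` object across calls are assumptions):

```julia
solver = MySolver()                             # allocate the solver workspace once
stats  = GenericExecutionStats(:unknown, model)

solve!(solver, model, stats)   # first solve fills `stats` in place
# ... change the starting point or tweak the model ...
solve!(solver, model, stats)   # second solve reuses both objects, no reallocation
```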
Thanks, @dpo, for the feedback. That approach will resolve the issue of recreating [...]. One remaining question is how we reset the initial guess of the problem. Currently, the solvers in [...]. If I want to re-solve the problem with an updated initial guess (both primal and dual), should I update [...]?
You can pass in x0: https://github.com/JuliaSmoothOptimizers/JSOSolvers.jl/blob/main/src/trunk.jl#L120
We should make sure all solvers allow that.
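For example (this snippet is not from the thread; the keyword name `x` is an assumption, the actual name is whatever the linked line of trunk.jl defines):

```julia
using JSOSolvers

stats1 = trunk(nlp)                        # first solve, starting from nlp.meta.x0
stats2 = trunk(nlp, x = stats1.solution)   # re-solve from the previous solution
```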
Discussion is happening here: JuliaSmoothOptimizers/Organization#19