
add gradient cache in Optimizer #66

Merged (10 commits into master, Feb 16, 2021)
Conversation

matbesancon (Collaborator):
Fixes #61.

Still failing because of #65.

matbesancon (Collaborator, Author):

Closes #68

codecov bot commented Jan 31, 2021

Codecov Report

Merging #66 (7941882) into master (0a8781e) will decrease coverage by 3.09%.
The diff coverage is 77.77%.

@@            Coverage Diff             @@
##           master      #66      +/-   ##
==========================================
- Coverage   88.33%   85.23%   -3.10%     
==========================================
  Files           4        4              
  Lines         300      359      +59     
==========================================
+ Hits          265      306      +41     
- Misses         35       53      +18     
Impacted Files       Coverage Δ
src/utils.jl         75.00% <22.22%> (-19.00%) ⬇️
src/MOI_wrapper.jl   85.54% <91.66%> (+0.71%) ⬆️


matbesancon (Collaborator, Author):

OSQP appears to have more inaccuracy / variation in the final result; that's why I switched to Ipopt.

blegat (Member) commented Feb 1, 2021:

> OSQP appears to have more inaccuracy / variation in the final result; that's why I switched to Ipopt.

Yes, when I use it for the tests, I usually increase the accuracy, e.g.

"eps_abs" => 1e-8,
"eps_rel" => 1e-8,
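As a sketch, those OSQP settings could be passed through JuMP's `optimizer_with_attributes` (the `eps_abs`/`eps_rel` names are real OSQP settings; the surrounding model setup is illustrative, not from this PR):

```julia
using JuMP, OSQP  # assumes JuMP.jl and OSQP.jl are installed

# Tighten OSQP's absolute and relative termination tolerances so that
# optimal values are reproducible enough for tight test comparisons;
# OSQP's defaults are too loose for near-exact checks.
model = Model(optimizer_with_attributes(
    OSQP.Optimizer,
    "eps_abs" => 1e-8,
    "eps_rel" => 1e-8,
))
```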

@@ -73,6 +73,7 @@ mutable struct Optimizer{OT <: MOI.ModelLike} <: MOI.AbstractOptimizer
     dual_optimal::Vector{Union{Vector{Float64}, Float64}} # refer - https://github.com/jump-dev/DiffOpt.jl/issues/21#issuecomment-651130485
     var_idx::Vector{VI}
     con_idx::Vector{CI}
+    gradient_cache::NamedTuple
blegat (Member):

Why not use Union{Nothing, NamedTuple{...}}, or even Union{Nothing, Cache} with an actual struct? Currently, the field is type-unstable.

matbesancon (Collaborator, Author):

This will need at least Union{Nothing, NamedTuple{T1}, NamedTuple{T2}} for the cache, since we have different possible cache types depending on whether the problem is conic or not.

blegat (Member):

I'd still prefer struct Cache1 end; struct Cache2 end; Union{Nothing, Cache1, Cache2}; otherwise it's not so easy to follow what's going on.
Moreover, it seems to me that NamedTuples are for cases where you don't know the struct a priori, i.e. for creating structs on the fly, whereas here we know the struct very well at coding time, so we should use actual structs, which are more readable.
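The suggested pattern could look like the following sketch. The struct names and fields here are hypothetical placeholders, not DiffOpt's actual internals:

```julia
# Hypothetical cache structs for the two problem classes discussed above;
# the real field lists would come from DiffOpt's differentiation code.
struct QPCache
    lhs::Matrix{Float64}        # illustrative field
    rhs::Vector{Float64}        # illustrative field
end

struct ConicCache
    M::Matrix{Float64}          # illustrative field
    vp::Vector{Float64}         # illustrative field
end

mutable struct Optimizer
    # A small Union over concrete structs: Julia can union-split accesses
    # into cheap branches, whereas a bare `::NamedTuple` field is an
    # abstract type and forces dynamic dispatch on every access.
    gradient_cache::Union{Nothing, QPCache, ConicCache}
end
```

The design point is that Union{Nothing, QPCache, ConicCache} is a closed set of concrete types the compiler can reason about, while NamedTuple on its own is abstract, which is what makes the original field type-unstable.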

matbesancon (Collaborator, Author):

bump here

matbesancon merged commit f2be16c into master on Feb 16, 2021.
matbesancon deleted the add-gradient-cache branch on February 16, 2021 at 12:24.
Development

Successfully merging this pull request may close these issues.

Cache gradients
2 participants