Macros should use add_variables & add_constraints rather than singular methods #1939
How do you think this should work in JuMP?
This is easier to do for
At some point we discussed that this could be implemented in LQOI: JuMP would stay the way it is and LQOI could do some caching. Another interesting idea would be a mixed mode:
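To make the caching idea concrete, here is a minimal sketch (every name in it is hypothetical, not LQOI's actual internals): singular adds are recorded in a buffer, and the buffer is handed to the solver in one plural call.

```julia
import MathOptInterface
const MOI = MathOptInterface

# Hypothetical buffer of pending constraints, parameterized on the
# function and set types so a homogeneous batch stays concretely typed.
struct BufferedAdds{F,S}
    funcs::Vector{F}
    sets::Vector{S}
end

BufferedAdds{F,S}() where {F,S} = BufferedAdds(F[], S[])

# The "singular" entry point just records the constraint.
function buffered_add!(buffer::BufferedAdds, func, set)
    push!(buffer.funcs, func)
    push!(buffer.sets, set)
    return length(buffer.funcs)
end

# Flushing hands the whole batch to the solver with one plural call.
function flush_adds!(optimizer, buffer::BufferedAdds)
    indices = MOI.add_constraints(optimizer, buffer.funcs, buffer.sets)
    empty!(buffer.funcs)
    empty!(buffer.sets)
    return indices
end
```

A mixed mode would then just be a policy for when to flush: eagerly for solvers with cheap singular adds, lazily (say, at solve time) for solvers like Clp where one-by-one addition is slow.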
By the way,
It looks like `add_constraints` already branches for the "plural version". Besides the names, I don't see why that is actually necessary. Even if you are only adding one constraint you can still use `add_constraints`. We could always add a non-homogeneous fallback to `add_constraints`.
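Such a non-homogeneous fallback could be as simple as the sketch below (`add_constraints_fallback` is a hypothetical helper, not an existing MOI method): use the plural method when the element types are concrete, and loop over the singular method otherwise.

```julia
import MathOptInterface
const MOI = MathOptInterface

# Hypothetical fallback: batch homogeneous vectors through the plural
# method, and fall back to one-by-one addition for mixed-type vectors.
function add_constraints_fallback(model, funcs::Vector{F}, sets::Vector{S}) where {F,S}
    if isconcretetype(F) && isconcretetype(S)
        # Homogeneous batch: a single call the solver can vectorize.
        return MOI.add_constraints(model, funcs, sets)
    else
        # Non-homogeneous: add each constraint with the singular method.
        return [MOI.add_constraint(model, f, s) for (f, s) in zip(funcs, sets)]
    end
end
```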
No need for the specialized methods. You can see the benefit in the benchmarks below.

Benchmarking code:

```julia
using JuMP
using Clp
using GLPK
using BenchmarkTools
using Random
using SparseArrays
using Profile

# Build via the default caching backend.
function build_model(solver, sp_mat; opt_options::NamedTuple=NamedTuple())
    model = Model(with_optimizer(solver.Optimizer; opt_options...))
    num_variables = size(sp_mat, 2)
    vars = @variable(model, 0 <= vars[i = 1:num_variables] <= 0)
    cons = @constraint(model, cons, sp_mat * vars .== 0)
    return model
end

# Build directly against the solver, with no caching layer.
function build_model_direct(solver, sp_mat; opt_options::NamedTuple=NamedTuple())
    model = JuMP.direct_model(solver.Optimizer(; opt_options...))
    num_variables = size(sp_mat, 2)
    vars = @variable(model, 0 <= vars[i = 1:num_variables] <= 0)
    cons = @constraint(model, cons, sp_mat * vars .== 0)
    return model
end

num_vars = 10_000
num_cons = 10_000
density = 0.001
rng = Random.MersenneTwister(1)
sp_mat = sprand(rng, num_cons, num_vars, density)

direct_build_clp = @benchmark model = build_model_direct(Clp, sp_mat, opt_options = (LogLevel = 1, MaximumSeconds = 10.0))
build_clp = @benchmark model = build_model(Clp, $sp_mat, opt_options = (LogLevel = 1, MaximumSeconds = 10.0))
opt_clp = @benchmark optimize!(model) setup = (model = build_model(Clp, $sp_mat, opt_options = (LogLevel = 0, MaximumSeconds = 10.0)))
```

Master:

```julia
julia> direct_build_clp = @benchmark model=build_model_direct(Clp,sp_mat,opt_options=(LogLevel=1,MaximumSeconds=10.0))
BenchmarkTools.Trial:
memory estimate: 1.56 GiB
allocs estimate: 959800
--------------
minimum time: 2.868 s (6.70% GC)
median time: 3.095 s (7.84% GC)
mean time: 3.095 s (7.84% GC)
maximum time: 3.323 s (8.83% GC)
--------------
samples: 2
evals/sample: 1
julia> build_clp = @benchmark model=build_model(Clp,$sp_mat,opt_options=(LogLevel=1,MaximumSeconds=10.0))
BenchmarkTools.Trial:
memory estimate: 30.32 MiB
allocs estimate: 407369
--------------
minimum time: 50.563 ms (10.68% GC)
median time: 68.254 ms (15.08% GC)
mean time: 71.712 ms (16.95% GC)
maximum time: 187.390 ms (57.66% GC)
--------------
samples: 70
evals/sample: 1
julia> opt_clp =@benchmark optimize!(model) setup=(model=build_model(Clp,$sp_mat,opt_options=(LogLevel=0,MaximumSeconds=10.0)))
BenchmarkTools.Trial:
memory estimate: 43.94 MiB
allocs estimate: 900321
--------------
minimum time: 2.318 s (1.52% GC)
median time: 2.323 s (1.85% GC)
mean time: 2.354 s (2.92% GC)
maximum time: 2.421 s (5.29% GC)
--------------
samples: 3
evals/sample: 1
```

With above PRs:

```julia
julia> direct_build_clp = @benchmark model=build_model_direct(Clp,sp_mat,opt_options=(LogLevel=1,MaximumSeconds=10.0))
BenchmarkTools.Trial:
memory estimate: 1.56 GiB
allocs estimate: 979800
--------------
minimum time: 3.035 s (9.45% GC)
median time: 3.035 s (8.12% GC)
mean time: 3.035 s (8.12% GC)
maximum time: 3.036 s (6.79% GC)
--------------
samples: 2
evals/sample: 1
julia> build_clp = @benchmark model=build_model(Clp,$sp_mat,opt_options=(LogLevel=1,MaximumSeconds=10.0))
BenchmarkTools.Trial:
memory estimate: 30.32 MiB
allocs estimate: 407369
--------------
minimum time: 53.355 ms (12.94% GC)
median time: 67.650 ms (15.54% GC)
mean time: 69.863 ms (17.19% GC)
maximum time: 160.476 ms (60.86% GC)
--------------
samples: 72
evals/sample: 1
julia> opt_clp =@benchmark optimize!(model) setup=(model=build_model(Clp,$sp_mat,opt_options=(LogLevel=0,MaximumSeconds=10.0)))
BenchmarkTools.Trial:
memory estimate: 32.56 MiB
allocs estimate: 780344
--------------
minimum time: 116.185 ms (15.59% GC)
median time: 163.528 ms (22.02% GC)
mean time: 170.163 ms (26.51% GC)
maximum time: 284.184 ms (54.23% GC)
--------------
samples: 18
evals/sample: 1
```
The specialized
@joaquimg do you have a benchmark that would indicate how fast `copy_to` should be? Is there still a lot of low-hanging fruit?
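One rough way to time `copy_to` on its own might be the sketch below; it assumes the packages and `build_model` from the benchmark above are in scope, and that copying into a fresh `Clp.Optimizer` per evaluation is an acceptable proxy.

```julia
import MathOptInterface
const MOI = MathOptInterface

# Time MOI.copy_to in isolation: build the cached model in setup, then
# copy it into a fresh Clp optimizer on each evaluation.
copy_clp = @benchmark MOI.copy_to(dest, src) setup = (
    src = JuMP.backend(build_model(Clp, sp_mat, opt_options = (LogLevel = 0, MaximumSeconds = 10.0)));
    dest = Clp.Optimizer()
)
```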
Nope. About changing the macros,
Since you are interested in the performance issues, you might want to look at this one: #1905
We could create an array of the `ScalarVariable`s and add them all at once with a function like:

```julia
function add_variables(model::JuMP.Model, variables::Vector{ScalarVariable}, names::Vector{String})
    variable_indices = MOI.add_variables(JuMP.backend(model), length(variables))
    # set bounds and names...
end
```
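The "set names" step could use the standard `MOI.VariableName` attribute; a sketch (`set_names` is a hypothetical helper):

```julia
import MathOptInterface
const MOI = MathOptInterface

# After the batched add_variables call, names are set one index at a time
# via the VariableName attribute on the model's MOI backend.
function set_names(model::JuMP.Model, variable_indices, names::Vector{String})
    for (index, name) in zip(variable_indices, names)
        MOI.set(JuMP.backend(model), MOI.VariableName(), index, name)
    end
    return
end
```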
jump-dev/MathOptInterface.jl#696 made it so that the default `copy_to` now uses `add_variables`. This is the `direct_model` side of the same change that was made to MOI.
Nice! I support this!
See #2748 for an investigation into this. |
I would like to advocate that the `@constraint` and `@variable` macros should be updated to use `add_constraints` and `add_variables` rather than `add_constraint.(...)` and `add_variable.(...)`.

The issue here ends up being with solvers where adding constraints one by one is very slow, which is particularly true with `direct_model` on a solver like Clp. With the caching bridge and jump-dev/Clp.jl#57, the build is roughly 40x faster in benchmarking than the `direct_model` benchmark.
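To make the proposed change concrete at the MOI level, here is a sketch of the two call patterns (it uses a recent `MOI.Utilities.Model` as a stand-in solver; the data is made up):

```julia
import MathOptInterface
const MOI = MathOptInterface

model = MOI.Utilities.Model{Float64}()
x = MOI.add_variables(model, 3)
funcs = [MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(1.0, xi)], 0.0) for xi in x]
sets = [MOI.EqualTo(0.0) for _ in x]

# Broadcasting the singular method: one add_constraint call per element.
singular = MOI.add_constraint.(Ref(model), funcs, sets)

# The plural method: one call for the whole homogeneous batch, which a
# solver wrapper can turn into a single batched API call.
plural = MOI.add_constraints(model, funcs, sets)
```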