How to release a Gurobi license? #268
You want something like:

using JuMP, Gurobi
env = Gurobi.Env()
model = Model(with_optimizer(Gurobi.Optimizer, env))
# ...
Gurobi.free_env(env)

Hmm, this looks like it should be fixed.

Cool! Thanks for the quick answer! Your solution works perfectly well. The license is being correctly released.
Would you know why? The piece of code that I am playing with is:

using JuMP, Gurobi, Test
const MOI = JuMP.MathOptInterface
"""
example_knapsack(; verbose = true)
Formulate and solve a simple knapsack problem:
max sum(p_j x_j)
st sum(w_j x_j) <= C
x binary
"""
function example_knapsack(; verbose=true)
profit = [5, 3, 2, 7, 4]
weight = [2, 8, 4, 2, 5]
capacity = 10
env = Gurobi.Env()
model = Model(with_optimizer(Gurobi.Optimizer, env))
@variable(model, x[1:5], Bin)
# Objective: maximize profit
@objective(model, Max, profit' * x)
# Constraint: can carry all
@constraint(model, weight' * x <= capacity)
# Solve problem using MIP solver
JuMP.optimize!(model)
if verbose
println("Objective is: ", JuMP.objective_value(model))
println("Solution is:")
for i in 1:5
print("x[$i] = ", JuMP.value(x[i]))
println(", p[$i]/w[$i] = ", profit[i] / weight[i])
end
end
@test JuMP.termination_status(model) == MOI.OPTIMAL
@test JuMP.primal_status(model) == MOI.FEASIBLE_POINT
@test JuMP.objective_value(model) == 16.0
Gurobi.free_env(env)
end

Then, if I evaluate the function example_knapsack a few times, I get warnings:

for i in 1:3
println("Round $i")
println("-------")
example_knapsack(verbose=false)
println("-------")
end
BTW: I am using Julia 1.1.1, JuMP v0.19.1, and Gurobi v0.6.0.

I usually make the Gurobi environment a global constant:

using JuMP, Gurobi, Test
const MOI = JuMP.MathOptInterface
const env = Gurobi.Env()
"""
example_knapsack(; verbose = true)
Formulate and solve a simple knapsack problem:
max sum(p_j x_j)
st sum(w_j x_j) <= C
x binary
"""
function example_knapsack(; verbose=true)
profit = [5, 3, 2, 7, 4]
weight = [2, 8, 4, 2, 5]
capacity = 10
model = Model(with_optimizer(Gurobi.Optimizer, env))
@variable(model, x[1:5], Bin)
# Objective: maximize profit
@objective(model, Max, profit' * x)
# Constraint: can carry all
@constraint(model, weight' * x <= capacity)
# Solve problem using MIP solver
JuMP.optimize!(model)
if verbose
println("Objective is: ", JuMP.objective_value(model))
println("Solution is:")
for i in 1:5
print("x[$i] = ", JuMP.value(x[i]))
println(", p[$i]/w[$i] = ", profit[i] / weight[i])
end
end
@test JuMP.termination_status(model) == MOI.OPTIMAL
@test JuMP.primal_status(model) == MOI.FEASIBLE_POINT
@test JuMP.objective_value(model) == 16.0
end

The warnings are probably coming from Julia calling the Gurobi finalizers during garbage collection.
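To see why those warnings appear, note that Julia finalizers run during garbage collection, not when a variable goes out of scope. Below is a minimal, Gurobi-free sketch of that ordering problem; FakeEnv and FakeModel are hypothetical stand-ins, not part of Gurobi.jl:

# Run as a script; in the REPL the discarded FakeModel may be kept alive by ans.
mutable struct FakeEnv
    freed::Bool
    finalized_after_free::Bool
    FakeEnv() = new(false, false)
end

mutable struct FakeModel
    env::FakeEnv
    function FakeModel(env::FakeEnv)
        m = new(env)
        # The finalizer runs at some later GC pass, not when m becomes unreachable.
        finalizer(m) do obj
            obj.env.finalized_after_free = obj.env.freed
        end
        return m
    end
end

env = FakeEnv()
FakeModel(env)        # the model becomes unreachable right away...
env.freed = true      # ...but we "free" the env before its finalizer has run
GC.gc()               # force a collection so the finalizer executes now
println(env.finalized_after_free)  # true: the model outlived the manual free
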
I am having a similar problem releasing a Gurobi license after a model has been optimized. The code below:

produces the following error trace:

So the issue is calling free_env while the underlying Gurobi model still exists. You really need something like:

import Gurobi
import JuMP
function solve_problem(env)
m = JuMP.Model(JuMP.with_optimizer(Gurobi.Optimizer, env))
JuMP.@variable(m, x >= 0)
JuMP.@objective(m, Min, x)
JuMP.optimize!(m)
end
function solve_and_cleanup()
env = Gurobi.Env()
solve_problem(env)
GC.gc()
Gurobi.free_env(env)
end
solve_and_cleanup()

Thanks, that solves the problem! So if I understand this correctly, once the model and the optimizer become associated, there's no way to free the Gurobi env while the model is still in scope? Just for reference, I get the same error when I first define the Model without an optimizer and then create one during the call to optimize!.
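For reference, a minimal sketch of that pattern (assuming the JuMP 0.19-style optimize!(model, optimizer_factory) method), which runs into the same problem:

using JuMP, Gurobi

env = Gurobi.Env()
model = Model()                # no optimizer attached yet
@variable(model, x >= 0)
@objective(model, Min, x)
# Attaching the optimizer only at solve time still creates an underlying
# Gurobi model that references env...
optimize!(model, with_optimizer(Gurobi.Optimizer, env))
# ...so freeing the environment here, while model (and hence its Gurobi
# backend) is still in scope, raises the same error.
Gurobi.free_env(env)
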
There is one subtlety. It's the Gurobi model in scope, not the JuMP model. The Gurobi model is the underlying C object, which users should never really interact with. You could try:

using JuMP, Gurobi
model = Model()
env = Gurobi.Env()
set_optimizer(model, with_optimizer(Gurobi.Optimizer, env))
optimize!(model)
MOIU.drop_optimizer(model)
GC.gc()
Gurobi.free_env(env)

A better question is why you want to repeatedly detach and attach a solver?

Mostly a lack of familiarity with the JuMP ecosystem, to be honest. Using Pyomo, once a model is solved, all of the variable values are automatically loaded into the Pyomo model and the Gurobi model is no longer needed, so it auto-terminates. This comes in handy in situations where there's a cost for keeping a Gurobi model alive, like when using their cloud. But now that I play around with …

Ah, that makes sense then. In this case we would recommend that you build the JuMP model, solve, grab everything you need, and then discard the model, calling Gurobi.free_env(env) once you are done.
Yes, once you call optimize!, the underlying Gurobi model is created, and the environment cannot be freed until that model has been finalized.
We take a considerably different approach from Pyomo. The closest thing Pyomo has to JuMP is their "persistent interfaces."
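A sketch of that build-solve-extract-discard workflow (the function and variable names here are illustrative, not from this thread):

using JuMP, Gurobi

function solve_and_extract(env)
    model = Model(with_optimizer(Gurobi.Optimizer, env))
    @variable(model, 0 <= x <= 1)
    @objective(model, Max, x)
    optimize!(model)
    # Copy everything you need into plain Julia values before the model
    # goes out of scope.
    return objective_value(model), value(x)
end

env = Gurobi.Env()
obj, xval = solve_and_extract(env)
GC.gc()                 # let the now-unreachable Gurobi model be finalized
Gurobi.free_env(env)    # then release the license
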
See my answer to #110 (comment):

using JuMP, Gurobi
env = Gurobi.Env()
model = Model(() -> Gurobi.Optimizer(env))
set_silent(model)
@variable(model, x)
@objective(model, Min, x^2)
optimize!(model)
# To ensure `env` is finalized, you must also finalize any models using env.
# If you use `Gurobi.Optimizer`, the order of finalizing `.model` and `env`
# doesn't matter. If you use the C API directly, `.model` _MUST_ be
# finalized first.
finalize(backend(model).optimizer.model)
finalize(env)

Or if in direct-mode:

using JuMP, Gurobi
env = Gurobi.Env()
model = direct_model(Gurobi.Optimizer(env))
set_silent(model)
@variable(model, x)
@objective(model, Min, x^2)
optimize!(model)
# To ensure `env` is finalized, you must also finalize any models using env.
# If you use `Gurobi.Optimizer`, the order of finalizing `.model` and `env`
# doesn't matter. If you use the C API directly, `.model` _MUST_ be
# finalized first.
finalize(backend(model))
finalize(env)

Note that you should only call finalize like this if you need the license released immediately; otherwise Julia will finalize these objects automatically when they are garbage collected.

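A related pattern, shown here as an illustrative sketch built from the snippets above: reuse a single environment across several models so that only one license token is checked out, then finalize everything at the end.

using JuMP, Gurobi

env = Gurobi.Env()          # one license checkout for the whole session
results = Float64[]
for rhs in 1.0:3.0
    model = Model(() -> Gurobi.Optimizer(env))
    set_silent(model)
    @variable(model, x >= rhs)
    @objective(model, Min, x)
    optimize!(model)
    push!(results, objective_value(model))
    # Finalize this model's Gurobi backend before moving on.
    finalize(backend(model).optimizer.model)
end
finalize(env)               # release the license once all models are finalized
println(results)
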
When using a Gurobi token server, I often need to make sure to release my license after using it. With Python I would run:
How would I achieve the same using JuMP?