fix & update doc (#639)
guimarqu authored Mar 22, 2022
1 parent 11d8930 commit cd8a3ed
Showing 9 changed files with 319 additions and 176 deletions.
2 changes: 1 addition & 1 deletion docs/Project.toml
@@ -10,4 +10,4 @@ MathOptInterface = "b8f27783-ece8-5eb3-8dc8-9495eed66fee"
[compat]
Documenter = "~0.27"
MathOptInterface = "~0.9"
GLPK = "0.14.4"
GLPK = "0.14"
12 changes: 11 additions & 1 deletion docs/make.jl
@@ -1,12 +1,18 @@
using Documenter, Coluna, Literate, BlockDecomposition

TUTORIAL_GAP = joinpath(@__DIR__, "src", "start", "start.jl")
TUTORIAL_CUTS = joinpath(@__DIR__, "src", "start", "cuts.jl")
TUTORIAL_PRICING = joinpath(@__DIR__, "src", "start", "pricing.jl")
TUTORIAL_CALLBACKS = joinpath(@__DIR__, "src", "man", "callbacks.jl")

OUTPUT_GAP = joinpath(@__DIR__, "src", "start")
OUTPUT_CUTS = joinpath(@__DIR__, "src", "start")
OUTPUT_PRICING = joinpath(@__DIR__, "src", "start")
OUTPUT_CALLBACKS = joinpath(@__DIR__, "src", "man")

Literate.markdown(TUTORIAL_GAP, OUTPUT_GAP, documenter=true)
Literate.markdown(TUTORIAL_CUTS, OUTPUT_CUTS, documenter=true)
Literate.markdown(TUTORIAL_PRICING, OUTPUT_PRICING, documenter=true)
Literate.markdown(TUTORIAL_CALLBACKS, OUTPUT_CALLBACKS, documenter=true)

makedocs(
@@ -21,7 +27,11 @@ makedocs(
strict = false,
pages = Any[
"Introduction" => "index.md",
"Getting started" => joinpath("start", "start.md"),
"Getting started" => Any[
"Column generation" => joinpath("start", "start.md"),
"Valid inequalities" => joinpath("start", "cuts.md"),
"Pricing callback" => joinpath("start", "pricing.md"),
],
"Manual" => Any[
"Decomposition" => joinpath("man", "decomposition.md"),
"Configuration" => joinpath("man", "config.md"),
22 changes: 1 addition & 21 deletions docs/src/index.md
@@ -34,7 +34,7 @@ Coluna offers a "black-box" implementation of the branch-and-cut-and-price algorithm

## Installation

Coluna is a package for [Julia 1.0+](https://docs.julialang.org/en/v1/manual/documentation/index.html)
Coluna is a package for [Julia 1.6+](https://docs.julialang.org/en/v1/manual/documentation/index.html).

It requires JuMP to model the problem, BlockDecomposition to define the decomposition,
and a [MIP solver supported by MathOptInterface](https://jump.dev/JuMP.jl/stable/installation/#Getting-Solvers-1) to optimize the master and the subproblems.
@@ -45,26 +45,6 @@ You can install Coluna and its dependencies through the package manager of Julia
] add Coluna
```

## Contributions

We welcome all contributions that help us to improve Coluna. You can suggest ways to enhance the package by opening an issue via the [GitHub issues tracker](https://github.com/atoptima/Coluna.jl/issues).

Once the suggestion is approved, you can open a Pull Request (PR) with the implementation of your suggestion.

Before requesting the review, make sure that your code follows the style guide and passes tests.

Do not forget to update the docstrings and the tests when necessary.
It is very important to keep the goal of the PR clear so that the review can be fast.
We may therefore close a PR that fixes two or more unrelated issues.

Coluna style follows the [blue style guide](https://github.com/invenia/BlueStyle) for Julia, amended by the following naming instruction:

> Names of variables and functions are treated equally. Use names that express what the variable/function does. Either use:
> - `lowercasenospace` when the name is composed of three words or fewer with no ambiguity on word separation.
> - `snake_case` otherwise.
Note that the application of the style guide is a work in progress.
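As an illustration, here is a minimal sketch of the convention. The names below are hypothetical examples, not actual Coluna identifiers:

```julia
# Hypothetical names illustrating the naming convention; not part of the Coluna API.
getbound(node) = node                      # "get bound": ≤ 3 words, unambiguous → lowercasenospace
compute_reduced_cost(c, π, a) = c - π * a  # longer / ambiguous name → snake_case

compute_reduced_cost(10.0, 2.0, 3.0)  # → 4.0
```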

## Extra dependencies

- CPLEX.jl >= 0.7.0 (cf [issue 446](https://github.com/atoptima/Coluna.jl/issues/446))
105 changes: 1 addition & 104 deletions docs/src/man/callbacks.jl
@@ -23,110 +23,7 @@
# This callback is useful when you know an efficient algorithm to solve the subproblems,
# i.e. an algorithm better than solving the subproblem with a MIP solver.

# First, we load the packages and define aliases:

using Coluna, BlockDecomposition, JuMP, MathOptInterface, GLPK;
const BD = BlockDecomposition;
const MOI = MathOptInterface;

# Let us see an example with the following generalized assignment problem:

nb_machines = 4;
nb_jobs = 30;

c = [12.7 22.5 8.9 20.8 13.6 12.4 24.8 19.1 11.5 17.4 24.7 6.8 21.7 14.3 10.5 15.2 14.3 12.6 9.2 20.8 11.7 17.3 9.2 20.3 11.4 6.2 13.8 10.0 20.9 20.6; 19.1 24.8 24.4 23.6 16.1 20.6 15.0 9.5 7.9 11.3 22.6 8.0 21.5 14.7 23.2 19.7 19.5 7.2 6.4 23.2 8.1 13.6 24.6 15.6 22.3 8.8 19.1 18.4 22.9 8.0; 18.6 14.1 22.7 9.9 24.2 24.5 20.8 12.9 17.7 11.9 18.7 10.1 9.1 8.9 7.7 16.6 8.3 15.9 24.3 18.6 21.1 7.5 16.8 20.9 8.9 15.2 15.7 12.7 20.8 10.4; 13.1 16.2 16.8 16.7 9.0 16.9 17.9 12.1 17.5 22.0 19.9 14.6 18.2 19.6 24.2 12.9 11.3 7.5 6.5 11.3 7.8 13.8 20.7 16.8 23.6 19.1 16.8 19.3 12.5 11.0];
w = [61 70 57 82 51 74 98 64 86 80 69 79 60 76 78 71 50 99 92 83 53 91 68 61 63 97 91 77 68 80; 50 57 61 83 81 79 63 99 82 59 83 91 59 99 91 75 66 100 69 60 87 98 78 62 90 89 67 87 65 100; 91 81 66 63 59 81 87 90 65 55 57 68 92 91 86 74 80 89 95 57 55 96 77 60 55 57 56 67 81 52; 62 79 73 60 75 66 68 99 69 60 56 100 67 68 54 66 50 56 70 56 72 62 85 70 100 57 96 69 65 50];
Q = [1020 1460 1530 1190];

# with the following Coluna configuration

coluna = optimizer_with_attributes(
Coluna.Optimizer,
"params" => Coluna.Params(
solver = Coluna.Algorithm.TreeSearchAlgorithm() # default BCP
),
"default_optimizer" => GLPK.Optimizer # GLPK for the master & the subproblems
);

# for which the JuMP model takes the form:

model = BlockModel(coluna);

@axis(M, 1:nb_machines);
J = 1:nb_jobs;

@variable(model, x[m in M, j in J], Bin);
@constraint(model, cov[j in J], sum(x[m,j] for m in M) == 1);
@objective(model, Min, sum(c[m,j]*x[m,j] for m in M, j in J));
@dantzig_wolfe_decomposition(model, dwdec, M);

# where, as you can see, we omitted the knapsack constraints.
# These constraints are implicitly defined by the algorithm called in the pricing callback.
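For a given machine `m`, the implicitly-defined pricing subproblem that the callback must solve is the knapsack problem below. This is a sketch using the notation of the model above, where `c̄` denotes the reduced costs computed by column generation:

```math
\min \sum_{j \in J} \bar{c}_{mj}\, x_{mj}
\quad \text{s.t.} \quad \sum_{j \in J} w_{mj}\, x_{mj} \le Q_m,
\qquad x_{mj} \in \{0,1\} \;\; \forall j \in J
```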

# Let's use the knapsack algorithms from the unregistered package
# [KnapsackLib](https://github.com/rafaelmartinelli/KnapsackLib.jl) to solve the
# knapsack subproblems.

using KnapsackLib;

# The pricing callback is a function.
# It takes as argument `cbdata` which is a data structure
# that allows the user to interact with Coluna within the pricing callback.

function my_pricing_callback(cbdata)
## Retrieve the index of the subproblem (it will be one of the values in M)
cur_machine = BD.callback_spid(cbdata, model)

## Retrieve reduced costs of subproblem variables
red_costs = [BD.callback_reduced_cost(cbdata, x[cur_machine, j]) for j in J]

## Keep jobs with negative reduced costs
selected_jobs = Int[]
selected_costs = Int[]
for j in J
if red_costs[j] <= 0.0
push!(selected_jobs, j)
## scale reduced costs to int
push!(selected_costs, -round(Int, 10000 * red_costs[j]))
end
end

if sum(w[cur_machine, selected_jobs]) <= Q[cur_machine]
## Don't need to run the algorithm,
## all jobs with negative reduced cost are assigned to the machine
jobs_assigned_to_cur_machine = selected_jobs
else
## Run the algorithm
items = [KnapItem(w,c) for (w,c) in zip(w[cur_machine, selected_jobs], selected_costs)]
data = KnapData(Q[cur_machine], items)

## Solve the knapsack with the algorithm from KnapsackLib
_, jobs_assigned_to_cur_machine = solveKnapExpCore(data)
end

## Create the solution (send only variables with non-zero values)
sol_vars = [x[cur_machine, j] for j in jobs_assigned_to_cur_machine]
sol_vals = [1.0 for _ in jobs_assigned_to_cur_machine]
sol_cost = sum(red_costs[j] for j in jobs_assigned_to_cur_machine)

## Submit the solution to the subproblem to Coluna
MOI.submit(model, BD.PricingSolution(cbdata), sol_cost, sol_vars, sol_vals)
return
end
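The integer scaling used inside the callback can be checked in isolation. Below is a self-contained sketch with made-up reduced costs (the values are illustrative, not from the model above):

```julia
# Made-up reduced costs for illustration.
red_costs_demo = [-0.5, 0.2, -1.25, 0.0]

# Keep jobs with non-positive reduced cost, as in the callback above.
selected = [j for j in eachindex(red_costs_demo) if red_costs_demo[j] <= 0.0]

# Scale reduced costs to non-negative integer profits for the knapsack solver.
profits = [-round(Int, 10000 * red_costs_demo[j]) for j in selected]
```

Here `selected` is `[1, 3, 4]` and `profits` is `[5000, 12500, 0]`: the more negative the reduced cost, the larger the integer profit.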

# The pricing callback is provided to Coluna using the keyword `solver` in the method
# `specify!`.

subproblems = BD.getsubproblems(dwdec);
BD.specify!.(subproblems, lower_multiplicity = 0, solver = my_pricing_callback);

# You can then optimize:

optimize!(model);

# and retrieve information you need as usual:

getobjectivevalue(model)
# See the example in the manual [Pricing callback](@ref).

# # Separation callbacks

Expand Down
26 changes: 13 additions & 13 deletions docs/src/man/decomposition.md
@@ -46,19 +46,7 @@ Subproblems take the following form (here, it's the first subproblem):

where $\bar{c}$ is the reduced cost of the original variables computed by the column generation algorithm.

## Benders (alpha)

Let's consider the following coefficient matrix that has a block diagonal structure
in gray and some linking variables in blue:

![Benders decomposition](../static/bdec.png)

Once the complicating variables are fixed, the blocks can be solved
independently.

This decomposition is an alpha feature.

## Identical subproblems (alpha)
## Dantzig-Wolfe with identical subproblems (alpha)

When some subproblems are identical (same coefficient matrix and rhs),
you can avoid solving all of them at each iteration by defining only one subproblem and
@@ -170,6 +158,18 @@ for t in T
end
```

## Benders (alpha)

Let's consider the following coefficient matrix that has a block diagonal structure
in gray and some linking variables in blue:

![Benders decomposition](../static/bdec.png)

Once the complicating variables are fixed, the blocks can be solved
independently.

This decomposition is an alpha feature.

# BlockDecomposition

The index-set of the subproblems is declared through a [`BlockDecomposition.@axis`](@ref).
123 changes: 123 additions & 0 deletions docs/src/start/cuts.jl
@@ -0,0 +1,123 @@
# # Valid inequalities

# Now let us consider a variant of the Generalized Assignment Problem in which we have to
# pay `f[m]` to use machine `m`.

# Consider the following instance:

J = 1:15
M = 1:6
c = [10.13 15.6 15.54 13.41 17.08 17.78 16.36; 19.58 16.83 10.75 15.8 14.89 11.91 15.01; 14.23 17.36 16.05 14.49 18.96 13.58 12.52; 16.47 16.38 18.14 15.46 11.64 10.56 12.65; 17.87 18.25 13.12 19.16 16.33 11.28 17.71; 11.09 16.76 15.5 12.08 13.06 19.03 14.89; 15.19 13.86 16.08 19.47 15.79 13.05 11.06; 10.79 18.96 16.11 19.78 15.57 12.68 14.76; 12.03 19.03 16.01 14.46 12.77 19.57 19.83; 14.48 11.75 16.97 19.95 18.32 13.43 12.7; 10.01 10.8 16.22 16.69 17.26 11.73 13.55; 17.84 11.91 10.47 10.91 17.87 14.63 11.52; 17.93 11.24 12.54 10.48 18.69 19.58 17.67; 16.05 15.83 10.31 19.81 18.72 10.2 14.95; 17.1 15.53 12.34 18.34 13.64 12.3 12.79];
w = [5, 4, 5, 6, 8, 9, 5, 8, 10, 7, 7, 7, 7, 7, 8];
Q = [ 25, 24, 31, 28, 24, 30];
f = [105, 103, 109, 112, 100, 104];

# We load the dependencies:

using JuMP, BlockDecomposition, Coluna, GLPK;

# We parametrize the solver.
# We solve only the root node of the branch-and-bound tree and we use a column and cut
# generation algorithm to conquer (optimize) this node.

coluna = JuMP.optimizer_with_attributes(
Coluna.Optimizer,
"params" => Coluna.Params(
solver = Coluna.Algorithm.TreeSearchAlgorithm(
conqueralg = Coluna.Algorithm.ColCutGenConquer(
max_nb_cut_rounds = 20
),
branchingtreefile = "tree2.dot",
maxnumnodes = 1
)
),
"default_optimizer" => GLPK.Optimizer
);

# ## Column generation

# We write the model:

model = BlockModel(coluna; direct_model = true);
@axis(M_axis, M)
@variable(model, x[j in J, m in M_axis], Bin);
@variable(model, y[m in M_axis], Bin);
@constraint(model, setpartitioning[j in J], sum(x[j,m] for m in M_axis) == 1);
@constraint(model, knp[m in M_axis], sum(w[j]*x[j,m] for j in J) <= Q[m] * y[m]);
@objective(model, Min, sum(c[j,m] * x[j,m] for m in M_axis, j in J) + sum(f[m] * y[m] for m in M_axis));

@dantzig_wolfe_decomposition(model, dec, M_axis);
sp = getsubproblems(dec);
specify!.(sp, lower_multiplicity = 0);

# We optimize:

optimize!(model)

# The final dual bound is:

db1 = objective_bound(model)

# ## Strengthen with valid inequalities

# Let `H` be the set of configurations of open machines (`h[m] = 1` if machine `m` is open; `0` otherwise)
# such that all jobs can be assigned: `sum(h'Q) >= sum(w)`,
# i.e. the total capacity of the open machines must exceed the total weight of the jobs.

H = Vector{Int}[]
for h in digits.(1:(2^length(M) - 1), base=2, pad=length(M))
if sum(h'Q) >= sum(w)
push!(H, h)
end
end
H
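To see how the `digits`-based enumeration works, here is a self-contained mini-instance (the three machines and total job weight below are made up for illustration):

```julia
# Hypothetical mini-instance: 3 machines with capacities Q_demo,
# and jobs whose weights sum to 10.
Q_demo = [6, 5, 4]
total_weight = 10

# Each integer 1..2^3-1 encodes a subset of open machines; bit m is machine m.
H_demo = Vector{Int}[]
for h in digits.(1:(2^3 - 1), base = 2, pad = 3)
    if sum(h .* Q_demo) >= total_weight  # enough capacity to assign all jobs
        push!(H_demo, h)
    end
end
H_demo  # → [[1, 1, 0], [1, 0, 1], [1, 1, 1]]
```

Only the subsets of machines with at least 10 units of capacity survive the filter.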

# Let `ȳ` be the solution to the linear relaxation of the problem.
# Let us try to express `ȳ` as a linear combination of the configurations.
# If `ȳ ∉ conv(H)`, we can derive a cut because the optimal integer solution to the problem
# uses one of the configurations of `H`.

# We need MathOptInterface to define the cut callback:

using MathOptInterface

# The separation algorithm looks for non-negative coefficients `χ[k]`, `k = 1:length(H)`, that
# maximize `sum(χ[k] for k in 1:length(H))` subject to `sum(χ[k] * h for (k, h) in enumerate(H)) <= ȳ`.
# If the objective value is less than 1, we must add a cut.

# Since the separation algorithm is a linear program, strong duality applies.
# So we separate these cuts using the dual.
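In compact form, the separation primal and its dual read as follows (a sketch, where `h^k` denotes the `k`-th configuration in `H`):

```math
\begin{aligned}
\text{(primal)}\quad & \max \sum_{k} \chi_k \quad \text{s.t.} \quad \sum_{k} \chi_k\, h^k \le \bar{y}, \;\; \chi \ge 0,\\
\text{(dual)}\quad & \min \bar{y}^\top \psi \quad \text{s.t.} \quad \psi^\top h^k \ge 1 \;\; \forall k, \;\; \psi \ge 0.
\end{aligned}
```

The dual has one variable `ψ[m]` per primal constraint (one per machine) and one constraint per configuration, which is exactly the model built below.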

fc_sep_m = Model(GLPK.Optimizer)
@variable(fc_sep_m, ψ[m in M] >= 0) # one variable for each constraint
@constraint(fc_sep_m, config_dual[h in H], ψ'h >= 1) # one constraint for each χ[k]
MathOptInterface.set(fc_sep_m, MathOptInterface.Silent(), true)

# The dual objective is `min ȳ'ψ`, which by strong duality equals `max sum(χ[k] for k in 1:length(H))`.
# Let `ψ*` be an optimal solution to the dual. If `ȳ'ψ* < 1`, then `ψ*'y >= 1` is a valid inequality.

function fenchel_cuts_separation(cbdata)
println("Fenchel cuts separation callback...")
ȳ = [callback_value(cbdata, y[m]) for m in M_axis]
@objective(fc_sep_m, Min, ȳ'ψ) # update objective
optimize!(fc_sep_m)
if objective_value(fc_sep_m) < 1
con = @build_constraint(value.(ψ)'y >= 1) # valid inequality.
MathOptInterface.submit(model, MathOptInterface.UserCut(cbdata), con)
end
end

MathOptInterface.set(model, MathOptInterface.UserCutCallback(), fenchel_cuts_separation);

# We optimize:

optimize!(model)

# Valid inequalities significantly improve the previous dual bound:

db2 = objective_bound(model)


# For comparison, the dual bound obtained before adding the valid inequalities was:

db1


