[docs] various improvements #664

Merged 12 commits on May 17, 2024
22 changes: 11 additions & 11 deletions README.md
@@ -49,7 +49,7 @@ problem = minimize(sumsquares(A * x - b), [x >= 0])
solve!(problem, SCS.Optimizer)

# Check the status of the problem
-problem.status # :Optimal, :Infeasible, :Unbounded etc.
+problem.status

# Get the optimal value
problem.optval
@@ -102,21 +102,21 @@ settings:
tol_feas = 1.0e-08, tol_gap_abs = 1.0e-08, tol_gap_rel = 1.0e-08,
static reg : on, ϵ1 = 1.0e-08, ϵ2 = 4.9e-32
dynamic reg: on, ϵ = 1.0e-13, δ = 2.0e-07
iter refine: on, reltol = 1.0e-13, abstol = 1.0e-12,
max iter = 10, stop ratio = 5.0
equilibrate: on, min_scale = 1.0e-04, max_scale = 1.0e+04
max iter = 10

iter pcost dcost gap pres dres k/t μ step
---------------------------------------------------------------------------------------------
0 0.0000e+00 4.4359e-01 4.44e-01 8.68e-01 8.16e-02 1.00e+00 1.00e+00 ------
1 2.2037e+00 2.6563e+00 2.05e-01 7.34e-02 6.03e-03 5.44e-01 1.01e-01 9.33e-01
2 2.5276e+00 2.6331e+00 4.17e-02 1.43e-02 1.26e-03 1.27e-01 2.26e-02 7.84e-01
3 2.6758e+00 2.7129e+00 1.39e-02 4.09e-03 3.42e-04 4.35e-02 6.00e-03 7.84e-01
4 2.7167e+00 2.7178e+00 3.90e-04 1.18e-04 9.82e-06 1.25e-03 1.72e-04 9.80e-01
5 2.7182e+00 2.7183e+00 9.60e-06 3.39e-06 2.82e-07 3.15e-05 4.95e-06 9.80e-01
6 2.7183e+00 2.7183e+00 1.92e-07 6.74e-08 5.62e-09 6.29e-07 9.84e-08 9.80e-01
7 2.7183e+00 2.7183e+00 4.70e-09 1.94e-09 1.61e-10 1.59e-08 2.83e-09 9.80e-01
---------------------------------------------------------------------------------------------
Terminated with status = solved
solve time = 941μs
1 change: 1 addition & 0 deletions docs/make.jl
@@ -105,6 +105,7 @@ Documenter.makedocs(
"Home" => "index.md",
"introduction/installation.md",
"introduction/quick_tutorial.md",
+"introduction/dcp.md",
"introduction/faq.md",
],
"Examples" => examples_nav,
@@ -46,7 +46,7 @@ problem = minimize(
# Solve the problem by calling `solve!`
solve!(problem, SCS.Optimizer; silent_solver = true)

-println("problem status is ", problem.status) # :Optimal, :Infeasible, :Unbounded etc.
+println("problem status is ", problem.status)
println("optimal value is ", problem.optval)

#-
55 changes: 0 additions & 55 deletions docs/src/index.md
@@ -47,58 +47,3 @@ you know where to look for certain things.
* The **Developer docs** section contains information for people contributing to
Convex development. Don't worry about this section if you are using Convex to
formulate and solve problems as a user.

## Extended formulations and the DCP ruleset

Convex.jl works by transforming the problem (which possibly has nonsmooth,
nonlinear constructions like the nuclear norm, the log determinant, and so
forth) into a linear optimization problem subject to conic constraints. This
reformulation often involves adding auxiliary variables, and is called an
"extended formulation," since the original problem has been extended with
additional variables. These formulations rely on the problem being modeled by
combining Convex.jl's "atoms" or primitives according to certain rules which
ensure convexity, called the
[disciplined convex programming (DCP) ruleset](http://cvxr.com/cvx/doc/dcp.html).
If these atoms are combined in a way that does not ensure convexity, the
extended formulations are often invalid. As a simple example, consider the problem

```julia
model = minimize(abs(x), x >= 1, x <= 2)
```

The optimum occurs at `x=1`, but let us imagine we want to solve this problem
via Convex.jl using a linear programming (LP) solver.

Since `abs` is a nonlinear function, we need to reformulate the problem to pass
it to the LP solver. We do this by introducing an auxiliary variable `t` and
instead solving:
```julia
model = minimize(t, x >= 1, x <= 2, t >= x, t >= -x)
```
That is, we add the constraints `t >= x` and `t >= -x`, and replace `abs(x)` by
`t`. Since we are minimizing over `t` and the smallest possible `t` satisfying
these constraints is the absolute value of `x`, we get the right answer. This
reformulation worked because we were minimizing `abs(x)`, and that is a valid
way to use the primitive `abs`.

If we were maximizing `abs`, Convex.jl would error with

> Problem not DCP compliant: objective is not DCP

Why? Well, let us consider the same reformulation for a maximization problem.
The original problem is:
```julia
model = maximize(abs(x), x >= 1, x <= 2)
```
and the maximum is 2, obtained at `x = 2`. If we do the same reformulation as
above, however, we arrive at the problem:
```julia
maximize(t, x >= 1, x <= 2, t >= x, t >= -x)
```
whose solution is infinity.

In other words, we got the wrong answer by using the reformulation, since the
extended formulation was only valid for a minimization problem. Convex.jl always
performs these reformulations, but they are only guaranteed to be valid when the
DCP ruleset is followed. Therefore, Convex.jl programmatically checks
whether or not these rules were satisfied and errors if they were not.
88 changes: 88 additions & 0 deletions docs/src/introduction/dcp.md
@@ -0,0 +1,88 @@
# Extended formulations and the DCP ruleset

Convex.jl works by transforming the problem (which possibly has nonsmooth,
nonlinear constructions like the nuclear norm, the log determinant, and so
forth) into a linear optimization problem subject to conic constraints.

The transformed problem often involves adding auxiliary variables, and it is
called an "extended formulation," since the original problem has been extended
with additional variables.

Creating an extended formulation relies on the problem being modeled by
combining Convex.jl's "atoms" or primitives according to certain rules which
ensure convexity, called the
[disciplined convex programming (DCP) ruleset](http://cvxr.com/cvx/doc/dcp.html).
If these atoms are combined in a way that does not ensure convexity, the
extended formulations are often invalid.

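The DCP rules work by assigning a curvature to every sub-expression and
checking that each atom is applied in a curvature-compatible way. As a minimal
sketch of how to inspect this yourself (assuming Convex.jl's `vexity` helper;
the printed type names may vary between versions):

```julia
using Convex

x = Variable()

# abs(x) is convex, so it is safe to minimize but not to maximize:
vexity(abs(x))             # ConvexVexity()

# Negation flips the curvature, so -abs(x) is concave:
vexity(-abs(x))            # ConcaveVexity()

# Convex plus affine stays convex under the DCP composition rules:
vexity(square(x) - 2 * x)  # ConvexVexity()
```
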
## A valid formulation

As a simple example, consider the problem:
```@repl
using Convex, SCS
x = Variable();
model_min = minimize(abs(x), [x >= 1, x <= 2]);
solve!(model_min, SCS.Optimizer; silent_solver = true)
x.value
```

The optimum occurs at `x = 1`, but let us imagine we want to solve this problem
via Convex.jl using a linear programming (LP) solver.

Since `abs` is a nonlinear function, we need to reformulate the problem to pass
it to the LP solver. We do this by introducing an auxiliary variable `t` and
instead solving:
```@repl
using Convex, SCS
x = Variable();
t = Variable();
model_min_extended = minimize(t, [x >= 1, x <= 2, t >= x, t >= -x]);
solve!(model_min_extended, SCS.Optimizer; silent_solver = true)
x.value
```
That is, we add the constraints `t >= x` and `t >= -x`, and replace `abs(x)` by
`t`. Since we are minimizing over `t` and the smallest possible `t` satisfying
these constraints is the absolute value of `x`, we get the right answer. This
reformulation worked because we were minimizing `abs(x)`, and that is a valid
way to use the primitive `abs`.

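Written out, the two auxiliary constraints pin `t` to the absolute value at
the optimum:

```math
t \ge x \quad \text{and} \quad t \ge -x
\quad \Longleftrightarrow \quad
t \ge \max(x, -x) = |x|,
```

so the smallest feasible `t` is exactly `|x|`, and the reformulated problem
has the same optimal value as the original.
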
## An invalid formulation

The reformulation of `abs(x)` works only if we are minimizing `t`.

Why? Well, let us consider the same reformulation for a maximization problem.
The original problem is:
```@repl
using Convex
x = Variable();
model_max = maximize(abs(x), [x >= 1, x <= 2])
```
This time, the `problem is DCP` field of the printed problem reports `false`.
If we attempt to solve the problem,
an error is thrown:
```julia
julia> solve!(model_max, SCS.Optimizer; silent_solver = true)
┌ Warning: Problem not DCP compliant: objective is not DCP
└ @ Convex ~/.julia/dev/Convex/src/problems.jl:73
ERROR: DCPViolationError: Expression not DCP compliant. This either means that your problem is not convex, or that we could not prove it was convex using the rules of disciplined convex programming. For a list of supported operations, see https://jump.dev/Convex.jl/stable/operations/. For help writing your problem as a disciplined convex program, please post a reproducible example on https://jump.dev/forum.
Stacktrace:
[...]
```

The error is thrown because, if we do the same reformulation as before, we
arrive at the problem:
```@repl
using Convex, SCS
x = Variable();
t = Variable();
model_max_extended = maximize(t, [x >= 1, x <= 2, t >= x, t >= -x]);
solve!(model_max_extended, SCS.Optimizer; silent_solver = true)
```
whose solution is unbounded.

In other words, we can get the wrong answer by using the extended reformulation,
because the extended formulation was only valid for a minimization problem.

Convex.jl always creates these extended formulations, but because they are
only guaranteed to be valid when the DCP ruleset is followed, Convex.jl
programmatically checks whether or not the DCP rules were satisfied and errors
if they were not.
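
For this particular example there is also a DCP-compliant alternative: every
feasible `x` satisfies `x >= 1 > 0`, so `abs(x)` coincides with the affine
expression `x`, which may be maximized directly. A sketch of that rewrite
(specific to this feasible set, not a general recipe):

```julia
using Convex, SCS

x = Variable()

# On the feasible set [1, 2], abs(x) == x, and affine expressions
# may be maximized under the DCP rules.
model_max_ok = maximize(x, [x >= 1, x <= 2])
solve!(model_max_ok, SCS.Optimizer; silent_solver = true)
model_max_ok.optval  # ≈ 2.0, attained at x = 2
```
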
48 changes: 25 additions & 23 deletions docs/src/introduction/faq.md
@@ -2,16 +2,17 @@

## Where can I get help?

-For usage questions, please contact us via the
-[Julia Discourse](https://discourse.julialang.org/c/domain/opt). If you're
-running into bugs or have feature requests, please use the [GitHub Issue
-Tracker](https://github.com/jump-dev/Convex.jl/issues).
+For usage questions, please start a new post on the
+[Julia Discourse](https://discourse.julialang.org/c/domain/opt).
+
+If you have a reproducible example of a bug or if you have a feature request,
+please open a [GitHub issue](https://github.com/jump-dev/Convex.jl/issues/new).

## How does Convex.jl differ from JuMP?

Convex.jl and JuMP are both modelling languages for mathematical programming
-embedded in Julia, and both interface with solvers via MathOptInterface, so many
-of the same solvers are available in both.
+embedded in Julia, and both interface with solvers via
+[MathOptInterface](https://github.com/jump-dev/MathOptInterface.jl).

Convex.jl converts problems to a standard conic form. This approach requires
(and certifies) that the problem is convex and DCP compliant, and guarantees
@@ -26,45 +27,46 @@ formulation.

For linear programming, the difference is more stylistic. JuMP's syntax is
scalar-based and similar to AMPL and GAMS making it easy and fast to create
-constraints by indexing and summation (like `sum(x[i] for i in 1:numLocation)`).
+constraints by indexing and summation (like `sum(x[i] for i in 1:n)`).

Convex.jl allows (and prioritizes) linear algebraic and functional constructions
-(like `max(x,y) <= A*z`); indexing and summation are also supported in Convex.jl,
+(like `max(x, y) <= A * z`); indexing and summation are also supported in Convex.jl,
but are somewhat slower than in JuMP.

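To make the stylistic difference concrete, here is one small
least-absolute-deviations problem written both ways (a sketch: `A` and `b` are
placeholder data, and the JuMP version spells out the `abs` reformulation by
hand):

```julia
using Convex, JuMP, SCS

A, b = randn(3, 2), randn(3)

# Convex.jl style: one vectorized expression over the whole variable.
x = Variable(2)
p = minimize(sum(abs(A * x - b)), [x >= 0])
solve!(p, SCS.Optimizer; silent_solver = true)

# JuMP style: scalar constraints built by indexing and summation.
model = Model(SCS.Optimizer)
set_silent(model)
@variable(model, y[1:2] >= 0)
@variable(model, t[1:3])
@constraint(model, [i in 1:3], t[i] >= sum(A[i, j] * y[j] for j in 1:2) - b[i])
@constraint(model, [i in 1:3], t[i] >= b[i] - sum(A[i, j] * y[j] for j in 1:2))
@objective(model, Min, sum(t))
optimize!(model)
```
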
JuMP also lets you efficiently solve a sequence of problems when new constraints
are added or when coefficients are modified, whereas Convex.jl parses the
-problem again whenever the [solve!]{.title-ref} method is called.
+problem again whenever the [solve!](@ref) method is called.

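For example, in JuMP you can modify a constraint in place and re-solve (a
sketch assuming JuMP's `set_normalized_rhs`); the Convex.jl equivalent would
be to rebuild the problem and call `solve!` again:

```julia
using JuMP, SCS

model = Model(SCS.Optimizer)
set_silent(model)
@variable(model, x)
@constraint(model, c, x >= 1)
@objective(model, Min, x)
optimize!(model)            # first solve, with x >= 1

set_normalized_rhs(c, 2.0)  # tighten the bound in place
optimize!(model)            # re-solve without rebuilding the model
```
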
## Where can I learn more about Convex Optimization?

See the freely available book [Convex Optimization](http://web.stanford.edu/~boyd/cvxbook/)
-by Boyd and Vandenberghe for general background on convex optimization. For help
-understanding the rules of Disciplined Convex Programming, we recommend this
+by Boyd and Vandenberghe for general background on convex optimization.
+
+For help understanding the rules of Disciplined Convex Programming, see the
[DCP tutorial website](http://dcp.stanford.edu/).

## Are there similar packages available in other languages?

-You might use [CVXPY](http://www.cvxpy.org) in Python, or [CVX](http://cvxr.com/)
-in Matlab.
+See [CVXPY](http://www.cvxpy.org) in Python and [CVX](http://cvxr.com/) in
+Matlab.

## How does Convex.jl work?

For a detailed discussion of how Convex.jl works, see [our paper](http://www.arxiv.org/abs/1410.4821).

## How do I cite this package?

-If you use Convex.jl for published work, we encourage you to cite the
-software using the following BibTeX citation: :
+If you use Convex.jl for published work, we encourage you to cite the software
+using the following BibTeX citation:

```
@article{convexjl,
  title = {Convex Optimization in {J}ulia},
  author = {Udell, Madeleine and Mohan, Karanveer and Zeng, David and Hong, Jenny and Diamond, Steven and Boyd, Stephen},
  year = {2014},
  journal = {SC14 Workshop on High Performance Technical Computing in Dynamic Languages},
  archivePrefix = "arXiv",
  eprint = {1410.4821},
  primaryClass = "math-oc",
}
```
7 changes: 3 additions & 4 deletions docs/src/introduction/installation.md
@@ -1,14 +1,13 @@
# Installation

-Installing Convex.jl is a one step process. Open up Julia and type:
+Install Convex.jl using the Julia package manager:
```julia
using Pkg
-Pkg.update()
Pkg.add("Convex")
```

-This does not install any solvers. If you don't have a solver installed
-already, you will want to install a solver such as [SCS](https://github.com/jump-dev/SCS.jl)
+This does not install any solvers. If you don't have a solver installed already,
+you will want to install a solver such as [SCS](https://github.com/jump-dev/SCS.jl)
by running:
```julia
Pkg.add("SCS")
19 changes: 3 additions & 16 deletions docs/src/introduction/quick_tutorial.md
@@ -15,27 +15,14 @@ with variable $x\in \mathbf{R}^{n}$, and problem data
$A \in \mathbf{R}^{m \times n}$, $b \in \mathbf{R}^{m}$.

This problem can be solved in Convex.jl as follows:
-```@example
-# Make the Convex.jl module available
+```@repl
using Convex, SCS
-
-# Generate random problem data
m = 4; n = 5
A = randn(m, n); b = randn(m)
-
-# Create a (column vector) variable of size n x 1.
x = Variable(n)
-
-# The problem is to minimize ||Ax - b||^2 subject to x >= 0
-# This can be done by: minimize(objective, constraints)
problem = minimize(sumsquares(A * x - b), [x >= 0])
-
-# Solve the problem by calling solve!
solve!(problem, SCS.Optimizer; silent_solver = true)
-
-# Check the status of the problem
-problem.status # :Optimal, :Infeasible, :Unbounded etc.
-
-# Get the optimum value
+problem.status
problem.optval
+x.value
```