From da2fde07e8c34dece502e98168ea2a3d558515fb Mon Sep 17 00:00:00 2001 From: Mohamed Tarek Date: Fri, 15 Jul 2022 07:17:37 +0400 Subject: [PATCH] More docs and tests (#145) * improve docs side bar * refactor docs and tests and add more docs * add tobs in missing places * docs improvements * rethrow err * remove NonconvexPercival before installing NOMAD * update scroll --- README.md | 72 ++++++++++++++++++++-- docs/make.jl | 3 + docs/src/algorithms/algorithms.md | 88 ++++++++++++++++++++++++--- docs/src/algorithms/auglag.md | 2 +- docs/src/algorithms/hyperopt.md | 4 +- docs/src/algorithms/ipopt.md | 4 +- docs/src/algorithms/metaheuristics.md | 56 +++++++++++++++++ docs/src/algorithms/minlp.md | 10 ++- docs/src/algorithms/mma.md | 4 +- docs/src/algorithms/mts.md | 24 ++------ docs/src/algorithms/nlopt.md | 6 +- docs/src/algorithms/nomad.md | 40 ++++++++++++ docs/src/algorithms/sdp.md | 2 +- docs/src/algorithms/surrogate.md | 2 +- docs/src/algorithms/tobs.md | 84 +++++++++++++++++++++++++ docs/src/index.md | 8 +-- docs/src/problem/problem.md | 2 +- src/Nonconvex.jl | 10 ++- test/runtests.jl | 15 ++++- 19 files changed, 377 insertions(+), 59 deletions(-) create mode 100644 docs/src/algorithms/metaheuristics.md create mode 100644 docs/src/algorithms/nomad.md create mode 100644 docs/src/algorithms/tobs.md diff --git a/README.md b/README.md index c35cbb8b..bb87378d 100644 --- a/README.md +++ b/README.md @@ -5,7 +5,67 @@ [![](https://img.shields.io/badge/docs-stable-blue.svg)](https://JuliaNonconvex.github.io/Nonconvex.jl/stable) [![](https://img.shields.io/badge/docs-dev-blue.svg)](https://JuliaNonconvex.github.io/Nonconvex.jl/dev) -`Nonconvex.jl` is an umbrella package over implementations and wrappers of a number of nonconvex constrained optimization algorithms and packages making use of automatic differentiation. Zero, first and second order methods are available. Nonlinear equality and inequality constraints as well as integer constraints are supported. 
A detailed description of all the algorithms and features available in `Nonconvex` can be found in the [documentation](https://JuliaNonconvex.github.io/Nonconvex.jl/stable). +`Nonconvex.jl` is an umbrella package over implementations and wrappers of a number of nonconvex constrained optimization algorithms and packages making use of automatic differentiation. Zero, first and second order methods are available. Nonlinear equality and inequality constraints as well as integer and nonlinear semidefinite constraints are supported. A detailed description of all the algorithms and features available in `Nonconvex` can be found in the [documentation](https://JuliaNonconvex.github.io/Nonconvex.jl/stable). + +## Algorithms + +A summary of all the algorithms available in `Nonconvex` through different packages is shown in the table below. Scroll right to see more columns and see a description of the columns below the table. + +| Algorithm name | Is meta-algorithm? | Algorithm package | Order | Finite bounds | Infinite bounds | Inequality constraints | Equality constraints | Semidefinite constraints | Integer variables | +| ------- | ----------- | ----- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | +| Method of moving asymptotes (MMA) | ❌ | `NonconvexMMA.jl` (pure Julia) or `NLopt.jl` | 1 | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | +| Primal dual interior point method | ❌ | `Ipopt.jl` | 1 or 2 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| DIviding RECTangles algorithm (DIRECT) | ❌ | `NLopt.jl` | 0 | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | +| Controlled random search (CRS) | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Multi-Level Single-Linkage (MLSL) | Limited | `NLopt.jl` | Depends on sub-solver | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| StoGo | ❌ | `NLopt.jl` | 1 | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | +| AGS | ❌ | `NLopt.jl` | 0 | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | +| Improved Stochastic Ranking Evolution Strategy (ISRES) | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| ESCH | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| 
COBYLA | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| BOBYQA | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| NEWUOA | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Principal AXIS (PRAXIS) | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Nelder Mead | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Subplex | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| CCSAQ | ❌ | `NLopt.jl` | 1 | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | +| SLSQP | ❌ | `NLopt.jl` | 1 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| TNewton | ❌ | `NLopt.jl` | 1 | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Shifted limited-memory variable-metric | ❌ | `NLopt.jl` | 1 | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Augmented Lagrangian in `NLopt` | Limited | `NLopt.jl` | Depends on sub-solver | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Augmented Lagrangian in `Percival` | ❌ | `Percival.jl` | 1 or 2 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Multiple trajectory search | ❌ | `NonconvexSearch.jl` | 0 | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | +| Branch and bound for mixed integer nonlinear programming | ❌ | `Juniper.jl` | 1 or 2 | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | +| Sequential polyhedral outer-approximations for mixed integer nonlinear programming | ❌ | `Pavito.jl` | 1 or 2 | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | +| Evolutionary centers algorithm (ECA) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Differential evolution (DE) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Particle swarm optimization (PSO) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Artificial bee colony (ABC) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Gravitational search algorithm (GSA) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Simulated annealing (SA) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Whale optimization algorithm (WOA) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Machine-coded compact genetic algorithm (MCCGA) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Genetic algorithm (GA) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| 
Nonlinear optimization with the MADS algorithm (NOMAD) | ❌ | `NOMAD.jl` | 0 | ✅ | ✅ | ✅ | Limited | ❌ | ✅ | +| Topology optimization of binary structures (TOBS) | ❌ | `NonconvexTOBS.jl` | 1 | Binary | ❌ | ✅ | ❌ | ❌ | Binary | +| Hyperband | ✅ | `Hyperopt.jl` | Depends on sub-solver | ✅ | ❌ | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | +| Random search | ✅ | `Hyperopt.jl` | Depends on sub-solver | ✅ | ❌ | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | +| Latin hypercube search | ✅ | `Hyperopt.jl` | Depends on sub-solver | ✅ | ❌ | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | +| Surrogate assisted optimization | ✅ | `NonconvexBayesian.jl` | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | +| Log barrier method for nonlinear semidefinite constraint handling | ✅ | `NonconvexSemidefinite.jl` | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | ✅ | Depends on sub-solver | + + +The following is an explanation of all the columns in the table: +- Algorithm name. This is the name of the algorithm and/or its acronym. Some algorithms have multiple variants implemented in their respective packages. When that's the case, the whole family of algorithms is mentioned only once. +- Is meta-algorithm? Some algorithms are meta-algorithms that call a sub-algorithm to do the optimization after transforming the problem. In this case, a lot of the properties of the meta-algorithm are inherited from the sub-algorithm. So if the sub-algorithm requires gradients or Hessians of functions in the model, the meta-algorithm will also require gradients and Hessians of functions in the model. 
Fields where the property of the meta-algorithm is inherited from the sub-solver are indicated using the "Depends on sub-solver" entry. Some algorithms in `NLopt` have a "Limited" meta-algorithm status because they can only be used to wrap algorithms from `NLopt`.
+- Algorithm package. This is the Julia package that either implements the algorithm or calls it from another programming language. `Nonconvex` wraps all these packages using a consistent API while allowing each algorithm to be customized where possible and have its own set of options.
+- Order. This is the order of the algorithm. Zero-order algorithms only require the evaluation of the objective and constraint functions; they don't require any gradients or Hessians of objective and constraint functions. First-order algorithms require both the value and gradients of objective and/or constraint functions. Second-order algorithms require the value, gradients and Hessians of objective and/or constraint functions.
+- Finite bounds. This is true if the algorithm supports finite lower and upper bound constraints on the decision variables. One special case is the `TOBS` algorithm, which only supports binary decision variables, so an entry of "Binary" is used instead of true/false.
+- Infinite bounds. This is true if the algorithm supports unbounded decision variables either from below, above or both.
+- Inequality constraints. This is true if the algorithm supports nonlinear inequality constraints.
+- Equality constraints. This is true if the algorithm supports nonlinear equality constraints. Algorithms that only support linear equality constraints are given an entry of "Limited".
+- Semidefinite constraints. This is true if the algorithm supports nonlinear semidefinite constraints.
+- Integer variables. This is true if the algorithm supports integer/discrete/binary decision variables, not just continuous. 
One special case is the `TOBS` algorithm which only supports binary decision variables so an entry of "Binary" is used instead of true/false. ## The `JuliaNonconvex` organization @@ -15,7 +75,7 @@ The `JuliaNonconvex` organization hosts a number of packages which are available | ------- | ----------- | ----- | -------- | | [Nonconvex.jl](https://github.com/mohamed82008/Nonconvex.jl) | Umbrella package for nonconvex optimization | [![Actions Status](https://github.com/JuliaNonconvex/Nonconvex.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/Nonconvex.jl/actions) | [![codecov](https://codecov.io/gh/JuliaNonconvex/Nonconvex.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/Nonconvex.jl) | | [NonconvexCore.jl](https://github.com/JuliaNonconvex/NonconvexCore.jl) | All the interface functions and structs | [![Build Status](https://github.com/JuliaNonconvex/NonconvexCore.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexCore.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexCore.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexCore.jl) | -| [NonconvexMMA.jl](https://github.com/JuliaNonconvex/NonconvexMMA.jl) | Method of moving asymptotes implementation | [![Build Status](https://github.com/JuliaNonconvex/NonconvexMMA.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexMMA.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexMMA.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexMMA.jl) | +| [NonconvexMMA.jl](https://github.com/JuliaNonconvex/NonconvexMMA.jl) | Method of moving asymptotes implementation in pure Julia | [![Build Status](https://github.com/JuliaNonconvex/NonconvexMMA.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexMMA.jl/actions) | 
[![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexMMA.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexMMA.jl) | | [NonconvexIpopt.jl](https://github.com/JuliaNonconvex/NonconvexIpopt.jl) | [Ipopt.jl](https://github.com/jump-dev/Ipopt.jl) wrapper | [![Build Status](https://github.com/JuliaNonconvex/NonconvexIpopt.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexIpopt.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexIpopt.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexIpopt.jl) | | [NonconvexNLopt.jl](https://github.com/JuliaNonconvex/NonconvexNLopt.jl) | [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl) wrapper | [![Build Status](https://github.com/JuliaNonconvex/NonconvexNLopt.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexNLopt.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexNLopt.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexNLopt.jl) | | [NonconvexPercival.jl](https://github.com/JuliaNonconvex/NonconvexPercival.jl) | [Percival.jl](https://github.com/JuliaSmoothOptimizers/Percival.jl) wrapper (an augmented Lagrangian algorithm implementation) | [![Build Status](https://github.com/JuliaNonconvex/NonconvexPercival.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexPercival.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexPercival.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexPercival.jl) | @@ -28,10 +88,12 @@ The `JuliaNonconvex` organization hosts a number of packages which are available | [NonconvexAugLagLab.jl](https://github.com/JuliaNonconvex/NonconvexAugLagLab.jl) | Experimental augmented Lagrangian package | [![Build 
Status](https://github.com/JuliaNonconvex/NonconvexAugLagLab.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexAugLagLab.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexAugLagLab.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexAugLagLab.jl) | | [NonconvexUtils.jl](https://github.com/JuliaNonconvex/NonconvexUtils.jl) | Some utility functions for automatic differentiation, history tracing, implicit functions and more. | [![Build Status](https://github.com/JuliaNonconvex/NonconvexUtils.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexUtils.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexUtils.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexUtils.jl) | | [NonconvexTOBS.jl](https://github.com/JuliaNonconvex/NonconvexTOBS.jl) | Binary optimization algorithm called "topology optimization of binary structures" ([TOBS](https://www.sciencedirect.com/science/article/abs/pii/S0168874X17305619?via%3Dihub)) which was originally developed in the context of optimal distribution of material in mechanical components. | [![Build Status](https://github.com/JuliaNonconvex/NonconvexTOBS.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexTOBS.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexTOBS.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexTOBS.jl) | +| [NonconvexMetaheuristics.jl](https://github.com/JuliaNonconvex/NonconvexMetaheuristics.jl) | Metaheuristic gradient-free optimization algorithms as implemented in [`Metaheuristics.jl`](https://github.com/jmejia8/Metaheuristics.jl). 
| [![Build Status](https://github.com/JuliaNonconvex/NonconvexMetaheuristics.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexMetaheuristics.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexMetaheuristics.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexMetaheuristics.jl) |
+| [NonconvexNOMAD.jl](https://github.com/JuliaNonconvex/NonconvexNOMAD.jl) | [NOMAD algorithm](https://dl.acm.org/doi/10.1145/1916461.1916468) as wrapped in the [`NOMAD.jl`](https://github.com/bbopt/NOMAD.jl) package. | [![Build Status](https://github.com/JuliaNonconvex/NonconvexNOMAD.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexNOMAD.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexNOMAD.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexNOMAD.jl) |

## Design philosophy

-Nonconvex.jl is a Julia package that implements and wraps a number of constrained nonlinear and mixed integer nonlinear programming solvers. There are 3 features of Nonconvex.jl compared to similar packages such as JuMP.jl and NLPModels.jl:
+`Nonconvex.jl` is a Julia package that implements and wraps a number of constrained nonlinear and mixed integer nonlinear programming solvers. There are 3 focus points of `Nonconvex.jl` compared to similar packages such as `JuMP.jl` and `NLPModels.jl`:

1. Emphasis on a function-based API. Objectives and constraints are normal Julia functions.
2. The ability to nest algorithms to create more complicated algorithms.
@@ -39,7 +101,7 @@ Nonconvex.jl is a Julia package that implements and wraps a number of constraine

## Installing Nonconvex

-To install Nonconvex.jl, open a Julia REPL and type `]` to enter the package mode.
+To install `Nonconvex.jl`, open a Julia REPL and type `]` to enter the package mode. 
Then run: ```julia add Nonconvex ``` @@ -51,7 +113,7 @@ using Pkg; Pkg.add("Nonconvex") ## Loading Nonconvex -To load and start using Nonconvex.jl, run: +To load and start using `Nonconvex.jl`, run: ```julia using Nonconvex ``` diff --git a/docs/make.jl b/docs/make.jl index 7224fcce..1d08d6a8 100644 --- a/docs/make.jl +++ b/docs/make.jl @@ -32,6 +32,9 @@ makedocs( "algorithms/surrogate.md", "algorithms/mts.md", "algorithms/sdp.md", + "algorithms/metaheuristics.md", + "algorithms/nomad.md", + "algorithms/tobs.md", ], "Optimization result" => "result.md" ], diff --git a/docs/src/algorithms/algorithms.md b/docs/src/algorithms/algorithms.md index b8d7419e..89498d9b 100644 --- a/docs/src/algorithms/algorithms.md +++ b/docs/src/algorithms/algorithms.md @@ -1,10 +1,82 @@ # Algorithms -- [Method of moving asymptotes (MMA)](mma.md) -- [Ipopt](ipopt.md) -- [NLopt](nlopt.md) -- [Augmented Lagrangian algorithm](auglag.md) -- [Mixed integer nonlinear programming](minlp.md) -- [Multi-start optimization](hyperopt.md) -- [Surrogate-assited Bayesian optimization](surrogate.md) -- [Multiple Trajectory Search](mts.md) +## Overview of algorithms + +A summary of all the algorithms available in `Nonconvex` through different packages is shown in the table below. Scroll right to see more columns and see a description of the columns below the table. + +| Algorithm name | Is meta-algorithm? 
| Algorithm package | Order | Finite bounds | Infinite bounds | Inequality constraints | Equality constraints | Semidefinite constraints | Integer variables | +| ------- | ----------- | ----- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | +| Method of moving asymptotes (MMA) | ❌ | `NonconvexMMA.jl` (pure Julia) or `NLopt.jl` | 1 | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | +| Primal dual interior point method | ❌ | `Ipopt.jl` | 1 or 2 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| DIviding RECTangles algorithm (DIRECT) | ❌ | `NLopt.jl` | 0 | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | +| Controlled random search (CRS) | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Multi-Level Single-Linkage (MLSL) | Limited | `NLopt.jl` | Depends on sub-solver | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| StoGo | ❌ | `NLopt.jl` | 1 | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | +| AGS | ❌ | `NLopt.jl` | 0 | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | +| Improved Stochastic Ranking Evolution Strategy (ISRES) | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| ESCH | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| COBYLA | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| BOBYQA | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| NEWUOA | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Principal AXIS (PRAXIS) | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Nelder Mead | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Subplex | ❌ | `NLopt.jl` | 0 | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| CCSAQ | ❌ | `NLopt.jl` | 1 | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | +| SLSQP | ❌ | `NLopt.jl` | 1 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| TNewton | ❌ | `NLopt.jl` | 1 | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Shifted limited-memory variable-metric | ❌ | `NLopt.jl` | 1 | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Augmented Lagrangian in `NLopt` | Limited | `NLopt.jl` | Depends on sub-solver | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Augmented Lagrangian in `Percival` | ❌ | `Percival.jl` | 1 or 2 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Multiple trajectory search | ❌ | `NonconvexSearch.jl` | 0 | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | +| Branch and bound for mixed integer nonlinear programming 
| ❌ | `Juniper.jl` | 1 or 2 | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | +| Sequential polyhedral outer-approximations for mixed integer nonlinear programming | ❌ | `Pavito.jl` | 1 or 2 | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | +| Evolutionary centers algorithm (ECA) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Differential evolution (DE) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Particle swarm optimization (PSO) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Artificial bee colony (ABC) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Gravitational search algorithm (GSA) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Simulated annealing (SA) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Whale optimization algorithm (WOA) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Machine-coded compact genetic algorithm (MCCGA) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Genetic algorithm (GA) | ❌ | `Metaheuristics.jl` | 0 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Nonlinear optimization with the MADS algorithm (NOMAD) | ❌ | `NOMAD.jl` | 0 | ✅ | ✅ | ✅ | Limited | ❌ | ✅ | +| Topology optimization of binary structures (TOBS) | ❌ | `NonconvexTOBS.jl` | 1 | Binary | ❌ | ✅ | ❌ | ❌ | Binary | +| Hyperband | ✅ | `Hyperopt.jl` | Depends on sub-solver | ✅ | ❌ | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | +| Random search | ✅ | `Hyperopt.jl` | Depends on sub-solver | ✅ | ❌ | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | +| Latin hypercube search | ✅ | `Hyperopt.jl` | Depends on sub-solver | ✅ | ❌ | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | +| Surrogate assisted optimization | ✅ | `NonconvexBayesian.jl` | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | +| Log 
barrier method for nonlinear semidefinite constraint handling | ✅ | `NonconvexSemidefinite.jl` | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | ✅ | Depends on sub-solver |
+
+
The following is an explanation of all the columns in the table:
+- Algorithm name. This is the name of the algorithm and/or its acronym. Some algorithms have multiple variants implemented in their respective packages. When that's the case, the whole family of algorithms is mentioned only once.
+- Is meta-algorithm? Some algorithms are meta-algorithms that call a sub-algorithm to do the optimization after transforming the problem. In this case, a lot of the properties of the meta-algorithm are inherited from the sub-algorithm. So if the sub-algorithm requires gradients or Hessians of functions in the model, the meta-algorithm will also require gradients and Hessians of functions in the model. Fields where the property of the meta-algorithm is inherited from the sub-solver are indicated using the "Depends on sub-solver" entry. Some algorithms in `NLopt` have a "Limited" meta-algorithm status because they can only be used to wrap algorithms from `NLopt`.
+- Algorithm package. This is the Julia package that either implements the algorithm or calls it from another programming language. `Nonconvex` wraps all these packages using a consistent API while allowing each algorithm to be customized where possible and have its own set of options.
+- Order. This is the order of the algorithm. Zero-order algorithms only require the evaluation of the objective and constraint functions; they don't require any gradients or Hessians of objective and constraint functions. First-order algorithms require both the value and gradients of objective and/or constraint functions. Second-order algorithms require the value, gradients and Hessians of objective and/or constraint functions.
+- Finite bounds. 
This is true if the algorithm supports finite lower and upper bound constraints on the decision variables. One special case is the `TOBS` algorithm which only supports binary decision variables so an entry of "Binary" is used instead of true/false. +- Infinite bounds. This is true if the algorithm supports unbounded decision variables either from below, above or both. +- Inequality constraints. This is true if the algorithm supports nonlinear inequality constraints. +- Equality constraints. This is true if the algorithm supports nonlinear equality constraints. Algorithms that only support linear equality constraints are given an entry of "Limited". +- Semidefinite constraints. This is true if the algorithm supports nonlinear semidefinite constraints. +- Integer variables. This is true if the algorithm supports integer/discrete/binary decision variables, not just continuous. One special case is the `TOBS` algorithm which only supports binary decision variables so an entry of "Binary" is used instead of true/false. + +## Wrapper packages + +The `JuliaNonconvex` organization hosts a number of packages which wrap other optimization packages in Julia or implement their algorithms. The correct wrapper package is loaded using the `Nonconvex.@load` macro with the algorithm or package name. The following is a summary of all the wrapper packages in the `JuliaNonconvex` organization. To view the documentation of each package, click on the blue docs badge in the last column. 
+ +| Package | Description | Tests | Coverage | Docs | +| ------- | ----------- | ----- | -------- | -------- | +| [NonconvexMMA.jl](https://github.com/JuliaNonconvex/NonconvexMMA.jl) | Method of moving asymptotes implementation in pure Julia | [![Build Status](https://github.com/JuliaNonconvex/NonconvexMMA.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexMMA.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexMMA.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexMMA.jl) | [![](https://img.shields.io/badge/docs-stable-blue.svg)](mma.md) | +| [NonconvexIpopt.jl](https://github.com/JuliaNonconvex/NonconvexIpopt.jl) | [Ipopt.jl](https://github.com/jump-dev/Ipopt.jl) wrapper | [![Build Status](https://github.com/JuliaNonconvex/NonconvexIpopt.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexIpopt.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexIpopt.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexIpopt.jl) | [![](https://img.shields.io/badge/docs-stable-blue.svg)](ipopt.md) | +| [NonconvexNLopt.jl](https://github.com/JuliaNonconvex/NonconvexNLopt.jl) | [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl) wrapper | [![Build Status](https://github.com/JuliaNonconvex/NonconvexNLopt.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexNLopt.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexNLopt.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexNLopt.jl) | [![](https://img.shields.io/badge/docs-stable-blue.svg)](nlopt.md) | +| [NonconvexPercival.jl](https://github.com/JuliaNonconvex/NonconvexPercival.jl) | [Percival.jl](https://github.com/JuliaSmoothOptimizers/Percival.jl) wrapper (an augmented Lagrangian algorithm implementation) | [![Build 
Status](https://github.com/JuliaNonconvex/NonconvexPercival.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexPercival.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexPercival.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexPercival.jl) | [![](https://img.shields.io/badge/docs-stable-blue.svg)](auglag.md) | +| [NonconvexJuniper.jl](https://github.com/JuliaNonconvex/NonconvexJuniper.jl) | [Juniper.jl](https://github.com/lanl-ansi/Juniper.jl) wrapper | [![Build Status](https://github.com/JuliaNonconvex/NonconvexJuniper.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexJuniper.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexJuniper.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexJuniper.jl) | [![](https://img.shields.io/badge/docs-stable-blue.svg)](minlp.md) | +| [NonconvexPavito.jl](https://github.com/JuliaNonconvex/NonconvexPavito.jl) | [Pavito.jl](https://github.com/jump-dev/Pavito.jl) wrapper | [![Build Status](https://github.com/JuliaNonconvex/NonconvexPavito.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexPavito.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexPavito.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexPavito.jl) | [![](https://img.shields.io/badge/docs-stable-blue.svg)](minlp.md) | +| [NonconvexSemidefinite.jl](https://github.com/JuliaNonconvex/NonconvexSemidefinite.jl) | Nonlinear semi-definite programming algorithm | [![Build Status](https://github.com/JuliaNonconvex/NonconvexSemidefinite.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexSemidefinite.jl/actions) | 
[![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexSemidefinite.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexSemidefinite.jl) | [![](https://img.shields.io/badge/docs-stable-blue.svg)](sdp.md) | +| [NonconvexMultistart.jl](https://github.com/JuliaNonconvex/NonconvexMultistart.jl) | Multi-start optimization algorithms | [![Build Status](https://github.com/JuliaNonconvex/NonconvexMultistart.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexMultistart.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexMultistart.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexMultistart.jl) | [![](https://img.shields.io/badge/docs-stable-blue.svg)](hyperopt.md) | +| [NonconvexBayesian.jl](https://github.com/JuliaNonconvex/NonconvexBayesian.jl) | Constrained Bayesian optimization implementation | [![Build Status](https://github.com/JuliaNonconvex/NonconvexBayesian.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexBayesian.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexBayesian.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexBayesian.jl) | [![](https://img.shields.io/badge/docs-stable-blue.svg)](surrogate.md) | +| [NonconvexSearch.jl](https://github.com/JuliaNonconvex/NonconvexSearch.jl) | Multi-trajectory and local search methods | [![Build Status](https://github.com/JuliaNonconvex/NonconvexSearch.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexSearch.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexSearch.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexSearch.jl) | [![](https://img.shields.io/badge/docs-stable-blue.svg)](mts.md) | +| [NonconvexTOBS.jl](https://github.com/JuliaNonconvex/NonconvexTOBS.jl) | Binary optimization 
algorithm called "topology optimization of binary structures" ([TOBS](https://www.sciencedirect.com/science/article/abs/pii/S0168874X17305619?via%3Dihub)) which was originally developed in the context of optimal distribution of material in mechanical components. | [![Build Status](https://github.com/JuliaNonconvex/NonconvexTOBS.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexTOBS.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexTOBS.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexTOBS.jl) | [![](https://img.shields.io/badge/docs-stable-blue.svg)](tobs.md) |
+| [NonconvexMetaheuristics.jl](https://github.com/JuliaNonconvex/NonconvexMetaheuristics.jl) | Metaheuristic gradient-free optimization algorithms as implemented in [`Metaheuristics.jl`](https://github.com/jmejia8/Metaheuristics.jl). | [![Build Status](https://github.com/JuliaNonconvex/NonconvexMetaheuristics.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexMetaheuristics.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexMetaheuristics.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexMetaheuristics.jl) | [![](https://img.shields.io/badge/docs-stable-blue.svg)](metaheuristics.md) |
+| [NonconvexNOMAD.jl](https://github.com/JuliaNonconvex/NonconvexNOMAD.jl) | [NOMAD algorithm](https://dl.acm.org/doi/10.1145/1916461.1916468) as wrapped in the [`NOMAD.jl`](https://github.com/bbopt/NOMAD.jl) package. 
| [![Build Status](https://github.com/JuliaNonconvex/NonconvexNOMAD.jl/workflows/CI/badge.svg)](https://github.com/JuliaNonconvex/NonconvexNOMAD.jl/actions) | [![Coverage](https://codecov.io/gh/JuliaNonconvex/NonconvexNOMAD.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/JuliaNonconvex/NonconvexNOMAD.jl) | [![](https://img.shields.io/badge/docs-stable-blue.svg)](nomad.md) | + diff --git a/docs/src/algorithms/auglag.md b/docs/src/algorithms/auglag.md index bc53cf43..f893ed9d 100644 --- a/docs/src/algorithms/auglag.md +++ b/docs/src/algorithms/auglag.md @@ -1,4 +1,4 @@ -# Augmented Lagrangian algorithm +# Augmented Lagrangian algorithm in pure Julia ## Description diff --git a/docs/src/algorithms/hyperopt.md b/docs/src/algorithms/hyperopt.md index e6ede780..c8f7ecc5 100644 --- a/docs/src/algorithms/hyperopt.md +++ b/docs/src/algorithms/hyperopt.md @@ -1,8 +1,8 @@ -# Multi-start optimization +# Multi-start and hyper-parameter optimization in pure Julia ## Description -[Hyperopt.jl](https://github.com/baggepinnen/Hyperopt.jl) is a Julia library that implements a number of hyperparameter optimization algorithms which can be used to optimize the starting point of the optimization. +[Hyperopt.jl](https://github.com/baggepinnen/Hyperopt.jl) is a Julia library that implements a number of hyperparameter optimization algorithms which can be used to optimize the starting point of the optimization. `NonconvexHyperopt.jl` allows the use of the algorithms in `Hyperopt.jl` as meta-algorithms using the `HyperoptAlg` struct. ## Quick start diff --git a/docs/src/algorithms/ipopt.md b/docs/src/algorithms/ipopt.md index 4282506f..3e4785bd 100644 --- a/docs/src/algorithms/ipopt.md +++ b/docs/src/algorithms/ipopt.md @@ -1,8 +1,8 @@ -# Ipopt +# Interior point method using `Ipopt.jl` ## Description -[Ipopt](https://coin-or.github.io/Ipopt) is a well known interior point optimizer developed and maintained by COIN-OR. 
The Julia wrapper of Ipopt is [Ipopt.jl](https://github.com/jump-dev/Ipopt.jl). Nonconvex allows the use of Ipopt.jl using the `IpoptAlg` algorithm struct. Ipopt can be used as a second order optimizer using the Hessian of the Lagrangian. Alternatively, an [l-BFGS approximation](https://en.wikipedia.org/wiki/Limited-memory_BFGS) of the Hessian can be used instead turning Ipopt into a first order optimizer tha only requires the gradient of the Lagrangian.
+[Ipopt](https://coin-or.github.io/Ipopt) is a well-known interior point optimizer developed and maintained by COIN-OR. The Julia wrapper of Ipopt is [Ipopt.jl](https://github.com/jump-dev/Ipopt.jl). `Ipopt.jl` is wrapped in `NonconvexIpopt.jl`. `NonconvexIpopt` allows the use of `Ipopt.jl` using the `IpoptAlg` algorithm struct. `IpoptAlg` can be used as a second order optimizer computing the Hessian of the Lagrangian in every iteration. Alternatively, an [l-BFGS approximation](https://en.wikipedia.org/wiki/Limited-memory_BFGS) of the Hessian can be used instead, turning `IpoptAlg` into a first order optimizer that only requires the gradient of the Lagrangian.
 
 ## Quick start
 
diff --git a/docs/src/algorithms/metaheuristics.md b/docs/src/algorithms/metaheuristics.md
new file mode 100644
index 00000000..b9443594
--- /dev/null
+++ b/docs/src/algorithms/metaheuristics.md
@@ -0,0 +1,56 @@
+# A collection of meta-heuristic algorithms in pure Julia
+
+## Description
+
+[Metaheuristics.jl](https://github.com/jmejia8/Metaheuristics.jl) is an optimization library with a collection of [metaheuristic optimization algorithms](https://en.wikipedia.org/wiki/Metaheuristic) implemented. `NonconvexMetaheuristics.jl` allows the use of all the algorithms in `Metaheuristics.jl` using the `MetaheuristicsAlg` struct.
+
+The main advantage of metaheuristic algorithms is that they don't require the objective and constraint functions to be differentiable. 
One advantage of the `Metaheuristics.jl` package compared to other black-box optimization or metaheuristic algorithm packages is that a large number of the algorithms implemented in `Metaheuristics.jl` support bounds, inequality and equality constraints using constraint handling techniques for metaheuristic algorithms.
+
+## Supported algorithms
+
+`Nonconvex.jl` only supports the single objective optimization algorithms in `Metaheuristics.jl`. The following algorithms are supported:
+- Evolutionary Centers Algorithm (`ECA`)
+- Differential Evolution (`DE`)
+- Particle Swarm Optimization (`PSO`)
+- Artificial Bee Colony (`ABC`)
+- Gravitational Search Algorithm (`CGSA`)
+- Simulated Annealing (`SA`)
+- Whale Optimization Algorithm (`WOA`)
+- Machine-coded Compact Genetic Algorithm (`MCCGA`)
+- Genetic Algorithm (`GA`)
+
+For a summary of the strengths and weaknesses of each algorithm above, please refer to the table in the [algorithms page](https://jmejia8.github.io/Metaheuristics.jl/dev/algorithms/) in the `Metaheuristics` documentation. To define a `Metaheuristics` algorithm, you can use the `MetaheuristicsAlg` algorithm struct which wraps one of the above algorithm types, e.g. `MetaheuristicsAlg(ECA)` or `MetaheuristicsAlg(DE)`.
+
+## Quick start
+
+Given a model `model` and an initial solution `x0`, the following can be used to optimize the model using `Metaheuristics`.
+```julia
+using Nonconvex
+Nonconvex.@load Metaheuristics
+
+alg = MetaheuristicsAlg(ECA)
+options = MetaheuristicsOptions(N = 1000) # population size
+result = optimize(model, alg, x0, options = options)
+```
+`Metaheuristics` is an optional dependency of Nonconvex so you need to load the package to be able to use it.
+
+## Options
+
+The options keyword argument to the `optimize` function shown above must be an instance of the `MetaheuristicsOptions` struct when the algorithm is a `MetaheuristicsAlg`. 
To specify options, use keyword arguments in the constructor of `MetaheuristicsOptions`, e.g.:
+```julia
+options = MetaheuristicsOptions(N = 1000)
+```
+All the other options that can be set for each algorithm can be found in the [algorithms section](https://jmejia8.github.io/Metaheuristics.jl/dev/algorithms/) of the documentation of `Metaheuristics.jl`. Note that one notable difference between using `Metaheuristics` directly and using it through `Nonconvex` is that in `Nonconvex`, all the options must be passed in through the `options` struct and only the algorithm type is part of the `alg` struct.
+
+## Variable bounds
+
+When using `Metaheuristics` algorithms, finite variable bounds are necessary. This is because the initial population is sampled randomly in the finite interval of each variable. Using `Inf` as an upper bound or `-Inf` as a lower bound is therefore not acceptable.
+
+## Initialization
+
+Most metaheuristic algorithms are population algorithms which can accept multiple initial solutions to be part of the initial population. In `Nonconvex`, you can specify multiple initial solutions by making `x0` a vector of solutions. However, since `Nonconvex` models support arbitrary collections as decision variables, you must specify that the `x0` passed in is indeed a population of solutions rather than a single solution that happens to be a vector of vectors, for instance. To specify that `x0` is a vector of solutions, you can set the `multiple_initial_solutions` option to `true` in the `options` struct, e.g.:
+```julia
+options = MetaheuristicsOptions(N = 1000, multiple_initial_solutions = true)
+x0 = [[1.0, 1.0], [0.0, 0.0]]
+```
+When fewer solutions are passed in `x0` compared to the population size, random initial solutions between the lower and upper bounds are sampled to complete the initial population. 
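The pieces above can be combined into one short end-to-end sketch. The objective function, bounds, population size and seeded solutions below are illustrative assumptions, not part of the package documentation:

```julia
using Nonconvex
Nonconvex.@load Metaheuristics

# Illustrative smooth objective with minimizer at (1.0, 2.0);
# metaheuristics do not need it to be differentiable
f(x) = (x[1] - 1.0)^2 + (x[2] - 2.0)^2

model = Model(f)
# Finite bounds are required so the initial population can be sampled
addvar!(model, [-5.0, -5.0], [5.0, 5.0])

# Seed the population with two initial solutions; the remaining
# N - 2 members are sampled randomly within the bounds
options = MetaheuristicsOptions(N = 100, multiple_initial_solutions = true)
x0 = [[0.0, 0.0], [1.0, 1.0]]

result = optimize(model, MetaheuristicsAlg(ECA), x0, options = options)
result.minimizer # expected to land near [1.0, 2.0] for this convex toy problem
```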
diff --git a/docs/src/algorithms/minlp.md b/docs/src/algorithms/minlp.md index d652a86c..63dee5b8 100644 --- a/docs/src/algorithms/minlp.md +++ b/docs/src/algorithms/minlp.md @@ -1,12 +1,10 @@ -# Mixed integer nonlinear programming (MINLP) +# First and second order mixed integer nonlinear programming algorithms ## Description -There are 2 MINLP solvers available in Nonconvex: -1. [Juniper.jl](https://github.com/lanl-ansi/Juniper.jl) with [Ipopt.jl](https://github.com/jump-dev/Ipopt.jl) as a sub-solver. -2. [Pavito.jl](https://github.com/jump-dev/Pavito.jl) with [Ipopt.jl](https://github.com/jump-dev/Ipopt.jl) and [Cbc.jl](https://github.com/jump-dev/Cbc.jl) as sub-solvers. - -These rely on local nonlinear programming solvers and a branch and bound procedure to find a locally optimal solution that satisfies the integerality constraints. +There are 2 first and second order MINLP solvers available in `Nonconvex`: +1. [Juniper.jl](https://github.com/lanl-ansi/Juniper.jl) with [Ipopt.jl](https://github.com/jump-dev/Ipopt.jl) as a sub-solver. `NonconvexJuniper.jl` allows the use of the branch and bound algorithm in `Juniper.jl` using the `JuniperIpoptAlg` struct. +2. [Pavito.jl](https://github.com/jump-dev/Pavito.jl) with [Ipopt.jl](https://github.com/jump-dev/Ipopt.jl) and [Cbc.jl](https://github.com/jump-dev/Cbc.jl) as sub-solvers. `NonconvexPavito.jl` allows the use of the sequential polyhedral outer-approximations algorithm in `Pavito.jl` using the `PavitoIpoptCbcAlg` struct. 
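As a sketch of how these solvers are invoked through `Nonconvex` (the objective, bounds and integrality pattern below are illustrative assumptions; the analogous `Pavito` structs can be substituted for the `Juniper` ones):

```julia
using Nonconvex
Nonconvex.@load Juniper

# Illustrative objective over one continuous and one integer variable
f(x) = (x[1] - 0.3)^2 + (x[2] - 1.5)^2

m = Model(f)
# The `integer` keyword marks which variables are integer-constrained
addvar!(m, [0.0, 0.0], [5.0, 5.0], integer = [false, true])

alg = JuniperIpoptAlg()
options = JuniperIpoptOptions() # sub-solver options can also be passed here
result = optimize(m, alg, [0.5, 0.5], options = options)
```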
## Juniper + Ipopt diff --git a/docs/src/algorithms/mma.md b/docs/src/algorithms/mma.md index 442ffa46..b0294612 100644 --- a/docs/src/algorithms/mma.md +++ b/docs/src/algorithms/mma.md @@ -1,8 +1,8 @@ -# Method of moving asymptotes (MMA) +# Method of moving asymptotes in pure Julia ## Description -There are 2 versions of MMA that are available in Nonconvex.jl: +There are 2 versions of the method of moving asymptotes (MMA) that are available in `NonconvexMMA.jl`: 1. The original MMA algorithm from the [1987 paper](https://onlinelibrary.wiley.com/doi/abs/10.1002/nme.1620240207). 2. The globally convergent MMA (GCMMA) algorithm from the [2002 paper](https://epubs.siam.org/doi/abs/10.1137/S1052623499362822). diff --git a/docs/src/algorithms/mts.md b/docs/src/algorithms/mts.md index 0c016b31..f14f872c 100644 --- a/docs/src/algorithms/mts.md +++ b/docs/src/algorithms/mts.md @@ -1,9 +1,9 @@ -# Multiple Trajectory Search (MTS) +# Multi-trajectory search algorithm in pure Julia ## Description -MTS: Multiple Trajectory Search for Large-Scale Global Optimization, is a derivative-free heuristic optimization method presented in paper [Lin-Yu Tseng and Chun Chen, 2008](https://sci2s.ugr.es/sites/default/files/files/TematicWebSites/EAMHCO/contributionsCEC08/tseng08mts.pdf). -The main algorihtm `MTS` contains three subroutines `localsearch1`, `localsearch2` and `localsearch3`. This module implements all the optimization methods in the paper. People often use the entire `MTS` or only `localsearch1` to optimize functions, while `localsearch2` or `localsearch3` would rarely be used independently. Therefore, the module only exports `MTS` and `localsearch1`. +Multiple trajectory search (MTS) is a derivative-free heuristic optimization method presented by [Lin-Yu Tseng and Chun Chen, 2008](https://sci2s.ugr.es/sites/default/files/files/TematicWebSites/EAMHCO/contributionsCEC08/tseng08mts.pdf). +The `MTS` algorithm is implemented in the `NonconvexSearch.jl` package. 
This module implements all the optimization methods in the paper. ## Quick start @@ -13,27 +13,11 @@ Using default `MTSOptions()`. `MTS` is used for optimization. using Nonconvex Nonconvex.@load MTS -alg = MTSAlg() # Or LS1Alg() +alg = MTSAlg() LS1_options = MTSOptions() m = Model(f) lb = [0, 0] ub = [5, 5] -# Must have a box constraint. And (in)equality constraints are not supported for MTS methods. addvar!(m, lb, ub) result = optimize(model, alg, x0, options = options) ``` - -## Options - -You can choose which algorithm to use by specifying `option.method`. Avaliable list is `[MTS (default), localsearch1, Nonconvex.localsearch2 (not recommended), Nonconvex.localsearch3 (not recommended)]`. - -```julia -alg = MTSAlg() # Or LS1Alg() -LS1_options = MTSOptions(method=localsearch1) -m = Model(f)) -lb = [0, 0] -ub = [5, 5] -# Must have a box constraint. And (in)equality constraints are not supported in MTS methods. -addvar!(m, lb, ub) -result = optimize(model, alg, x0, options = options -``` diff --git a/docs/src/algorithms/nlopt.md b/docs/src/algorithms/nlopt.md index 9beaba52..b280acbc 100644 --- a/docs/src/algorithms/nlopt.md +++ b/docs/src/algorithms/nlopt.md @@ -1,8 +1,8 @@ -# NLopt +# Various optimization algorithms from `NLopt.jl` ## Description -[NLopt](https://github.com/stevengj/nlopt) is an optimization library with a collection of optimization algorithms implemented. Different algorithms have different limitations. To see the limitations of each algorithm, check the [algorithms section](https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/) of the documentation of NLopt. [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl) is the Julia wrapper of NLopt. Nonconvex allows the use of NLopt.jl using the `NLoptAlg` algorithm struct. +[NLopt](https://github.com/stevengj/nlopt) is an optimization library with a collection of optimization algorithms implemented. 
[NLopt.jl](https://github.com/JuliaOpt/NLopt.jl) is the Julia wrapper of `NLopt`. `NonconvexNLopt` allows the use of `NLopt.jl` using the `NLoptAlg` algorithm struct.
 
 ## Quick start
 
@@ -64,7 +64,7 @@ For a description of the above algorithms, please refer to the [algorithms secti
 
 ---
 **Disclaimer:**
-Not all the algorithms have been tested with Nonconvex. So if you try one and it doesn't work, please open an issue.
+Not all the algorithms have been tested with `Nonconvex`. So if you try one and it doesn't work, please open an issue.
 
 ---
 
diff --git a/docs/src/algorithms/nomad.md b/docs/src/algorithms/nomad.md
new file mode 100644
index 00000000..62faa186
--- /dev/null
+++ b/docs/src/algorithms/nomad.md
@@ -0,0 +1,40 @@
+# Nonlinear optimization with the MADS (NOMAD) algorithm for continuous and discrete, constrained optimization
+
+## Description
+
+[NOMAD.jl](https://github.com/bbopt/NOMAD.jl) is an optimization package wrapping the C++ implementation of the [NOMAD algorithm](https://dl.acm.org/doi/10.1145/1916461.1916468). `NonconvexNOMAD` allows the use of `NOMAD.jl` using the `NOMADAlg` struct. `NOMAD.jl` supports continuous and integer decision variables as well as bounds and inequality constraints. Linear equality constraints are also supported when no integer decision variables are in the model.
+
+## Quick start
+
+Given a model `model` and an initial solution `x0`, the following can be used to optimize the model using `NOMAD`.
+```julia
+using Nonconvex
+Nonconvex.@load NOMAD
+
+alg = NOMADAlg()
+options = NOMADOptions()
+result = optimize(model, alg, x0, options = options)
+```
+`NOMAD` is an optional dependency of Nonconvex so you need to load the package to be able to use it. 
+
+## Algorithm types
+
+There are 3 different variants of the `NOMADAlg` struct:
+- `NOMADAlg(:explicit)`
+- `NOMADAlg(:progressive)`
+- `NOMADAlg(:custom)`
+
+The explicit algorithm ensures all the constraints are satisfied at all times, removing any infeasible point from the population. The progressive algorithm allows infeasible points to be part of the population but enforces feasibility in a progressive manner. The custom variant allows the use of flags on each constraint to declare it as `:explicit` or `:progressive`. For instance, assume `model` is the `Nonconvex` model and `g1` and `g2` are 2 constraint functions.
+```julia
+add_ineq_constraint!(model, g1, flags = [:explicit])
+add_ineq_constraint!(model, g2, flags = [:progressive])
+```
+The above code declares the first constraint as explicit and the second as progressive. In other words, every point violating the first constraint will be removed from the population but the second constraint will be enforced progressively.
+
+## Options
+
+The options keyword argument to the `optimize` function shown above must be an instance of the `NOMADOptions` struct when the algorithm is a `NOMADAlg`. To specify options, use keyword arguments in the constructor of `NOMADOptions`, e.g.:
+```julia
+options = NOMADOptions()
+```
+All the options that can be set can be found in the [`NOMAD.jl` documentation](https://bbopt.github.io/NOMAD.jl/stable/nomadProblem/). 
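Putting the custom variant and the per-constraint flags together, a minimal sketch (the objective, constraints and bounds below are illustrative assumptions):

```julia
using Nonconvex
Nonconvex.@load NOMAD

f(x) = x[1]^2 + x[2]^2
g1(x) = 1.0 - x[1] - x[2] # hard constraint -> declared :explicit below
g2(x) = x[1] - 2 * x[2]   # soft constraint -> declared :progressive below

model = Model(f)
addvar!(model, [0.0, 0.0], [10.0, 10.0])
add_ineq_constraint!(model, g1, flags = [:explicit])
add_ineq_constraint!(model, g2, flags = [:progressive])

# The :custom variant respects the flag attached to each constraint
result = optimize(model, NOMADAlg(:custom), [1.0, 1.0], options = NOMADOptions())
```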
diff --git a/docs/src/algorithms/sdp.md b/docs/src/algorithms/sdp.md
index 64665ea9..91a14848 100644
--- a/docs/src/algorithms/sdp.md
+++ b/docs/src/algorithms/sdp.md
@@ -1,4 +1,4 @@
-# Semidifinite programming
+# Interior point meta-algorithm for handling nonlinear semidefinite constraints
 
 ## Description
 
diff --git a/docs/src/algorithms/surrogate.md b/docs/src/algorithms/surrogate.md
index 6b28aa31..2e025aab 100644
--- a/docs/src/algorithms/surrogate.md
+++ b/docs/src/algorithms/surrogate.md
@@ -1,4 +1,4 @@
-# Surrogate-assisted Bayesian optimization
+# Surrogate-assisted continuous and discrete, constrained optimization
 
 ## Description
 
diff --git a/docs/src/algorithms/tobs.md b/docs/src/algorithms/tobs.md
new file mode 100644
index 00000000..b6f78c55
--- /dev/null
+++ b/docs/src/algorithms/tobs.md
@@ -0,0 +1,84 @@
+# Topology optimization of binary structures (TOBS), a nonlinear binary optimization heuristic
+
+## Description
+
+The method of topology optimization of binary structures ([TOBS](https://www.sciencedirect.com/science/article/abs/pii/S0168874X17305619?via%3Dihub)) was originally developed in the context of optimal distribution of material in mechanical components. The TOBS algorithm only supports binary decision variables. The TOBS algorithm is a heuristic that relies on the sequential linearization of the objective and constraint functions, progressively enforcing the constraints in the process. The resulting binary linear program can be solved using any mixed integer linear programming (MILP) solver such as `Cbc`. This process is repeated iteratively until convergence. This package implements the heuristic for binary nonlinear programming problems.
+
+## Construct an instance
+
+To construct an instance of the `TOBS` algorithm, use:
+```julia
+alg = TOBSAlg()
+```
+When optimizing a model using `TOBSAlg`, all the variables are assumed to be binary if their lower and upper bounds are 0 and 1 respectively, even if the `isinteger` flag was not used. 
If any variables have other bounds, the optimization will throw an error.
+
+## Example
+
+In this example, we solve the classic topology optimization problem of minimizing the compliance of a structure subject to a volume constraint. Begin by installing and loading the packages required.
+
+```julia
+import Nonconvex
+Nonconvex.@load TOBS
+using Pkg
+Pkg.add("TopOpt")
+using TopOpt
+```
+
+Define the problem and its parameters using [TopOpt.jl](https://github.com/JuliaTopOpt/TopOpt.jl).
+
+```julia
+E = 1.0 # Young’s modulus
+v = 0.3 # Poisson’s ratio
+f = 1.0 # downward force
+rmin = 6.0 # filter radius
+xmin = 0.001 # minimum density
+V = 0.5 # maximum volume fraction
+p = 3.0 # SIMP penalty
+
+# Define FEA problem
+problem_size = (160, 100) # size of rectangular mesh
+x0 = fill(1.0, prod(problem_size)) # initial design
+problem = PointLoadCantilever(Val{:Linear}, problem_size, (1.0, 1.0), E, v, f)
+solver = FEASolver(Direct, problem; xmin=xmin)
+TopOpt.setpenalty!(solver, p)
+cheqfilter = DensityFilter(solver; rmin=rmin) # filter function
+comp = TopOpt.Compliance(problem, solver) # compliance function
+```
+
+Define the objective and constraint functions.
+
+```julia
+obj(x) = comp(cheqfilter(x)) # compliance objective
+constr(x) = sum(cheqfilter(x)) / length(x) - V # volume fraction constraint
+```
+
+Finally, define the optimization problem using `Nonconvex.jl` and optimize it.
+
+```julia
+m = Model(obj)
+addvar!(m, zeros(length(x0)), ones(length(x0)))
+Nonconvex.add_ineq_constraint!(m, constr)
+options = TOBSOptions()
+
+r = optimize(m, TOBSAlg(), x0; options=options)
+r.minimizer
+r.minimum
+```
+
+The following is a visualization of the optimization history using this example. 
+
+![histories](https://user-images.githubusercontent.com/84910559/164938659-797a6a6d-3518-4f7b-a4ff-24b43b822080.png)
+
+![gif](https://user-images.githubusercontent.com/19524993/167059067-f08502a8-c62d-4d62-a2df-e132efc5e25c.gif)
+
+## Options
+
+The following are the options that can be set by passing them to `TOBSOptions`, e.g. `TOBSOptions(movelimit = 0.1)`.
+- `movelimit`: the maximum move limit in each iteration as a ratio of the total number of variables. Default value is 0.1, i.e. a maximum of 10% of the variables are allowed to flip value in each iteration.
+- `convParam`: the tolerance value. The algorithm is said to have converged if the moving average of the relative change in the objective value in the last `pastN` iterations is less than `convParam`. Default value is 0.001.
+- `pastN`: the number of past iterations used to compute the moving average of the relative change in the objective value. Default value is 20.
+- `constrRelax`: the amount of constraint relaxation applied to the linear approximation in each iteration. This is the relative constraint relaxation if the violation is higher than `constrRelax` and the absolute constraint relaxation otherwise. Default value is 0.1.
+- `timeLimit`: the time limit (in seconds) of each MILP solve for the linearized sub-problem. Default value is 1.0.
+- `optimizer`: the `JuMP` optimizer type used to solve the MILP sub-problem. Default value is `Cbc.Optimizer`.
+- `maxiter`: the maximum number of iterations for the algorithm. Default value is 200.
+- `timeStable`: a boolean value that, when set to `true`, switches on the time stability filter of the objective's gradient, discussed in the paper. Default value is `true`. 
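For instance, several of the options above can be combined; the values below are illustrative, and the sketch reuses `m` and `x0` from the example earlier on this page:

```julia
options = TOBSOptions(
    movelimit = 0.05,   # flip at most 5% of the variables per iteration
    convParam = 0.0001, # stricter convergence tolerance
    pastN = 25,         # longer moving-average window for convergence
    maxiter = 500,      # allow more iterations
)

# `m` and `x0` are the model and initial design defined in the example above
r = optimize(m, TOBSAlg(), x0; options = options)
```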
diff --git a/docs/src/index.md b/docs/src/index.md index 9c49e6b9..1c4b16d8 100644 --- a/docs/src/index.md +++ b/docs/src/index.md @@ -1,6 +1,6 @@ -# Nonconvex.jl Documentation +# `Nonconvex.jl` Documentation -Nonconvex.jl is a Julia package that implements and wraps a number of constrained nonlinear and mixed integer nonlinear programming solvers. There are 3 features of Nonconvex.jl compared to similar packages such as JuMP.jl and NLPModels.jl: +`Nonconvex.jl` is a Julia package that implements and wraps a number of constrained nonlinear and mixed integer nonlinear programming solvers. There are 3 focus points of `Nonconvex.jl` compared to similar packages such as `JuMP.jl` and `NLPModels.jl`: 1. Emphasis on a function-based API. Objectives and constraints are normal Julia functions. 2. The ability to nest algorithms to create more complicated algorithms. @@ -8,7 +8,7 @@ Nonconvex.jl is a Julia package that implements and wraps a number of constraine ## Installing Nonconvex -To install Nonconvex.jl, open a Julia REPL and type `]` to enter the package mode. Then run: +To install `Nonconvex.jl`, open a Julia REPL and type `]` to enter the package mode. Then run: ```julia add Nonconvex ``` @@ -20,7 +20,7 @@ using Pkg; Pkg.add("Nonconvex") ## Loading Nonconvex -To load and start using Nonconvex.jl, run: +To load and start using `Nonconvex.jl`, run: ```julia using Nonconvex ``` diff --git a/docs/src/problem/problem.md b/docs/src/problem/problem.md index 0215308b..63846625 100644 --- a/docs/src/problem/problem.md +++ b/docs/src/problem/problem.md @@ -1,6 +1,6 @@ # Problem definition -There are 3 ways to define a model in Nonconvex.jl: +There are 3 ways to define a model in `Nonconvex.jl`: 1. `Model` which assumes all the variables are indexed by an integer index starting from 1. The decision variables are therefore a vector. 2. `DictModel` which assumes each variable has a name. 
The decision variables are stored in an `OrderedDict`, an ordered dictionary data structure. 3. Start from `JuMP.Model` and convert it to `DictModel`. This is convenient to make use of `JuMP`'s user-friendly macros for variable and linear expression, objective or constraint definitions. diff --git a/src/Nonconvex.jl b/src/Nonconvex.jl index aecb5511..e27d3ad1 100644 --- a/src/Nonconvex.jl +++ b/src/Nonconvex.jl @@ -37,6 +37,12 @@ function _load(algo) return install_and_load_module(:NonconvexSearch) elseif algo in ("Hyperopt", "Deflated", "Multistart", "HyperoptAlg", "DeflatedAlg") return install_and_load_module(:NonconvexMultistart) + elseif algo == "TOBS" + return install_and_load_module(:NonconvexTOBS) + elseif algo == "Metaheuristics" + return install_and_load_module(:NonconvexMetaheuristics) + elseif algo == "NOMAD" + return install_and_load_module(:NonconvexNOMAD) else throw("Unsupported algorithm. Please check the documentation of Nonconvex.jl.") end @@ -53,9 +59,9 @@ function install_and_load_module(mod) @info "Couldn't find the package $modname. Attempting to install it." try Pkg.add(string(modname)) - catch + catch err @info "Package installation failed! Please report an issue." - return + rethrow(err) end @info "$modname installed." @info "Attempting to load the package $modname." 
diff --git a/test/runtests.jl b/test/runtests.jl index 94a97ab7..f6ee3445 100644 --- a/test/runtests.jl +++ b/test/runtests.jl @@ -1,4 +1,4 @@ -using Test, Nonconvex +using Test, Nonconvex, Pkg @test_throws ArgumentError using NonconvexIpopt Nonconvex.@load Ipopt @@ -47,3 +47,16 @@ LS1Alg() @test_throws ArgumentError using NonconvexMultistart Nonconvex.@load Multistart HyperoptAlg(IpoptAlg()) + +@test_throws ArgumentError using NonconvexTOBS +Nonconvex.@load TOBS +TOBSAlg() + +@test_throws ArgumentError using NonconvexMetaheuristics +Nonconvex.@load Metaheuristics +MetaheuristicsAlg(ECA) + +Pkg.rm("NonconvexPercival") # https://github.com/ds4dm/Tulip.jl/issues/125 +@test_throws ArgumentError using NonconvexNOMAD +Nonconvex.@load NOMAD +NOMADAlg()