diff --git a/.gitignore b/.gitignore index 9f0ea459..3d28fc59 100644 --- a/.gitignore +++ b/.gitignore @@ -3,4 +3,4 @@ *.jl.mem *.mem *.cov -docs/build/ +docs/build/ \ No newline at end of file diff --git a/NEWS.md b/NEWS.md index e21147a1..cd06bdca 100644 --- a/NEWS.md +++ b/NEWS.md @@ -44,7 +44,7 @@ ## [v0.7.0](https://github.com/PSORLab/EAGO.jl/releases/tag/v0.7.0) (March 28, 2022) - Added envelopes of activation functions: `xabsx`, `logcosh` -- Added `estimator_extrema`, `estimator_under`, and `estimator_over` functions for McCormick relaxations. +- Added variations of `estimator_extrema`, `estimator_under`, and `estimator_over` functions to EAGO for bilinear relaxations. - Moved various functions and related structures to new files. - Added `RelaxCache` structure to hold relaxed problem information. - Updated forward and reverse propagation. diff --git a/README.md b/README.md index d6541bdc..b915b1c9 100644 --- a/README.md +++ b/README.md @@ -4,31 +4,42 @@ EAGO is an open-source development environment for **robust and global optimization** in Julia. -| **Documentation** | **Linux/OS/Windows** | **Persistent DOI** | -|:-----------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------:| -| [![](https://img.shields.io/badge/docs-stable-blue.svg)](https://PSORLab.github.io/EAGO.jl/stable) [![](https://img.shields.io/badge/docs-latest-blue.svg)](https://PSORLab.github.io/EAGO.jl/dev) | [![Build Status](https://github.com/PSORLab/EAGO.jl/workflows/CI/badge.svg?branch=master)](https://github.com/PSORLab/EAGO.jl/actions?query=workflow%3ACI) | [![DOI](https://zenodo.org/badge/108954118.svg)](https://zenodo.org/badge/latestdoi/108954118) | +| **PSOR Lab** | **Build Status** | +|:------------:|:-----------------------------------------------------------------------------------------------:| +| [![](https://img.shields.io/badge/Developed_by-PSOR_Lab-342674)](https://psor.uconn.edu/) | [![Build Status](https://github.com/PSORLab/EAGO.jl/workflows/CI/badge.svg?branch=master)](https://github.com/PSORLab/EAGO.jl/actions?query=workflow%3ACI) [![codecov](https://codecov.io/gh/PSORLab/EAGO.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/PSORLab/EAGO.jl)| -| **Coverage** | **Chat** | -|:------------:|:------------:| -| [![codecov](https://codecov.io/gh/PSORLab/EAGO.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/PSORLab/EAGO.jl) | [![Join the chat at https://gitter.im/EAGODevelopment](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/EAGODevelopment/Lobby?utm_source=share-link&utm_medium=link&utm_campaign=share-link) +| **Documentation** | **Persistent DOI** | +|:-----------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------:| +| [![](https://img.shields.io/badge/docs-stable-blue.svg)](https://PSORLab.github.io/EAGO.jl/stable) [![](https://img.shields.io/badge/docs-latest-blue.svg)](https://PSORLab.github.io/EAGO.jl/dev) | [![DOI](https://zenodo.org/badge/108954118.svg)](https://zenodo.org/badge/latestdoi/108954118) | + +| **Chat** | +|:------------:| +| [![Join the chat at 
https://gitter.im/EAGODevelopment](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/EAGODevelopment/Lobby?utm_source=share-link&utm_medium=link&utm_campaign=share-link)

## EAGO's Optimizer Capabilities

EAGO is a deterministic global optimizer designed to address a wide variety of optimization problems, emphasizing nonlinear programs (NLPs), by propagating McCormick relaxations along the factorable structure of each expression in the NLP. Most operators supported by modern automatic differentiation (AD) packages (e.g., `+`, `sin`, `cosh`) are supported by EAGO, and a number of utilities for sanitizing native Julia code and generating relaxations of a wide variety of user-defined functions are included. Currently, EAGO supports problems that have a priori variable bounds defined and have differentiable constraints. That is, problems should be specified in the generic form below:

-

- +$$ +\begin{align*} +f^{\*} = & \min_{\mathbf y \in Y \subset \mathbb R^{n_{y}}} f(\mathbf y) \\ +{\rm s.t.} \\;\\; & \mathbf h(\mathbf y) = \mathbf 0 \\ +& \mathbf g(\mathbf y) \leq \mathbf 0 \\ +& Y = [\mathbf y^{\mathbf L}, \mathbf y^{\mathbf U}] \in \mathbb{IR}^{n} \\ +& \qquad \mathbf y^{\mathbf L}, \mathbf y^{\mathbf U} \in \mathbb R^{n} +\end{align*} +$$ ## EAGO's Relaxations -For each nonlinear term, EAGO makes use of factorable representations to construct bounds and relaxations. In the case of `f(x) = x(x-5)sin(x)`, a list is generated and rules for constructing McCormick relaxations are used to formulate relaxations in the original decision space, *X* [[1](#references)]: +For each nonlinear term, EAGO makes use of factorable representations to construct bounds and relaxations. In the case of $f(x) = x (x - 5) \sin(x)$, a list is generated and rules for constructing McCormick relaxations are used to formulate relaxations in the original decision space, $X$ [[1](#references)]: -- *v*1 = *x* -- *v*2 = *v*1 - 5 -- *v*3 = sin(*v*1) -- *v*4 = *v*1*v*2 -- *v*5 = *v*4*v*3 -- f(x) = *v*5 +- $v_{1} = x$ +- $v_{2} = v_{1} - 5$ +- $v_{3} = \sin(v_{1})$ +- $v_{4} = v_{1} v_{2}$ +- $v_{5} = v_{4} v_{3}$ +- $f(x) = v_{5}$

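+To make this list concrete, here is a minimal sketch of evaluating these relaxations directly with the McCormick.jl library that EAGO builds on. The sample point `2.0` and the box `X = [1.0, 4.0]` are illustrative choices, not values taken from this README:
+
+```julia
+using McCormick
+
+# McCormick object for x at the point 2.0 over X = [1.0, 4.0];
+# the trailing 1 tags x as the first of one decision variable.
+x = MC{1,NS}(2.0, Interval(1.0, 4.0), 1)
+
+# Overloaded arithmetic propagates relaxations through the list v1, ..., v5
+f = x*(x - 5.0)*sin(x)
+
+f.cv, f.cc # convex and concave relaxation values at the point 2.0
+f.Intv     # interval bounds on f over X
+```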
@@ -39,8 +50,20 @@ Either these original relaxations, differentiable McCormick relaxations [[2](#re EAGO makes use of the JuMP algebraic modeling language to improve the user's experience in setting up optimization models. Consider the familiar "process" problem instance [[5](#references)]: -

-
+$$
+\begin{align*}
+& \max_{\mathbf x \in X} 0.063 x_{4} x_{7} - 5.04 x_{1} - 0.035 x_{2} - 10 x_{3} - 3.36 x_{5} \\
+{\rm s.t.} \\;\\; & -x_{1} (1.12 + 0.13167 x_{8} - 0.00667 x_{8}^{2}) + x_{4} = 0 \\
+& -0.001 x_{4} x_{9} x_{6} / (98 - x_{6}) + x_{3} = 0 \\
+& -(1.098 x_{8} - 0.038 x_{8}^{2}) - 0.325 x_{6} + x_{7} = 57.425 \\
+& -(x_{2} + x_{5}) / x_{1} + x_{8} = 0 \\
+& -x_{1} + 1.22 x_{4} - x_{5} = 0 \\
+& x_{9} + 0.222 x_{10} - 35.82 = 0 \\
+& -3.0 x_{7} + x_{10} + 133.0 = 0 \\
+& X = [10, 2000] \times [0, 16000] \times [0, 120] \times [0, 5000] \\
+& \qquad \times [0, 2000] \times [85, 93] \times [90, 95] \times [3, 12] \times [1.2, 4] \times [145, 162]
+\end{align*}
+$$

This model can be formulated using JuMP code as:

@@ -56,15 +79,15 @@ xU = [2000.0; 16000.0; 120.0; 5000.0; 2000.0; 93.0; 95.0; 12.0; 4.0; 162.0]
@variable(m, xL[i] <= x[i=1:10] <= xU[i])

# Define nonlinear constraints
-@NLconstraint(m, e1, -x[1]*(1.12+0.13167*x[8]-0.00667* (x[8])^2)+x[4] == 0.0)
-@NLconstraint(m, e3, -0.001*x[4]*x[9]*x[6]/(98-x[6])+x[3] == 0.0)
-@NLconstraint(m, e4, -(1.098*x[8]-0.038* (x[8])^2)-0.325*x[6]+x[7] == 57.425)
-@NLconstraint(m, e5, -(x[2]+x[5])/x[1]+x[8] == 0.0)
+@NLconstraint(m, e1, -x[1]*(1.12 + 0.13167*x[8] - 0.00667*(x[8])^2) + x[4] == 0.0)
+@NLconstraint(m, e3, -0.001*x[4]*x[9]*x[6]/(98.0 - x[6]) + x[3] == 0.0)
+@NLconstraint(m, e4, -(1.098*x[8] - 0.038*(x[8])^2) - 0.325*x[6] + x[7] == 57.425)
+@NLconstraint(m, e5, -(x[2] + x[5])/x[1] + x[8] == 0.0)

# Define linear constraints
-@constraint(m, e2, -x[1]+1.22*x[4]-x[5] == 0.0)
-@constraint(m, e6, x[9]+0.222*x[10] == 35.82)
-@constraint(m, e7, -3*x[7]+x[10] == -133.0)
+@constraint(m, e2, -x[1] + 1.22*x[4] - x[5] == 0.0)
+@constraint(m, e6, x[9] + 0.222*x[10] == 35.82)
+@constraint(m, e7, -3.0*x[7] + x[10] == -133.0)

# Define nonlinear objective
@NLobjective(m, Max, 0.063*x[4]*x[7] - 5.04*x[1] - 0.035*x[2] - 10*x[3] - 3.36*x[5])
@@ -72,22 +95,6 @@ xU = [2000.0; 16000.0; 120.0; 5000.0; 2000.0; 93.0; 95.0; 12.0; 4.0; 162.0]

# Solve the optimization problem
JuMP.optimize!(m)
```
-

Special handling has been included for linear/quadratic functions defined using the `@constraint` macro in JuMP, and these can generally be expected to perform better than specifying quadratic or linear terms with the `@NLconstraint` macro.

@@ -115,12 +122,11 @@ The EAGO package has numerous features: a solver accessible from JuMP/MathOptInt
- Models, nodes, expressions, constraints, and operators are now compatible with MOI.
- Added logic and comparison operators to `EAGO.OperatorRegistry`.

-For a full list of EAGO release news, click [here](https://psorlab.github.io/EAGO.jl/stable/news/).
+For a full list of EAGO release news, click [here](https://github.com/PSORLab/EAGO.jl/blob/master/NEWS.md).

## Installing EAGO

-EAGO is a registered Julia package and it can be installed using the Julia package manager.
-From the Julia REPL, type `]` to enter the Package manager (Pkg) mode and run the following command:
+EAGO is a registered Julia package and it can be installed using the Julia package manager. From the Julia REPL, type `]` to enter the Package manager (Pkg) mode and run the following command:

```jldoctest
pkg> add EAGO
@@ -132,8 +138,7 @@ Currently, EAGO is compatible with version 1.12 of JuMP. This allows a replicati
pkg> add JuMP
```

-EAGO v0.8.1 is the current tagged version and requires Julia 1.6+ for full functionality (however Julia 1.0+ versions support partial functionality).
Use with version 1.8 is recommended as the majority of in-house testing has occurred using this version of Julia. The user is directed to the [High-Performance Configuration](https://psorlab.github.io/EAGO.jl/optimizer/high_performance/) for instructions on how to install a high performance version of EAGO (rather than the basic entirely open-source version).
-If any issues are encountered when loading EAGO (or when using it), please submit an issue using the GitHub [issue tracker](https://github.com/PSORLab/EAGO.jl/issues).
+EAGO v0.8.1 is the current tagged version and requires Julia 1.6+ for full functionality (however Julia 1.0+ versions support partial functionality). Use with version 1.8 is recommended as the majority of in-house testing has occurred using this version of Julia. The user is directed to the [High-Performance Configuration](https://psorlab.github.io/EAGO.jl/optimizer/high_performance/) for instructions on how to install a high performance version of EAGO (rather than the basic entirely open-source version). If any issues are encountered when loading EAGO (or when using it), please submit an issue using the GitHub [issue tracker](https://github.com/PSORLab/EAGO.jl/issues).

## Bug Reporting, Support, and Feature Requests

@@ -155,15 +160,17 @@ Please report bugs or feature requests by opening an issue using the GitHub [iss

## Citing EAGO

Please cite the following paper when using EAGO. In plain text form this is:
+
```
-M. E. Wilhelm & M. D. Stuber (2022) EAGO.jl: easy advanced global optimization in Julia,
-Optimization Methods and Software, 37:2, 425-450, DOI: 10.1080/10556788.2020.1786566
+Wilhelm, M.E. and Stuber, M.D. EAGO.jl: easy advanced global optimization in Julia.
+Optimization Methods and Software. 37(2): 425-450 (2022). DOI: 10.1080/10556788.2020.1786566
```

A BibTeX entry is given below and a corresponding .bib file is given in citation.bib.
+
```bibtex
@article{doi:10.1080/10556788.2020.1786566,
-author = {M. E. Wilhelm and M. D. Stuber},
+author = {Wilhelm, M.E. and Stuber, M.D.},
title = {EAGO.jl: easy advanced global optimization in Julia},
journal = {Optimization Methods and Software},
volume = {37},
@@ -179,15 +186,14 @@ eprint = {https://doi.org/10.1080/10556788.2020.1786566}

## Related Packages

-- [ValidatedNumerics.jl](https://github.com/JuliaIntervals/ValidatedNumerics.jl): A Julia library for validated interval calculations, including basic interval extensions, constraint programming, and interval contractors
-- [MAiNGO](http://swmath.org/software/27878): An open-source mixed-integer nonlinear programming package in C++ that utilizes MC++ for relaxations.
-- [MC++](https://omega-icl.github.io/mcpp/): A mature McCormick relaxation package in C++ that also includes McCormick-Taylor, Chebyshev
-Polyhedral and Ellipsoidal arithmetics.
+- [ValidatedNumerics.jl](https://github.com/JuliaIntervals/ValidatedNumerics.jl): A Julia library for validated interval calculations, including basic interval extensions, constraint programming, and interval contractors
+- [MAiNGO](https://avt-svt.pages.rwth-aachen.de/public/maingo/): An open-source mixed-integer nonlinear programming package in C++ that utilizes MC++ for relaxations
+- [MC++](https://github.com/coin-or/MCpp): A mature McCormick relaxation package in C++ that also includes McCormick-Taylor, Chebyshev, Polyhedral, and Ellipsoidal arithmetics

## References

-1. A. Mitsos, B. Chachuat, and P. I. Barton.
**McCormick-based relaxations of algorithms.** *SIAM Journal on Optimization*, 20(2):573–601, 2009.
-2. K.A. Khan, HAJ Watson, P.I. Barton. **Differentiable McCormick relaxations.** *Journal of Global Optimization*, 67(4):687-729 (2017).
-3. Stuber, M.D., Scott, J.K., Barton, P.I.: **Convex and concave relaxations of implicit functions.** *Optim. Methods Softw.* 30(3), 424–460 (2015)
-4. A., Wechsung JK Scott, HAJ Watson, and PI Barton. **Reverse propagation of McCormick relaxations.** *Journal of Global Optimization* 63(1):1-36 (2015).
-5. Bracken, Jerome and McCormick, Garth P. **Selected Applications of Nonlinear Programming**, John Wiley and Sons, New York, 1968.
+1. Mitsos, A., Chachuat, B., and Barton, P.I. **McCormick-based relaxations of algorithms.** *SIAM Journal on Optimization*. 20(2): 573–601 (2009).
+2. Khan, K.A., Watson, H.A.J., and Barton, P.I. **Differentiable McCormick relaxations.** *Journal of Global Optimization*. 67(4): 687-729 (2017).
+3. Stuber, M.D., Scott, J.K., and Barton, P.I. **Convex and concave relaxations of implicit functions.** *Optimization Methods and Software*. 30(3): 424–460 (2015).
+4. Wechsung, A., Scott, J.K., Watson, H.A.J., and Barton, P.I. **Reverse propagation of McCormick relaxations.** *Journal of Global Optimization*. 63(1): 1-36 (2015).
+5. Bracken, J., and McCormick, G.P. *Selected Applications of Nonlinear Programming.* John Wiley and Sons, New York (1968).
\ No newline at end of file
diff --git a/citation.bib b/citation.bib
index 24275398..c33e004d 100644
--- a/citation.bib
+++ b/citation.bib
@@ -1,12 +1,5 @@
-
-
-
-
-
-
-
 @article{doi:10.1080/10556788.2020.1786566,
-author = {M. E. Wilhelm and M. D. Stuber},
+author = {Wilhelm, M.E. and Stuber, M.D.},
 title = {EAGO.jl: easy advanced global optimization in Julia},
 journal = {Optimization Methods and Software},
 volume = {37},
@@ -15,22 +8,6 @@ @article{doi:10.1080/10556788.2020.1786566
 year = {2022},
 publisher = {Taylor & Francis},
 doi = {10.1080/10556788.2020.1786566},
-
-URL = {
-
-https://doi.org/10.1080/10556788.2020.1786566
-
-
-
-},
-eprint = {
-
-https://doi.org/10.1080/10556788.2020.1786566
-
-
-
-}
-
-}
-
-
+URL = {https://doi.org/10.1080/10556788.2020.1786566},
+eprint = {https://doi.org/10.1080/10556788.2020.1786566}
+}
\ No newline at end of file
diff --git a/docs/Project.toml b/docs/Project.toml
index 26be57ee..e325b26a 100644
--- a/docs/Project.toml
+++ b/docs/Project.toml
@@ -9,5 +9,5 @@ McCormick = "53c679d3-6890-5091-8386-c291e8c8aaa1"
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"

[compat]
-Documenter = "0.27.9, 0.28"
JuMP = "1.12"
+Documenter = "~1"
diff --git a/docs/make.jl b/docs/make.jl
index 98aeddb4..7daccc80 100644
--- a/docs/make.jl
+++ b/docs/make.jl
@@ -7,9 +7,9 @@ import McCormick: final_cut, mid3v, precond_and_contract!, AbstractMCCallback, p
dline_seg, correct_exp!, cut, mid_grad, preconditioner_storage, newton, secant,
MCCallback, contract!, affine_exp!
-import EAGO: ExtensionType, Evaluator, variable_dbbt!, set_current_node!, +import EAGO: ExtensionType, Evaluator, variable_dbbt!, VariableInfo, Log, aggressive_filtering!, - bool_indx_diff, trivial_filtering!, SIPResult, SIPProblem, + bool_indx_diff!, trivial_filtering!, SIPResult, SIPProblem, GlobalOptimizer, InputProblem, ParsedProblem, is_integer_feasible_relaxed, local_problem_status, default_upper_heuristic, label_branch_variables!, label_fixed_variables!, AbstractDirectedGraph, AbstractCache, @@ -19,46 +19,48 @@ import EAGO: ExtensionType, Evaluator, variable_dbbt!, set_current_node!, import EAGO.Script: dag_flattening!, register_substitution!, Template_Graph, Template_Node, scrub, scrub!, flatten_expression! -const MOI = MathOptInterface - @info "Making documentation..." makedocs(modules = [EAGO, McCormick], doctest = false, + warnonly = [:docs_block, :missing_docs], format = Documenter.HTML( prettyurls = get(ENV, "CI", nothing) == "true", canonical = "https://PSORLab.github.io/EAGO.jl/stable/", collapselevel = 1, + assets = ["assets/favicon.ico"] ), authors = "Matthew Wilhelm, Robert Gottlieb, Dimitri Alston, and Matthew Stuber", sitename = "EAGO.jl", pages = Any["Introduction" => "index.md", - "Quick Start" => Any["quick_start/qs_landing.md", - "quick_start/guidelines.md", - "quick_start/explicit_ann.md", - "quick_start/ex2.md", - "quick_start/quasiconvex.md", - "quick_start/alpha_bb.md" - ], - "McCormick Operator Library" => Any["mccormick/overview.md", - "mccormick/usage.md", - "mccormick/operators.md", - "mccormick/type.md", - "mccormick/implicit.md" - ], - "Optimizer" => Any["optimizer/optimizer.md", - "optimizer/bnb_back.md", - "optimizer/relax_back.md", - "optimizer/domain_reduction.md", - "optimizer/high_performance.md", - "optimizer/udf_utilities.md" - ], - "Semi-Infinite Programming" => "semiinfinite/semiinfinite.md", - "Contributing to EAGO" => Any["dev/contributing.md", - "dev/future.md" + "Manual" => Any["Optimizer" => Any["optimizer/optimizer.md", + "optimizer/bnb_back.md", + "optimizer/relax_back.md", + "optimizer/domain_reduction.md", + "optimizer/high_performance.md", + "optimizer/udf_utilities.md" ], + "McCormick.jl" => Any["mccormick/overview.md", + "mccormick/usage.md", + "mccormick/operators.md", + "mccormick/type.md", + "mccormick/implicit.md" + ], + "Semi-Infinite Programming" => "semiinfinite/semiinfinite.md", + ], + "Customization" => "custom_guidelines.md", + "Examples" => Any["examples/explicit_ann.md", + "examples/interval_bb.md", + "examples/quasiconvex.md", + "examples/alpha_bb.md" + ], + "API Reference" => Any["dev/api_types.md", + "dev/api_functions.md" + ], + "Contributing" => "dev/contributing.md", + "News" => "news.md", "Citing EAGO" => "cite.md", - "News" => "news.md", - "References" => "ref.md"] + "References" => "ref.md" + ] ) @info "Deploying documentation..." diff --git a/docs/src/assets/favicon.ico b/docs/src/assets/favicon.ico index 68821591..dcdda6ff 100644 Binary files a/docs/src/assets/favicon.ico and b/docs/src/assets/favicon.ico differ diff --git a/docs/src/cite.md b/docs/src/cite.md index 26ebc0ca..a3fb1a44 100644 --- a/docs/src/cite.md +++ b/docs/src/cite.md @@ -4,6 +4,6 @@ Please cite the following paper when using EAGO.jl: ``` -M. E. Wilhelm & M. D. Stuber (2022) EAGO.jl: easy advanced global optimization in Julia, -Optimization Methods and Software, 37:2, 425-450, DOI: 10.1080/10556788.2020.1786566 +Wilhelm, M.E. and Stuber, M.D. EAGO.jl: easy advanced global optimization in Julia. +Optimization Methods and Software. 
37(2): 425-450 (2022). DOI: 10.1080/10556788.2020.1786566
```
diff --git a/docs/src/custom_guidelines.md b/docs/src/custom_guidelines.md
new file mode 100644
index 00000000..a4c01959
--- /dev/null
+++ b/docs/src/custom_guidelines.md
@@ -0,0 +1,118 @@
+# Customization Guidelines
+
+This section contains general guidelines on how the functionality of EAGO can be extended for specific use cases. Many functions in EAGO are extensible, so examples are not provided for every possible case, but some important functions are covered. If there is a use case you do not see provided here, but would like to see, please contact [Robert Gottlieb](https://psor.uconn.edu/person/robert-gottlieb/).
+
+## 1) Creating an Extension
+
+Extensibility in EAGO is based on the [`ExtensionType`](@ref). To create customized functions, the recommended method is to create a new structure, as follows:
+
+```julia
+using EAGO
+
+struct MyNewStruct <: EAGO.ExtensionType end
+```
+
+To let EAGO know that you would like to use this extension (and any functions you overload), declare your new type in the `subsolver_block` field of the [`Optimizer`](@ref) when you create the JuMP model, as follows:
+
+```julia
+using JuMP
+
+factory = () -> EAGO.Optimizer(SubSolvers(; t = MyNewStruct() ))
+model = Model(factory)
+```
+
+The key point to note here is that the new structure is set to [`SubSolvers.t`](@ref EAGO.SubSolvers), which is the field that holds any user-defined extension type. Now, when EAGO calls any of its functions, it will check to see if a custom function for this new extension has been created. If so, it will use the new one; if not, it will use the default version of that function.
+
+## 2) Preprocessing
+
+In EAGO's branch-and-bound routine, preprocessing is performed prior to solving the lower and upper problems. By default, linear and quadratic contractor methods are performed, followed by interval constraint propagation, then optimization-based bounds tightening. An important outcome of EAGO's default preprocessing is that the `_preprocess_feasibility` field of the [`GlobalOptimizer`](@ref) is set to `true`, unless the subproblem at the current node has been proven infeasible. Therefore, to bypass preprocessing for your own problem, you must, at a minimum, set this field to `true`. For example:
+
+```julia
+import EAGO: preprocess!
+function EAGO.preprocess!(t::MyNewStruct, x::EAGO.GlobalOptimizer)
+    x._preprocess_feasibility = true
+end
+```
+
+The user-defined preprocessing step can be as simple or complex as desired, but if `_preprocess_feasibility` is not set to `true`, EAGO will assume each node is infeasible.
+
+## 3) Lower Problem
+
+By default, EAGO applies Kelley's cutting-plane algorithm [[1](#References)] to solve the lower problem. This can be overloaded using the same syntax as for the other functions. The necessary changes to the [`GlobalOptimizer`](@ref) within the lower problem are updates to the `_lower_objective_value` and `_lower_feasibility` fields. If the `_lower_objective_value` field is not changed, branch-and-bound will not update the lower bound. If `_lower_feasibility` is not set to `true`, the node will be discarded as infeasible. A minimum functional (though not useful) lower problem extension is as follows:
+
+```julia
+import EAGO: lower_problem!
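+# Defining a method of lower_problem! that dispatches on MyNewStruct replaces
+# EAGO's default lower-bounding routine whenever this extension type is stored
+# in SubSolvers.t.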
+function EAGO.lower_problem!(t::MyNewStruct, x::EAGO.GlobalOptimizer)
+    x._lower_objective_value = -Inf
+    x._lower_feasibility = true
+end
+```
+
+Any arbitrarily complex lower problem can be substituted here in place of EAGO's default method, as long as these fields are updated. Note also that, although there is a separate upper problem function, if the lower problem is being replaced by an algorithm that also calculates an upper objective value, the necessary fields to update in [`upper_problem!`](@ref EAGO.upper_problem!) can simply be updated here, and the [`upper_problem!`](@ref EAGO.upper_problem!) can be overloaded by a function that does `nothing`.
+
+## 4) Upper Problem
+
+By default, the upper-bounding problem is run on every node up to depth `upper_bounding_depth`, and is triggered with a probability of `0.5^(depth - upper_bounding_depth)` afterwards for continuous problems. For integer problems, this approach is used in addition to running on every node up to depth `upper_bounding_depth + cont_depth`, with another trigger of probability `0.5^(depth - upper_bounding_depth - cont_depth)`. The `upper_bounding_depth` and `cont_depth` are fields of [`EAGOParameters`](@ref) and can be changed from their default values when the JuMP model is created, or at any time afterwards. If any of these trigger conditions are met, the default EAGO upper problem runs a local NLP solve.
+
+The important fields to update in the [`upper_problem!`](@ref EAGO.upper_problem!) are the `_upper_objective_value` and `_upper_feasibility` fields. If `_upper_objective_value` is not updated, the upper bound in the branch-and-bound algorithm will not update and the problem will not converge. If `_upper_feasibility` is not set to `true`, then any changes made to `_upper_objective_value` will not be updated for each node and the problem will not converge. A minimum functional (though not useful) upper problem extension is as follows:
+
+```julia
+import EAGO: upper_problem!
+function EAGO.upper_problem!(t::MyNewStruct, x::EAGO.GlobalOptimizer)
+    x._upper_objective_value = Inf
+    x._upper_feasibility = true
+end
+```
+
+This example upper problem will set the upper bound on the objective value to `Inf` for each node. Given that no useful information is provided here, we could also have set `_upper_feasibility` to `false` (or, equivalently, removed the line where we set it to `true`), and the global upper bound would never be updated.
+
+Note that if the `_upper_objective_value` is changed elsewhere, such as in the definition for the [`lower_problem!`](@ref EAGO.lower_problem!), the `_upper_feasibility` flag must be set to `true`. If this is not done, the change to the `_upper_objective_value` will be discarded.
+
+## 5) Convergence Check
+
+By default, EAGO checks to see if the lower and upper bounds have converged to within either the absolute or relative tolerance. This method of checking convergence may not be desired if, for example, only the absolute tolerance is relevant, and you want to ensure that the program does not end prematurely due to a relative tolerance limit being reached. The fields to check are `_lower_objective_value` and `_upper_objective_value` for the best-identified global lower and upper bounds, respectively. This function should return `true` if the lower and upper bounds have converged, and `false` otherwise.
An example of how to specify that the convergence check only use the absolute tolerance is as follows:
+
+```julia
+import EAGO: convergence_check
+function EAGO.convergence_check(t::MyNewStruct, x::EAGO.GlobalOptimizer)
+    gap = (x._upper_objective_value - x._lower_objective_value)
+    return (gap <= x._parameters.absolute_tolerance)
+end
+```
+
+## 6) Postprocessing
+
+Postprocessing is the final step before a node is branched on. By default, EAGO performs duality-based bounds tightening [[2](#References)] up to an iteration limit set by `dbbt_depth` in the [`EAGOParameters`](@ref) field. The important field to update in postprocessing is `_postprocess_feasibility`, which must be set to `true` for EAGO to branch on any given node. The minimum working postprocessing function is therefore:
+
+```julia
+import EAGO: postprocess!
+function EAGO.postprocess!(t::MyNewStruct, x::EAGO.GlobalOptimizer)
+    x._postprocess_feasibility = true
+end
+```
+
+If `_postprocess_feasibility` is not set to `true`, no nodes will be branched on.
+
+## 7) Termination Check
+
+This is the check that occurs on each iteration of the branch-and-bound algorithm that determines whether the algorithm continues or not. By default, several conditions are checked, such as the satisfaction of absolute or relative tolerances, solution infeasibility, or other specified limits. This function returns `true` if any of the stopping conditions have been met and branch-and-bound should stop, and `false` otherwise. The important fields to update are `_end_state`, which takes values of `EAGO.GlobalEndState`, `_termination_status_code`, which takes values of [`MOI.TerminationStatusCode`](https://jump.dev/MathOptInterface.jl/stable/reference/models/#MathOptInterface.TerminationStatusCode), and `_result_status_code`, which takes values of [`MOI.ResultStatusCode`](https://jump.dev/MathOptInterface.jl/stable/reference/models/#MathOptInterface.ResultStatusCode). Combined, these fields provide information about the branch-and-bound completion status and result feasibility.
+
+As an example, suppose we have updated the [`convergence_check`](@ref EAGO.convergence_check) function to only check for absolute tolerance, and based on our knowledge of the problem, this is the only condition we care about, and we know the solution will be optimal. We could then overload the [`termination_check`](@ref EAGO.termination_check) function as follows:
+
+```julia
+import EAGO: termination_check
+import MathOptInterface
+function EAGO.termination_check(t::MyNewStruct, x::EAGO.GlobalOptimizer)
+    flag = EAGO.convergence_check(t, x)
+    if flag
+        x._end_state = EAGO.GS_OPTIMAL
+        x._termination_status_code = MathOptInterface.OPTIMAL
+        x._result_status_code = MathOptInterface.FEASIBLE_POINT
+    end
+    return flag
+end
+```
+
+## References
+
+1. Kelley, J. E. “The Cutting-Plane Method for Solving Convex Programs.” *Journal of the Society for Industrial and Applied Mathematics*, vol. 8, no. 4, pp. 703–12 (1960).
+2. Tawarmalani, M., Sahinidis, N. V. "Global optimization of mixed-integer nonlinear programs: A theoretical and computational study." *Math. Program., Ser. A*, 99, pp. 563-591 (2004).
diff --git a/docs/src/dev/api_functions.md b/docs/src/dev/api_functions.md
new file mode 100644
index 00000000..aca52cd6
--- /dev/null
+++ b/docs/src/dev/api_functions.md
@@ -0,0 +1,6 @@
+# Functions
+
+```@autodocs; canonical=false
+Modules = [EAGO]
+Order = [:function]
+```
\ No newline at end of file
diff --git a/docs/src/dev/api_types.md b/docs/src/dev/api_types.md
new file mode 100644
index 00000000..0709be1c
--- /dev/null
+++ b/docs/src/dev/api_types.md
@@ -0,0 +1,6 @@
+# Types
+
+```@autodocs; canonical=false
+Modules = [EAGO]
+Order = [:type]
+```
\ No newline at end of file
diff --git a/docs/src/dev/contributing.md b/docs/src/dev/contributing.md
index 91ab9a2c..fa5fc196 100644
--- a/docs/src/dev/contributing.md
+++ b/docs/src/dev/contributing.md
@@ -1,10 +1,8 @@
-# How to Contribute
+# How to Contribute to EAGO

-We're always happy to welcome work with additional collaborators and contributors. One
-of the easy ways for newcomers to contribute is by adding additional relaxations.
+We're always happy to welcome new collaborators and contributors. One of the easy ways for newcomers to contribute is by adding additional McCormick relaxations.

-If you have any requests for additional functionality, bug fixes, or comments,
-please feel free to open a new issue using the GitHub [issue tracker](https://github.com/PSORLab/EAGO.jl/issues) or reach out to us.
+If you have any requests for additional functionality, bug fixes, or comments, please feel free to open a new issue using the GitHub [issue tracker](https://github.com/PSORLab/EAGO.jl/issues) or reach out to us.

## Contact Us

@@ -12,4 +10,4 @@ Please direct technical issues and/or bugs to the active developers:

- [Robert Gottlieb](https://psor.uconn.edu/person/robert-gottlieb/)
- [Dimitri Alston](https://psor.uconn.edu/person/dimitri-alston/)
-All other questions should be directed to [Prof. Stuber](https://chemical-biomolecular.engr.uconn.edu/person/matthew-stuber/).
+All other questions should be directed to [Prof. Stuber](https://chemical-biomolecular.engr.uconn.edu/people/faculty/stuber-matthew/).
diff --git a/docs/src/dev/future.md b/docs/src/dev/future.md
index 9f879d21..8a4a035a 100644
--- a/docs/src/dev/future.md
+++ b/docs/src/dev/future.md
@@ -12,7 +12,7 @@

## Other Things on the Wishlist (But Not Actively Being Worked On)

* Implement the interval constraint propagation scheme presented in Vu 2008 for improved convergence.
-* A parametric bisection routine will be updated that can divide the `(X,P)` space into a series of boxes that all contain unique branches of the implicit function `p->y(p)`.
+* A parametric bisection routine will be updated that can divide the ``(X, P)`` space into a series of boxes that all contain unique branches of the implicit function ``p -> y(p)``.
* Provide a better interface to the nonconvex semi-infinite programming solvers (JuMPeR extension?).
* Add additional McCormick relaxations.
* Add handling for domain reduction of special expression forms.
diff --git a/docs/src/quick_start/alpha_bb.md b/docs/src/examples/alpha_bb.md
similarity index 65%
rename from docs/src/quick_start/alpha_bb.md
rename to docs/src/examples/alpha_bb.md
index 882002dd..e586170b 100644
--- a/docs/src/quick_start/alpha_bb.md
+++ b/docs/src/examples/alpha_bb.md
@@ -8,9 +8,9 @@ In this example, we will demonstrate the use of a user-defined lower-bounding pr

```math
\begin{aligned}
-&\min_{\mathbf x\in\mathbb R^2}\frac{1}{2}\mathbf x^{\rm T}\mathbf Q_f\mathbf x+\mathbf c_f^{\rm T}\mathbf x\\
-{\rm s.t.}\;\;&g_1(\mathbf x)=\frac{1}{2}\mathbf x^{\rm T}\mathbf Q_{g_1}\mathbf x+\mathbf c_{g_1}^{\rm T}\mathbf x\le 0\\
-&g_2(\mathbf x)=\frac{1}{2}\mathbf x^{\rm T}\mathbf Q_{g_2}\mathbf x+\mathbf c_{g_2}^{\rm T}\mathbf x\le 0\\
+& \min_{\mathbf x \in \mathbb R^{2}} \frac{1}{2} \mathbf x^{\rm T} \mathbf Q_{f} \mathbf x + \mathbf c_{f}^{\rm T} \mathbf x \\
+{\rm s.t.} \; \; & g_{1}(\mathbf x) = \frac{1}{2} \mathbf x^{\rm T} \mathbf Q_{g_{1}} \mathbf x + \mathbf c_{g_{1}}^{\rm T} \mathbf x \leq 0 \\
+& g_{2}(\mathbf x) = \frac{1}{2} \mathbf x^{\rm T} \mathbf Q_{g_{2}} \mathbf x + \mathbf c_{g_{2}}^{\rm T} \mathbf x \leq 0 \\
\end{aligned}
```

@@ -27,26 +27,26 @@ For convenience, we'll define the following function that returns all the proble

```julia
function QCQP_setup()
-    Qf = [3. 3/2.;3/2. -5.]
-    cf = [3.;2.]
+    Qf = [3.0 3/2; 3/2 -5.0]
+    cf = [3.0; 2.0]

-    Qg1 = [-2. 5.;5. -2.]
-    cg1 = [1.;3.]
+    Qg1 = [-2.0 5.0; 5.0 -2.0]
+    cg1 = [1.0; 3.0]

-    Qg2 = [-6. 3.;3. 2.]
-    cg2 = [2.;1.]
+    Qg2 = [-6.0 3.0; 3.0 2.0]
+    cg2 = [2.0; 1.0]

-    return Qf,cf,Qg1,cg1,Qg2,cg2
+    return Qf, cf, Qg1, cg1, Qg2, cg2
end
```

-The next function we'll define will take as input data for a particular quadratic function and the interval bounds on the decision variables, and construct an ``\alpha``BB convex relaxation of that function. Since we're solving a QCQP, we'll use the `eigvals` function to directly compute the eigenvalues of the input ``\mathbf Q_i`` matrix.
+The next function we'll define takes as input the data for a particular quadratic function and the interval bounds on the decision variables, and constructs an ``\alpha``BB convex relaxation of that function. Since we're solving a QCQP, we'll use the [`eigvals`](https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/#LinearAlgebra.eigvals) function from the LinearAlgebra standard library to directly compute the eigenvalues of the input ``\mathbf Q_i`` matrix.

```julia
function αBB_relax(Q::Matrix{T}, c::Vector{T}, xL::Vector{T}, xU::Vector{T}, x::Real...) where {T<:Float64}
-    α = max(0,-minimum(eigvals(Q))/2)
-    y = [x[1];x[2]]
-    cv = 1/2*y'*Q*y+c'*y+α*(xL-y)'*(xU-y)
+    α = max(0.0, -minimum(eigvals(Q))/2)
+    y = [x[1]; x[2]]
+    cv = 1/2*y'*Q*y + c'*y + α*(xL - y)'*(xU - y)
    return cv
end
```

@@ -67,30 +67,30 @@ function EAGO.lower_problem!(t::αBB_Convex, opt::GlobalOptimizer)
    xL = n.lower_variable_bounds[1:2]
    xU = n.upper_variable_bounds[1:2]
    # Get the problem data
-    Qf,cf,Qg1,cg1,Qg2,cg2=QCQP_setup()
+    Qf, cf, Qg1, cg1, Qg2, cg2 = QCQP_setup()

    # Define the JuMP model and declare the solver
    mL = JuMP.Model(JuMP.optimizer_with_attributes(Ipopt.Optimizer,
                        "tol" => 1.0e-6,
                        "print_level" => 0))
-    @variable(mL,xL[i]<=x[i=1:2]<=xU[i])
+    @variable(mL, xL[i] <= x[i=1:2] <= xU[i])

    # Define the function closures for the user-defined relaxations
-    fcv(x...) = αBB_relax(Qf,cf,xL,xU,x...)
-    g1cv(x...) = αBB_relax(Qg1,cg1,xL,xU,x...)
-    g2cv(x...) = αBB_relax(Qg2,cg2,xL,xU,x...)
+    fcv(x...) = αBB_relax(Qf, cf, xL, xU, x...)
+    g1cv(x...) = αBB_relax(Qg1, cg1, xL, xU, x...)
+    g2cv(x...) = αBB_relax(Qg2, cg2, xL, xU, x...)

    # Register the user-defined functions
-    # Note: if the gradients and Hessians are directly available, they could
+    # Note: If the gradients and Hessians are directly available, they could
    # be passed as arguments to the register function to speed things up.
-    JuMP.register(mL,:fcv,2,fcv,autodiff=true)
-    JuMP.register(mL,:g1cv,2,g1cv,autodiff=true)
-    JuMP.register(mL,:g2cv,2,g2cv,autodiff=true)
+    JuMP.register(mL, :fcv, 2, fcv, autodiff=true)
+    JuMP.register(mL, :g1cv, 2, g1cv, autodiff=true)
+    JuMP.register(mL, :g2cv, 2, g2cv, autodiff=true)

    # Declare the objective function and constraints
-    @NLobjective(mL,Min,fcv(x[1],x[2]))
-    @NLconstraint(mL,g1cv(x[1],x[2])<=0.)
-    @NLconstraint(mL,g2cv(x[1],x[2])<=0.)
+    @NLobjective(mL, Min, fcv(x[1], x[2]))
+    @NLconstraint(mL, g1cv(x[1], x[2]) <= 0.0)
+    @NLconstraint(mL, g2cv(x[1], x[2]) <= 0.0)

    # Solve the relaxed problem
    JuMP.optimize!(mL)

@@ -117,8 +116,7 @@ end

!!! note

-    By default, EAGO solves the epigraph reformulation of your original problem, which increases the original problem dimensionality by +1 with the introduction of an auxiliary variable. When defining custom routines (such as the lower-bounding problem here) that are intended to work nicely with default EAGO routines (such as preprocessing), the user must account for the *new* dimensionality of the problem. In the code above, we wish to access the information of the specific B&B node and define an optimization problem based on that information. However, in this example, the node has information for 3 variables (the original 2 plus 1 for the auxiliary variable appended to the original variable vector) as ``(x_1,x_2,\eta)``. The lower-bounding problem was defined to optimize the relaxed problem with respect to the original 2 decision variables. When storing the results of this subproblem to the current B&B node, it is important to take care to store the information at the appropriate indices and not inadvertently redefine the problem dimensionality (i.e., by simply storing the optimization solution as the `lower_solution` of the current node). For problems that are defined to only branch on a subset of the original variables, the optimizer has a member `_sol_to_branch_map` that carries the mapping between the indices of the original variables to those of the variables being branched on. See the [Advanced-Use Example 1](@ref) to see how this is done.
-
+    By default, EAGO solves the epigraph reformulation of your original problem, which increases the original problem dimensionality by +1 with the introduction of an auxiliary variable. When defining custom routines (such as the lower-bounding problem here) that are intended to work nicely with default EAGO routines (such as preprocessing), the user must account for the *new* dimensionality of the problem. In the code above, we wish to access the information of the specific B&B node and define an optimization problem based on that information. However, in this example, the node has information for 3 variables (the original 2 plus 1 for the auxiliary variable appended to the original variable vector) as ``(x_{1}, x_{2}, \eta)``. The lower-bounding problem was defined to optimize the relaxed problem with respect to the original 2 decision variables.
When storing the results of this subproblem to the current B&B node, it is important to take care to store the information at the appropriate indices and not inadvertently redefine the problem dimensionality (i.e., by simply storing the optimization solution as the `lower_solution` of the current node). For problems that are defined to only branch on a subset of the original variables, the optimizer has a member `_sol_to_branch_map` that carries the mapping between the indices of the original variables to those of the variables being branched on. Visit our [quasiconvex example](@ref "Advanced-Use Example 1") to see how this is done.

## (Optional) Turn Off Processing Routines

@@ -147,22 +146,22 @@ Now, we'll tell EAGO to use our custom/extended solver, set up the main JuMP model, and solve it with our custom solver.

```julia
-factory = () -> EAGO.Optimizer(SubSolvers(; t=αBB_Convex() ))
+factory = () -> EAGO.Optimizer(SubSolvers(; t = αBB_Convex() ))
m = JuMP.Model(optimizer_with_attributes(factory,
                    "relative_tolerance" => 1e-3,
                    "verbosity" => 1,
                    "output_iterations" => 1,
                    "branch_variable" => Bool[true; true],
                    ))
-Qf,cf,Qg1,cg1,Qg2,cg2=QCQP_setup() # Get QCQP data
-xL = [-3.;-5.] # Lower bounds on x
-xU = [1.; 2] # Upper bounds on x
+Qf, cf, Qg1, cg1, Qg2, cg2 = QCQP_setup() # Get QCQP data
+xL = [-3.0; -5.0] # Lower bounds on x
+xU = [1.0; 2.0] # Upper bounds on x
@variable(m, xL[i] <= x[i=1:2] <= xU[i])

# Define objective and constraints
-@objective(m,Min,1/2*x'*Qf*x+cf'*x)
-@constraint(m,1/2*x'*Qg1*x+cg1'*x<=0.)
-@constraint(m,1/2*x'*Qg2*x+cg2'*x<=0.)
+@objective(m, Min, 1/2*x'*Qf*x + cf'*x)
+@constraint(m, 1/2*x'*Qg1*x + cg1'*x <= 0.0)
+@constraint(m, 1/2*x'*Qg2*x + cg2'*x <= 0.0)

# Solve the problem
@time optimize!(m)
```

## Retrieve Results

-We then recover the objective value, the solution value, and termination status codes using standard JuMP syntax.
+We then recover the objective value, the solution values, and termination status codes using standard JuMP syntax.

```julia
println("x1* = ", JuMP.value(x[1]), " x2* = ",
diff --git a/docs/src/examples/explicit_ann.md b/docs/src/examples/explicit_ann.md
new file mode 100644
index 00000000..a5e7b7b6
--- /dev/null
+++ b/docs/src/examples/explicit_ann.md
@@ -0,0 +1,83 @@
+# Standard-Use Example 1
+
+This example is also provided [here as a Jupyter Notebook](https://github.com/PSORLab/EAGO-notebooks/blob/master/notebooks/nlpopt_explicit_ann.ipynb).
+
+### Solving an ANN to Optimality in EAGO
+
+In [[1](#References), [2](#References)], a surrogate artificial neural network (ANN) model of bioreactor productivity was constructed by fitting results from computationally expensive computational fluid dynamics (CFD) simulations. The authors then optimized this surrogate model to obtain ideal processing conditions. The optimization problem is given by:
+
+```math
+\begin{aligned}
+\max_{\mathbf x \in X} B_{2} + \sum_{r = 1}^{3} W_{2,r} \frac{2}{1 + \exp (-2y_{r} + B_{1,r})} \; \; {\rm where} \; \; y_{r} = \sum_{i = 1}^{8} W_{1,ir} x_{i}
+\end{aligned}
+```
+
+## Input Parameters
+
+In the first block, we input parameter values supplied in the paper for ``W_1``, ``W_2``, ``B_1``, and ``B_2`` into Julia as simple array objects. We also input bounds for the variables, which are used to scale the values obtained from optimization from ``[-1, 1]`` back into the design values.
+
+```julia
+using JuMP, EAGO
+
+# Weights associated with the hidden layer
+W1 = [0.54 -1.97 0.09 -2.14 1.01 -0.58 0.45 0.26;
+      -0.81 -0.74 0.63 -1.60 -0.56 -1.05 1.23 0.93;
+      -0.11 -0.38 -1.19 0.43 1.21 2.78 -0.06 0.40]
+
+# Weights associated with the output layer
+W2 = [-0.91, 0.11, 0.52]
+
+# Bias associated with the hidden layer
+B1 = [-2.698, 0.012, 2.926]
+
+# Bias associated with the output layer
+B2 = -0.46
+
+# Variable bounds (Used to scale variables after optimization)
+xLBD = [0.623, 0.093, 0.259, 6.56, 1114.0, 0.013, 0.127, 0.004]
+xUBD = [5.89, 0.5, 1.0, 90.0, 25000.0, 0.149, 0.889, 0.049]
+```
+
+## Construct the JuMP Model and Optimize
+
+We now formulate the problem using standard JuMP [[3](#References)] syntax and optimize it. Note that we are using the [`@NLexpression`](https://jump.dev/JuMP.jl/stable/api/JuMP/#JuMP.@NLexpression) macro to handle the summation term to keep the code visually simple, but this could be placed directly in the [`@NLobjective`](https://jump.dev/JuMP.jl/stable/api/JuMP/#JuMP.@NLobjective) macro instead.
+
+```julia
+# Model construction
+model = Model(optimizer_with_attributes(EAGO.Optimizer, "absolute_tolerance" => 0.001))
+@variable(model, -1.0 <= x[i=1:8] <= 1.0)
+@NLexpression(model, y[r=1:3], sum(W1[r,i]*x[i] for i in 1:8))
+@NLobjective(model, Max, B2 + sum(W2[r]*(2.0/(1 + exp(-2.0*y[r] + B1[r]))) for r=1:3))

+# Solve the model
+optimize!(model)
+```
+
+## Retrieve Results
+
+We then recover the objective value, the solution values, and termination status codes using standard JuMP syntax. The optimal value and solution values are then rescaled using the variable bounds to obtain their physical interpretations.
+
+```julia
+# Access calculated values
+fval = JuMP.objective_value(model)
+xsol = JuMP.value.(x)
+status_term = JuMP.termination_status(model)
+status_prim = JuMP.primal_status(model)
+println("EAGO terminated with a status of $status_term and a result code of $status_prim.")
+println("The optimal value is: $(round(fval, digits=5)).")
+println("The solution found is $(round.(xsol, digits=3)).")
+println(" ")
+
+# Rescale values back to physical space
+rescaled_fval = ((fval + 1.0)/2.0)*0.07
+rescaled_xsol = ((xsol .+ 1.0)./2.0).*(xUBD - xLBD) .+ xLBD
+println("Rescaled optimal value and solution values:")
+println("The rescaled optimal value is: $(round(rescaled_fval, digits=4))")
+println("The rescaled solution is $(round.(rescaled_xsol, digits=3)).")
+```
+
+## References
+
+1. J. D. Smith, A. A. Neto, S. Cremaschi, and D. W. Crunkleton, CFD-based optimization of a flooded bed algae bioreactor, *Industrial & Engineering Chemistry Research*, 52 (2012), pp. 7181–7188.
+2. A. M. Schweidtmann and A. Mitsos. Global Deterministic Optimization with Artificial Neural Networks Embedded [https://arxiv.org/pdf/1801.07114.pdf](https://arxiv.org/pdf/1801.07114.pdf).
+3. Iain Dunning and Joey Huchette and Miles Lubin. JuMP: A Modeling Language for Mathematical Optimization, *SIAM Review*, 59 (2017), pp. 295-320.
diff --git a/docs/src/examples/interval_bb.md b/docs/src/examples/interval_bb.md
new file mode 100644
index 00000000..55b0eb44
--- /dev/null
+++ b/docs/src/examples/interval_bb.md
@@ -0,0 +1,140 @@
+# Standard-Use Example 2
+
+This example is also provided [here as a Jupyter Notebook](https://github.com/PSORLab/EAGO-notebooks/blob/master/notebooks/nlpopt_interval_bnb.ipynb).
+
+### Using EAGO's Basic Optimizer With User-Defined Subroutines
+
+In this section, we construct an optimizer that uses EAGO's basic NLP solution routine with user-defined lower and upper-bounding problems. The [`Optimizer`](@ref) structure supplies a number of parameters and stored structures that advanced users may find useful for constructing specialized solution routines.
+
+In this example, we'll forgo extensive integration into the [`Optimizer`](@ref) and simply replace the lower and upper-bounding problems to construct a B&B routine that solves the following problem to global optimality using bounds obtained from interval arithmetic:
+
+```math
+\begin{aligned}
+& \min_{\mathbf x \in X} \; \; \sin(x_{1}) x_{2}^{2} - \cos(x_{3}) / x_{4} \\
+& X = [-10, 10] \times [-1, 1] \times [-10, 10] \times [2, 20].
+\end{aligned}
+```
+
+We begin by importing EAGO, IntervalArithmetic [[1](#References)], and JuMP [[2](#References)].
+
+```julia
+using EAGO, IntervalArithmetic, JuMP
+```
+
+We now define the `IntervalExt` struct as a subtype of the [`ExtensionType`](@ref).
+
+```julia
+struct IntervalExt <: EAGO.ExtensionType end
+```
+
+## Define a Custom Lower-Bounding Problem
+
+A valid lower bound is obtained from the lower bound of the natural interval extension using the [IntervalArithmetic.jl](https://github.com/JuliaIntervals/IntervalArithmetic.jl) [[1](#References)] package. The `lower_problem!` function is dispatched using the new `IntervalExt` structure and the [`GlobalOptimizer`](@ref) structure, computes the bound using interval arithmetic, and stores the results to the appropriate field of the [`GlobalOptimizer`](@ref). Note that the problem is unconstrained on the domain, so we can assume it is always feasible. Further, since the interval bound is valid over the entire domain associated with a node, no additional cuts will be beneficial, and thus we've disabled them using the `_cut_add_flag` field.
+
+```julia
+import EAGO: lower_problem!
+function lower_problem!(t::IntervalExt, x::EAGO.GlobalOptimizer)
+    # Retrieve bounds at current node
+    n = x._current_node
+    lower = n.lower_variable_bounds
+    upper = n.upper_variable_bounds
+
+    # Define X for the node and compute the interval extension
+    x_value = Interval.(lower, upper)
+    F = sin(x_value[1])*x_value[2]^2 - cos(x_value[3])/x_value[4]
+    x._lower_objective_value = F.lo
+    x._lower_solution = IntervalArithmetic.mid.(x_value)
+    x._lower_feasibility = true
+    x._cut_add_flag = false
+
+    return
+end
+```
+
+## Define a Custom Upper-Bounding Problem
+
+Since the problem is unconstrained, any feasible point represents a valid upper bound. Thus, if we arbitrarily evaluate the function at the midpoint, we obtain a valid upper bound. This function constructs an upper bound in this manner and then stores the results to the appropriate field of the [`GlobalOptimizer`](@ref).
+
+```julia
+import EAGO.upper_problem!
+function EAGO.upper_problem!(t::IntervalExt, x::EAGO.GlobalOptimizer)
+    # Retrieve bounds at current node
+    n = x._current_node
+    lower = n.lower_variable_bounds
+    upper = n.upper_variable_bounds
+
+    # Compute midpoint value and evaluate at that point
+    x_value = 0.5*(upper + lower)
+    f_val = sin(x_value[1])*x_value[2]^2 - cos(x_value[3])/x_value[4]
+    x._upper_objective_value = f_val
+    x._upper_solution = x_value
+    x._upper_feasibility = true
+
+    return
+end
+```
+
+## Disable Unnecessary Routines
+
+It is entirely possible to disable domain reduction by manipulating keyword arguments supplied to the optimizer.
However, for simplicity's sake we'll simply overload the default preprocessing and postprocessing methods and indicate that there are no conditions under which EAGO should cut the node. + +```julia +import EAGO: preprocess!, postprocess!, cut_condition +function EAGO.preprocess!(t::IntervalExt, x::EAGO.GlobalOptimizer) + x._preprocess_feasibility = true + return +end +function EAGO.postprocess!(t::IntervalExt, x::EAGO.GlobalOptimizer) + x._postprocess_feasibility = true + return +end +EAGO.cut_condition(t::IntervalExt, x::EAGO.GlobalOptimizer) = false +``` + +## Construct the JuMP Model and Optimize + +We now add our optimizer to a JuMP [[2](#References)] model, provide variable bounds, and optimize. + +```julia +# Create a factory that specifies the interval extension in EAGO's SubSolver +factory = () -> EAGO.Optimizer(SubSolvers(; t = IntervalExt())) + +# Create a JuMP model using the factory, and with the absolute tolerance set by keyword argument +m = Model(optimizer_with_attributes(factory, + "absolute_tolerance" => 0.001 + )) + +# Add variables, bounds, and the objective function +x_L = [-10.0, -1.0, -10.0, 2.0] +x_U = [10.0, 1.0, 10.0, 20.0] +@variable(m, x_L[i] <= x[i=1:4] <= x_U[i]) +@NLobjective(m, Min, sin(x[1])*x[2]^2 - cos(x[3])/x[4]) + +# Perform the optimization +optimize!(m) +``` + +## Retrieve Results + +The objective value, solution, termination status, and primal status can then be accessed via the standard JuMP interface. + +```julia +fval = JuMP.objective_value(m) +xsol = JuMP.value.(x) +status_term = JuMP.termination_status(m) +status_prim = JuMP.primal_status(m) + +println("EAGO terminated with a status of $status_term and a result code of $status_prim") +println("The optimal value is: $(round(fval, digits=3)), the solution found is $(round.(xsol, digits=4)).") +``` + +## Advice for More Advanced Constructions + +The default [`lower_problem!`](@ref EAGO.lower_problem!) and [`upper_problem!`](@ref EAGO.upper_problem!) should be used as templates for error handling and retrieving information from MOI models. + +Essentially all of EAGO's subroutines stored to a field in the [`Optimizer`](@ref) structure can be reset as user-defined functions. + +## References + +1. IntervalArithmetic.jl \[Computer software\] (2019). Retrieved from https://github.com/JuliaIntervals/IntervalArithmetic.jl +2. Iain Dunning and Joey Huchette and Miles Lubin. JuMP: A Modeling Language for Mathematical Optimization, SIAM Review, SIAM 59 (2017), pp. 295-320. diff --git a/docs/src/examples/quasiconvex.md b/docs/src/examples/quasiconvex.md new file mode 100644 index 00000000..5cbf0390 --- /dev/null +++ b/docs/src/examples/quasiconvex.md @@ -0,0 +1,155 @@ +# Advanced-Use Example 1 + +This example is also provided [here as a Jupyter Notebook](https://github.com/PSORLab/EAGO-notebooks/blob/master/notebooks/custom_quasiconvex.ipynb). 
+
+### Customizing EAGO to Solve a Quasiconvex Problem
+
+In this example, we'll adapt EAGO to implement the bisection-based algorithm used to solve a quasiconvex optimization problem presented in [[1](#References)]:
+
+```math
+\begin{aligned}
+f^{*} = & \min_{\mathbf y \in Y} f(\mathbf y) \\
+{\rm s.t.} \; \; & \sum_{i = 1}^{5} i \cdot y_{i} - 5 = 0 \\
+& \sum_{i = 1}^{5} y_{i}^{2} - 0.5 \pi^{2} \leq 0 \\
+& -\bigg(\frac{1}{2} y_{1}^{2} + \frac{1}{2} y_{2}^{2} + y_{3}^{2} + 2 y_{1} y_{2} + 4 y_{1} y_{3} + 2 y_{2} y_{3} \bigg) \leq 0 \\
+& -y_{1}^{2} - 6 y_{1} y_{2} - 2 y_{2}^{2} + \cos (y_{1}) + \pi \leq 0 \\
+& Y = [0, 5]^{5}
+\end{aligned}
+```
+
+where
+
+```math
+\begin{aligned}
+f(\mathbf y) = -\frac{\ln ((5 + y_{1})^{2} + \sum_{i = 1}^{5} y_{i})}{1 + \sum_{i = 1}^{5} y_{i}^{2}}.
+\end{aligned}
+```
+
+Interval analysis shows that the objective value is bounded by the interval ``F`` such that ``f^{*} \in F = [f^{L}, f^{U}] = [-5, 0]``. Introducing an auxiliary variable ``t \in T = F`` allows the problem to be formulated as:
+
+```math
+\begin{aligned}
+t^{*} = & \min_{\mathbf y \in Y, t \in T} t \\
+{\rm s.t.} \; \; & (24) - (27) \\
+& f(\mathbf y) - t \leq 0 \\
+& Y = [0, 5]^{5}, \; \; T = [-5, 0].
+\end{aligned}
+```
+
+Let ``\phi_{\tau}(\mathbf y) = f(\mathbf y) - \tau`` such that ``\tau = (t^{L} + t^{U})/2``. We solve for ``\mathbf y`` subject to constraints ``(24) - (27)`` where ``\phi_{\tau} (\mathbf y) \leq 0``. If this is feasible, ``t^{*} \in [t^{L}, \tau]``; otherwise, ``t^{*} \in [\tau, t^{U}]``. The interval containing ``t^{*}`` is kept and the other is fathomed. This manner of bisection is repeated until an interval containing a feasible solution with a width of at most ``\epsilon`` is located [[2](#References)].
+
+## Customizing EAGO's Script
+
+First, the preprocessing step, upper problem, and postprocessing routines are short-circuited as only a single optimization problem needs to be solved at each iteration.
+
+```julia
+using MathOptInterface, EAGO, JuMP
+import EAGO: Optimizer, GlobalOptimizer
+
+struct QuasiConvex <: EAGO.ExtensionType end
+import EAGO: preprocess!, upper_problem!, postprocess!
+function EAGO.preprocess!(t::QuasiConvex, x::GlobalOptimizer)
+    x._preprocess_feasibility = true
+end
+function EAGO.upper_problem!(t::QuasiConvex, x::GlobalOptimizer)
+    x._upper_feasibility = true
+end
+function EAGO.postprocess!(t::QuasiConvex, x::GlobalOptimizer)
+    x._postprocess_feasibility = true
+end
+```
+
+Next, we specify that only an absolute tolerance should be checked for convergence and termination.
+
+```julia
+import EAGO: convergence_check, termination_check
+function EAGO.convergence_check(t::QuasiConvex, x::GlobalOptimizer)
+    gap = (x._upper_objective_value - x._lower_objective_value)
+    return (gap <= x._parameters.absolute_tolerance)
+end
+function EAGO.termination_check(t::QuasiConvex, x::GlobalOptimizer)
+    flag = EAGO.convergence_check(t, x)
+    if flag
+        x._end_state = EAGO.GS_OPTIMAL
+        x._termination_status_code = MathOptInterface.OPTIMAL
+        x._result_status_code = MathOptInterface.FEASIBLE_POINT
+    end
+    return flag
+end
+```
+
+We then indicate that only the sixth variable, representing ``t``, should be branched on. Since we will apply our knowledge about which ``t^{*}`` should be kept in the lower problem definition, we also short-circuit the [`EAGO.repeat_check`](@ref) function here to tell EAGO not to branch this node, but instead to repeatedly evaluate it.
+ +```julia +import EAGO: repeat_check +branch_variable = [i == 6 for i=1:6] +EAGO.repeat_check(t::QuasiConvex, x::GlobalOptimizer) = true +``` + +In the lower problem, we then specify that the problem is to be solved locally for a fixed ``t`` value. The objective value is then updated and the problem is contracted in order to discard the region which is known to not contain the optimal value. + +```julia +import EAGO: lower_problem! +function EAGO.lower_problem!(t::QuasiConvex, x::GlobalOptimizer) + y = x._current_node + indx = x._sol_to_branch_map[6] + lower = y.lower_variable_bounds[indx] + upper = y.upper_variable_bounds[indx] + midy = (lower + upper)/2.0 + y.lower_variable_bounds[indx] = midy + y.upper_variable_bounds[indx] = midy + EAGO.solve_local_nlp!(x) + feas = x._upper_feasibility + y.lower_variable_bounds[indx] = feas ? lower : midy + y.upper_variable_bounds[indx] = feas ? midy : upper + x._lower_objective_value = y.lower_variable_bounds[indx] + x._upper_objective_value = y.upper_variable_bounds[indx] + x._lower_feasibility = true + return +end +``` + +We now define the optimizer factory to extend the core EAGO optimizer for this special problem. The [`SubSolvers`](@ref) constructor is used to set the extension type (`t`), as well as the relaxed optimizer (`r`) and upper-bounding optimizer (`u`), if necessary. In this case, we will use the default solvers and only set the extension type. + +```julia +factory = () -> Optimizer(SubSolvers(; t = QuasiConvex())) +``` + +## Construct the JuMP Model and Optimize + +We now build the JuMP [[3](#References)] model representing this problem, solve it, and retrieve the solution. + +```julia +opt = optimizer_with_attributes(factory, + "absolute_tolerance" => 1E-8, + "branch_variable" => branch_variable, + "iteration_limit" => 1000) +m = Model(opt) +@variable(m, ((i<6) ? 0.0 : -5.0) <= y[i=1:6] <= ((i<6) ? 5.0 : 0.0)) +@constraint(m, sum(i*y[i] for i=1:5) - 5.0 == 0.0) +@constraint(m, sum(y[i]^2 for i=1:5) - 0.5*pi^2 <= 0.0) +@expression(m, expr1, 2.0*y[1]*y[2] + 4.0*y[1]*y[3] + 2.0*y[2]*y[3]) +@constraint(m, -(0.5*y[1]^2 + 0.5*y[2]^2 + y[3]^2 + expr1) <= 0.0) +@NLexpression(m, expr2, log((5.0 + y[1])^2 + sum(y[i] for i=1:5))) +@NLconstraint(m, -y[1]^2 - 6.0*y[1]*y[2] - 2.0*y[2]^2 + cos(y[1]) + pi <= 0.0) +@NLconstraint(m, -expr2/(1.0 + sum(y[i]^2 for i=1:5)) - y[6] <= 0.0) +@objective(m, Min, y[6]) + +JuMP.optimize!(m) +``` + +## Retrieve Results + +We then recover the solution values and the objective value using standard JuMP syntax. + +```julia +solution = JuMP.value.(y[1:5]) +global_obj_value = JuMP.value.(y[6]) +print("Global solution at y*=$solution with a value of f*=$global_obj_value") +``` + +## References + +1. C. Jansson, Quasiconvex relaxations based on interval arithmetic, Linear Algebra and its Applications, 324 (2001), pp. 27–53. +2. S. Boyd and L. Vandenberghe, Convex optimization, Cambridge University Press, 2004. +3. Iain Dunning and Joey Huchette and Miles Lubin. JuMP: A Modeling Language for Mathematical Optimization, *SIAM Review*, 59 (2017), pp. 295-320. diff --git a/docs/src/index.md b/docs/src/index.md index 6c081b52..9c67386c 100644 --- a/docs/src/index.md +++ b/docs/src/index.md @@ -11,9 +11,9 @@ A development environment for robust and global optimization in Julia. 
- Current Position: Alexion Pharmaceuticals - [Robert Gottlieb](https://psor.uconn.edu/person/robert-gottlieb/), Department of Chemical and Biomolecular Engineering, University of Connecticut (UConn) - [Dimitri Alston](https://psor.uconn.edu/person/dimitri-alston/), Department of Chemical and Biomolecular Engineering, University of Connecticut (UConn) -- [Matthew Stuber](https://chemical-biomolecular.engr.uconn.edu/person/matthew-stuber/), Associate Professor, University of Connecticut (UConn) +- [Matthew Stuber](https://chemical-biomolecular.engr.uconn.edu/people/faculty/stuber-matthew/), Associate Professor, University of Connecticut (UConn) -If you would like to contribute, [contact us](https://psorlab.github.io/EAGO.jl/stable/dev/contributing/). +If you would like to contribute, [contact us](@ref "How to Contribute to EAGO"). ## Overview @@ -21,21 +21,19 @@ EAGO is a global and robust optimization platform based on McCormick relaxations ## Installing EAGO -EAGO is a registered Julia package and it can be installed using the Julia package manager. -From the Julia REPL, type `]` to enter the Package manager (Pkg) mode and run the following command: +EAGO is a registered Julia package and it can be installed using the Julia package manager. From the Julia REPL, type `]` to enter the Package manager (Pkg) mode and run the following command: ```jldoctest pkg> add EAGO ``` -Currently, EAGO is compatible with version 1.12 of JuMP. This allows a replication of some of the internal features shared by EAGO and JuMP's automatic differentiation scheme, e.g., generation of Wengert Tapes, passing evaluators between JuMP and EAGO, etc. +Currently, EAGO is compatible with version 1.12 of [JuMP](https://github.com/jump-dev/JuMP.jl). This allows a replication of some of the internal features shared by EAGO and JuMP's automatic differentiation scheme, e.g., generation of Wengert Tapes, passing evaluators between JuMP and EAGO, etc. ```jldoctest pkg> add JuMP ``` -EAGO v0.8.1 is the current tagged version and requires Julia 1.6+ for full functionality (however Julia 1.0+ versions support partial functionality). Use with version 1.8 is recommended as the majority of in-house testing has occurred using this version of Julia. The user is directed to the [High-Performance Configuration](https://psorlab.github.io/EAGO.jl/optimizer/high_performance/) for instructions on how to install a high performance version of EAGO (rather than the basic entirely open-source version). -If any issues are encountered when loading EAGO (or when using it), please submit an issue using the GitHub [issue tracker](https://github.com/PSORLab/EAGO.jl/issues). +EAGO v0.8.1 is the current tagged version and requires Julia 1.6+ for full functionality (however Julia 1.0+ versions support partial functionality). Use with version 1.8 is recommended as the majority of in-house testing has occurred using this version of Julia. The user is directed to the [High-Performance Configuration](https://psorlab.github.io/EAGO.jl/optimizer/high_performance/) for instructions on how to install a high performance version of EAGO (rather than the basic entirely open-source version). If any issues are encountered when loading EAGO (or when using it), please submit an issue using the GitHub [issue tracker](https://github.com/PSORLab/EAGO.jl/issues). 
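+
+Once both packages are installed, a quick sanity check can confirm that EAGO and JuMP load and run together. This is a minimal sketch; the model below is arbitrary and is only meant to verify the installation:
+
+```julia
+using JuMP, EAGO
+
+# Minimize x^2 on [-1, 1]; the solution is x = 0
+m = Model(EAGO.Optimizer)
+@variable(m, -1.0 <= x <= 1.0)
+@objective(m, Min, x^2)
+optimize!(m)
+```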
## Examples
diff --git a/docs/src/jump/README.md b/docs/src/jump/README.md
new file mode 100644
index 00000000..e7ffbde5
--- /dev/null
+++ b/docs/src/jump/README.md
@@ -0,0 +1,141 @@
+
+
+# EAGO - Easy Advanced Global Optimization
+
+EAGO is an open-source development environment for **robust and global optimization** in Julia. See the full [README](https://github.com/PSORLab/EAGO.jl/blob/master/README.md) for more information.
+
+| **PSOR Lab** | **Current Version** | **Build Status** | **Documentation** |
+|:------------:|:-------------------:|:----------------:|:-----------------:|
+| [![](https://img.shields.io/badge/Developed_by-PSOR_Lab-342674)](https://psor.uconn.edu/) | [![](https://docs.juliahub.com/EAGO/version.svg)](https://juliahub.com/ui/Packages/General/EAGO) | [![Build Status](https://github.com/PSORLab/EAGO.jl/workflows/CI/badge.svg?branch=master)](https://github.com/PSORLab/EAGO.jl/actions?query=workflow%3ACI) [![codecov](https://codecov.io/gh/PSORLab/EAGO.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/PSORLab/EAGO.jl)| [![](https://img.shields.io/badge/docs-latest-blue.svg)](https://PSORLab.github.io/EAGO.jl/dev) |
+
+EAGO is a deterministic global optimizer designed to address a wide variety of optimization problems, emphasizing nonlinear programs (NLPs), by propagating McCormick relaxations along the factorable structure of each expression in the NLP. Most operators supported by modern automatic differentiation (AD) packages are supported by EAGO, and a number of utilities for sanitizing native Julia code and generating relaxations on a wide variety of user-defined functions have been included. Currently, EAGO supports problems that have a priori variable bounds defined and have differentiable constraints. That is, problems should be specified in the generic form below:
+
+$$
+\begin{align*}
+f^{\*} = & \min_{\mathbf y \in Y \subset \mathbb R^{n_{y}}} f(\mathbf y) \\
+{\rm s.t.} \\;\\; & \mathbf h(\mathbf y) = \mathbf 0 \\
+& \mathbf g(\mathbf y) \leq \mathbf 0 \\
+& Y = [\mathbf y^{\mathbf L}, \mathbf y^{\mathbf U}] \in \mathbb{IR}^{n_{y}} \\
+& \qquad \mathbf y^{\mathbf L}, \mathbf y^{\mathbf U} \in \mathbb R^{n_{y}}
+\end{align*}
+$$
+
+For each nonlinear term, EAGO makes use of factorable representations to construct bounds and relaxations. In the case of $f(x) = x (x - 5) \sin(x)$, a list is generated and rules for constructing McCormick relaxations are used to formulate relaxations in the original decision space, $X$ [[1](#references)]:
+
+- $v_{1} = x$
+- $v_{2} = v_{1} - 5$
+- $v_{3} = \sin(v_{1})$
+- $v_{4} = v_{1} v_{2}$
+- $v_{5} = v_{4} v_{3}$
+- $f(x) = v_{5}$
+
+
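+As a rough illustration, this factorization can be traced step by step with McCormick arithmetic using [McCormick.jl](https://github.com/PSORLab/McCormick.jl). This is a minimal sketch; the point $x = 2$ and the interval $[1, 4]$ are arbitrary choices for illustration:
+
+```julia
+using McCormick
+
+# Relax at the point x = 2.0 on the interval [1.0, 4.0]
+x  = MC{1,NS}(2.0, Interval(1.0, 4.0), 1)
+v1 = x
+v2 = v1 - 5.0   # v2 = v1 - 5
+v3 = sin(v1)    # v3 = sin(v1)
+v4 = v1*v2      # v4 = v1*v2
+v5 = v4*v3      # v5.cv and v5.cc bound f(x) from below and above
+```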

+
+
+Either these original relaxations, differentiable McCormick relaxations [[2](#references)], or affine relaxations thereof can be used to construct relaxations of optimization problems useful in branch and bound routines for global optimization. Utilities are included to combine these with algorithms for relaxing implicit functions [[3](#references)] and forward-reverse propagation of McCormick arithmetic [[4](#references)].
+
+## License
+
+EAGO is licensed under the [MIT License](https://github.com/PSORLab/EAGO.jl/blob/master/LICENSE.md).
+
+## Installation
+
+EAGO is a registered Julia package and it can be installed using the Julia package manager:
+
+```julia
+import Pkg
+Pkg.add("EAGO")
+```
+
+## Use with JuMP
+
+EAGO makes use of JuMP to improve the user's experience in setting up optimization models. Consider the "process" problem instance from [[5](#references)]:
+
+$$
+\begin{align*}
+& \max_{\mathbf x \in X} 0.063 x_{4} x_{7} - 5.04 x_{1} - 0.035 x_{2} - 10 x_{3} - 3.36 x_{5} \\
+{\rm s.t.} \\;\\; & -x_{1} (1.12 + 0.13167 x_{8} - 0.00667 x_{8}^{2}) + x_{4} = 0 \\
+& -0.001 x_{4} x_{9} x_{6} / (98 - x_{6}) + x_{3} = 0 \\
+& -(1.098 x_{8} - 0.038 x_{8}^{2}) - 0.325 x_{6} + x_{7} = 57.425 \\
+& -(x_{2} + x_{5}) / x_{1} + x_{8} = 0 \\
+& -x_{1} + 1.22 x_{4} - x_{5} = 0 \\
+& x_{9} + 0.222 x_{10} - 35.82 = 0 \\
+& -3.0 x_{7} + x_{10} + 133.0 = 0 \\
+& X = [10, 2000] \times [0, 16000] \times [0, 120] \times [0, 5000] \\
+& \qquad \times [0, 2000] \times [85, 93] \times [90, 95] \times [3, 12] \times [1.2, 4] \times [145, 162]
+\end{align*}
+$$
+
+This model can be formulated in Julia as:
+
+```julia
+using JuMP, EAGO
+
+# Build model using EAGO's optimizer
+m = Model(EAGO.Optimizer)
+
+# Define bounded variables
+xL = [10.0; 0.0; 0.0; 0.0; 0.0; 85.0; 90.0; 3.0; 1.2; 145.0]
+xU = [2000.0; 16000.0; 120.0; 5000.0; 2000.0; 93.0; 95.0; 12.0; 4.0; 162.0]
+@variable(m, xL[i] <= x[i=1:10] <= xU[i])
+
+# Define nonlinear constraints
+@NLconstraint(m, e1, -x[1]*(1.12 + 0.13167*x[8] - 0.00667*(x[8])^2) + x[4] == 0.0)
+@NLconstraint(m, e3, -0.001*x[4]*x[9]*x[6]/(98.0 - x[6]) + x[3] == 0.0)
+@NLconstraint(m, e4, -(1.098*x[8] - 0.038*(x[8])^2) - 0.325*x[6] + x[7] == 57.425)
+@NLconstraint(m, e5, -(x[2] + x[5])/x[1] + x[8] == 0.0)
+
+# Define linear constraints
+@constraint(m, e2, -x[1] + 1.22*x[4] - x[5] == 0.0)
+@constraint(m, e6, x[9] + 0.222*x[10] == 35.82)
+@constraint(m, e7, -3.0*x[7] + x[10] == -133.0)
+
+# Define nonlinear objective
+@NLobjective(m, Max, 0.063*x[4]*x[7] - 5.04*x[1] - 0.035*x[2] - 10*x[3] - 3.36*x[5])
+
+# Solve the optimization problem
+JuMP.optimize!(m)
+```
+
+## Documentation
+
+EAGO has numerous features: a solver accessible from JuMP/MathOptInterface (MOI), domain reduction routines, McCormick relaxations, and specialized nonconvex semi-infinite program solvers. A full description of all features can be found on the [documentation website](https://psorlab.github.io/EAGO.jl/dev/). A series of examples has been provided in the documentation and in the form of Jupyter Notebooks in the separate [EAGO-notebooks](https://github.com/PSORLab/EAGO-notebooks) repository.
+
+## A Cautionary Note on Global Optimization
+
+As a global optimization platform, EAGO's solvers can be used to find solutions of general nonconvex problems with a guaranteed certificate of optimality. However, global solvers suffer from the curse of dimensionality and therefore their performance is outstripped by convex/local solvers.
For users interested in large-scale applications, be warned that problems with more than a few variables may prove challenging for certain classes of global optimization problems.
+
+## Citing EAGO
+
+Please cite the following paper when using EAGO. In plain text form this is:
+
+```
+Wilhelm, M.E. and Stuber, M.D. EAGO.jl: easy advanced global optimization in Julia.
+Optimization Methods and Software. 37(2): 425-450 (2022). DOI: 10.1080/10556788.2020.1786566
+```
+
+As a BibTeX entry:
+
+```bibtex
+@article{doi:10.1080/10556788.2020.1786566,
+author = {Wilhelm, M.E. and Stuber, M.D.},
+title = {EAGO.jl: easy advanced global optimization in Julia},
+journal = {Optimization Methods and Software},
+volume = {37},
+number = {2},
+pages = {425-450},
+year = {2022},
+publisher = {Taylor & Francis},
+doi = {10.1080/10556788.2020.1786566},
+URL = {https://doi.org/10.1080/10556788.2020.1786566},
+eprint = {https://doi.org/10.1080/10556788.2020.1786566}
+}
+```
+
+## References
+
+1. Mitsos, A., Chachuat, B., and Barton, P.I. **McCormick-based relaxations of algorithms.** *SIAM Journal on Optimization*. 20(2): 573–601 (2009).
+2. Khan, K.A., Watson, H.A.J., and Barton, P.I. **Differentiable McCormick relaxations.** *Journal of Global Optimization*. 67(4): 687-729 (2017).
+3. Stuber, M.D., Scott, J.K., and Barton, P.I. **Convex and concave relaxations of implicit functions.** *Optimization Methods and Software* 30(3): 424–460 (2015).
+4. Wechsung, A., Scott, J.K., Watson, H.A.J., and Barton, P.I. **Reverse propagation of McCormick relaxations.** *Journal of Global Optimization* 63(1): 1-36 (2015).
+5. Bracken, J., and McCormick, G.P. *Selected Applications of Nonlinear Programming.* John Wiley and Sons, New York (1968).
\ No newline at end of file
diff --git a/docs/src/mccormick/operators.md b/docs/src/mccormick/operators.md
index cde3972b..c277c973 100644
--- a/docs/src/mccormick/operators.md
+++ b/docs/src/mccormick/operators.md
@@ -1,10 +1,6 @@
# Currently Supported Operators
-The operators currently supported are listed below. The operators with a check box
-have been subject to a large degree of scrutiny and have been implemented for
-both forward and reverse McCormick relaxations ([Wechsung2015](https://link.springer.com/article/10.1007/s10898-015-0303-6)). Each McCormick object is associated with a
-parameter `T <: RelaxTag` which is either `NS` for nonsmooth relaxations ([Mitsos2009](https://epubs.siam.org/doi/abs/10.1137/080717341), [Scott2011](https://link.springer.com/article/10.1007/s10898-011-9664-7)), `MV` for multivariate relaxations ([Tsoukalas2014](https://link.springer.com/article/10.1007/s10898-014-0176-0), [Najman2017](https://link.springer.com/article/10.1007/s10898-016-0470-0)),
-or `Diff` for differentiable relaxations ([Khan2016](https://link.springer.com/article/10.1007/s10898-016-0440-6), [Khan2018](https://link.springer.com/article/10.1007/s10898-017-0601-2), [Khan2019](https://www.tandfonline.com/doi/abs/10.1080/02331934.2018.1534108)). Conversion between `NS`, `MV`, and `Diff` relax tags is not currently supported. Convex and concave envelopes are used to compute relaxations of univariate functions.
+The operators currently supported are listed below. The operators with a check box have been subject to a large degree of scrutiny and have been implemented for both forward and reverse McCormick relaxations ([Wechsung2015](https://link.springer.com/article/10.1007/s10898-015-0303-6)).
Each McCormick object is associated with a parameter `T <: RelaxTag` which is either `NS` for nonsmooth relaxations ([Mitsos2009](https://epubs.siam.org/doi/abs/10.1137/080717341), [Scott2011](https://link.springer.com/article/10.1007/s10898-011-9664-7)), `MV` for multivariate relaxations ([Tsoukalas2014](https://link.springer.com/article/10.1007/s10898-014-0176-0), [Najman2017](https://link.springer.com/article/10.1007/s10898-016-0470-0)), or `Diff` for differentiable relaxations ([Khan2016](https://link.springer.com/article/10.1007/s10898-016-0440-6), [Khan2018](https://link.springer.com/article/10.1007/s10898-017-0601-2), [Khan2019](https://www.tandfonline.com/doi/abs/10.1080/02331934.2018.1534108)). Conversion between `NS`, `MV`, and `Diff` relax tags is not currently supported. Convex and concave envelopes are used to compute relaxations of univariate functions. ## Univariate McCormick Operators @@ -30,7 +26,7 @@ Both nonsmooth and Whitney-1 (once differentiable) relaxations are supported for ## Bivariate McCormick Operators -The following bivariate operators are supported for two [`MC`](@ref) objects. Both nonsmooth and Whitney-1 (once differentiable) relaxations are supported. +The following bivariate operators are supported for two [`MC`](@ref McCormick.MC) objects. Both nonsmooth and Whitney-1 (once differentiable) relaxations are supported. - **Multiplication** (`*`) - **Division** (`/`) @@ -44,9 +40,8 @@ Arbitrarily differentiable relaxations can be constructed for the following oper ## Common Subexpressions -The following functions can be used in place of common subexpressions encountered -in optimization and will result in improved performance (in each case, the standard -McCormick composition rules are often more expansive). +The following functions can be used in place of common subexpressions encountered in optimization and will result in improved performance (in each case, the standard McCormick composition rules are often more expansive). + ```@docs xexpax arh @@ -56,12 +51,8 @@ mm ## Bound Setting Functions -The following functions are used to specify that known bounds on a subexpression -exist and that the relaxation/interval bounds propagated should make use of this -information. The utility functions can be helpful in avoiding domain violations -that arise due to the overly expansive nature of composite relaxations. Improper use -of these functions may lead to cases in which the resulting relaxations are empty, -so the user is encouraged to use discretion. +The following functions are used to specify that known bounds on a subexpression exist and that the relaxation/interval bounds propagated should make use of this information. The utility functions can be helpful in avoiding domain violations that arise due to the overly expansive nature of composite relaxations. Improper use of these functions may lead to cases in which the resulting relaxations are empty, so the user is encouraged to use discretion. + ```@docs positive negative diff --git a/docs/src/mccormick/overview.md b/docs/src/mccormick/overview.md index 37a224f1..3f633818 100644 --- a/docs/src/mccormick/overview.md +++ b/docs/src/mccormick/overview.md @@ -1,9 +1,8 @@ # Overview -EAGO provides a library of McCormick relaxations in native Julia code. 
The EAGO optimizer supports
-relaxing functions using **nonsmooth McCormick relaxations** ([Mitsos2009](https://epubs.siam.org/doi/abs/10.1137/080717341), [Scott2011](https://link.springer.com/article/10.1007/s10898-011-9664-7)), **smooth McCormick relaxations** ([Khan2016](https://link.springer.com/article/10.1007/s10898-016-0440-6), [Khan2018](https://link.springer.com/article/10.1007/s10898-017-0601-2), [Khan2019](https://www.tandfonline.com/doi/abs/10.1080/02331934.2018.1534108)), and **multi-variant McCormick relaxations** ([Tsoukalas2014](https://link.springer.com/article/10.1007/s10898-014-0176-0); a variant of **subgradient-based interval refinement** ([Najman2017](https://link.springer.com/article/10.1007/s10898-016-0470-0))). For functions with arbitrarily differentiable relaxations, the differentiable constant μ can be modified by adjusting a constant value in the package. Additionally, validated and nonvalidated interval bounds are supported via [IntervalArithmetic.jl](https://github.com/JuliaIntervals/IntervalArithmetic.jl) which is reexported. The basic McCormick operator and reverse McCormick operator ([Wechsung2015](https://link.springer.com/article/10.1007/s10898-015-0303-6)) libraries are included in two dependent subpackages which can loaded and used independently:
-- [McCormick.jl](https://github.com/PSORLab/McCormick.jl): A library of forward-mode and implicit McCormick operators.
-- [ReverseMcCormick.jl](https://github.com/PSORLab/ReverseMcCormick.jl): A reverse-mode McCormick operator library.
+EAGO provides a library of McCormick relaxations in native Julia code. The EAGO optimizer supports relaxing functions using **nonsmooth McCormick relaxations** ([Mitsos2009](https://epubs.siam.org/doi/abs/10.1137/080717341), [Scott2011](https://link.springer.com/article/10.1007/s10898-011-9664-7)), **smooth McCormick relaxations** ([Khan2016](https://link.springer.com/article/10.1007/s10898-016-0440-6), [Khan2018](https://link.springer.com/article/10.1007/s10898-017-0601-2), [Khan2019](https://www.tandfonline.com/doi/abs/10.1080/02331934.2018.1534108)), and **multivariate McCormick relaxations** ([Tsoukalas2014](https://link.springer.com/article/10.1007/s10898-014-0176-0); a variant of **subgradient-based interval refinement** ([Najman2017](https://link.springer.com/article/10.1007/s10898-016-0470-0))). For functions with arbitrarily differentiable relaxations, the differentiable constant μ can be modified by adjusting a constant value in the package. Additionally, validated and nonvalidated interval bounds are supported via [IntervalArithmetic.jl](https://github.com/JuliaIntervals/IntervalArithmetic.jl) which is reexported.
The basic McCormick operator and reverse McCormick operator ([Wechsung2015](https://link.springer.com/article/10.1007/s10898-015-0303-6)) libraries are included in two dependent subpackages which can loaded and used independently: +- [McCormick.jl](https://github.com/PSORLab/McCormick.jl): A forward McCormick operator library +- [ReverseMcCormick.jl](https://github.com/PSORLab/ReverseMcCormick.jl): A reverse McCormick operator library ## NaN Numerics diff --git a/docs/src/mccormick/type.md b/docs/src/mccormick/type.md index 9d356da1..32c2de35 100644 --- a/docs/src/mccormick/type.md +++ b/docs/src/mccormick/type.md @@ -1,7 +1,7 @@ # Types ```@docs -MC +McCormick.MC McCormick.RelaxTag ``` @@ -28,10 +28,7 @@ McCormick.golden_section ## (Under Development) MCNoGrad -A handful of applications make use of McCormick relaxations directly without the need for subgradients. We -are currently adding support for a McCormick `struct` which omits subgradient propagation in favor of return -a MCNoGrad object and associated derivative information. This is currently under development and likely -lacking key functionality. +A handful of applications make use of McCormick relaxations directly without the need for subgradients. We are currently adding support for a McCormick `struct` which omits subgradient propagation in favor of return a MCNoGrad object and associated derivative information. This is currently under development and likely lacking key functionality. ```@docs McCormick.MCNoGrad diff --git a/docs/src/mccormick/usage.md b/docs/src/mccormick/usage.md index 79c70104..c1e62aec 100644 --- a/docs/src/mccormick/usage.md +++ b/docs/src/mccormick/usage.md @@ -1,29 +1,26 @@ # Basic Usage -## Bounding a Function via McCormick Operators +## Bounding a Univariate Function -In order to bound a function using a McCormick relaxation, you first construct a -McCormick object (`x::MC`) that bounds the input variables, and then you pass these -variables to the desired function. +In order to bound a function using a McCormick relaxation, you first construct a McCormick object (`x::MC`) that bounds the input variables, and then you pass these variables to the desired function. -In the example below, convex/concave relaxations of the function `f(x) = x(x-5)sin(x)` -are calculated at `x = 2` on the interval `[1,4]`. +In the example below, convex/concave relaxations of the function ``f(x) = x (x - 5) \sin(x)`` are calculated at ``x = 2`` on the interval ``[1, 4]``. ```julia using McCormick -# Create MC object for x = 2.0 on [1.0,4.0] for relaxing +# Create MC object for x = 2.0 on [1.0, 4.0] for relaxing # a function f(x) on the interval Intv -f(x) = x*(x-5.0)*sin(x) +f(x) = x*(x - 5.0)*sin(x) x = 2.0 # Value of independent variable x -Intv = Interval(1.0,4.0) # Define interval to relax over +Intv = Interval(1.0, 4.0) # Define interval to relax over # Note that McCormick.jl reexports IntervalArithmetic.jl # and StaticArrays. So no using statement for these is # necessary. # Create McCormick object -xMC = MC{1,NS}(x,Intv,1) +xMC = MC{1,NS}(x, Intv, 1) fMC = f(xMC) # Relax the function @@ -34,40 +31,44 @@ ccgrad = fMC.cc_grad # Subgradient/gradient of concave relaxation Iv = fMC.Intv # Retrieve interval bounds of f(x) on Intv ``` -By plotting the results we can easily visualize the convex and concave -relaxations, interval bounds, and affine bounds constructed using the subgradient -at the middle of X. 
+By plotting the results we can easily visualize the convex and concave relaxations, interval bounds, and affine bounds constructed using the subgradient at the middle of ``X``. ![Figure_1](Figure_1.png) -If we instead use the constructor `xMC = MC{1,Diff}(x,Intv,1)` in the above code and re-plot, -we arrive at the following graph. Note that these relaxations are differentiable, but not as -tight as the nonsmooth relaxations. +If we instead use the constructor `xMC = MC{1,Diff}(x, Intv, 1)` in the above code and re-plot, we arrive at the following graph. Note that these relaxations are differentiable, but not as tight as the nonsmooth relaxations. ![Figure_2](Figure_2.png) -This can readily be extended to multivariate functions, for example, `f(x,y) = (4 - 2.1x^2 + (x^4)/6)x^2 + xy + (-4 + 4y^2)y^2`: +## Bounding a Multivariate Function + +This can readily be extended to multivariate functions, for example: + +```math +\begin{aligned} +f(x,y) = \big(4 - 2.1 x^{2} + \frac{x^{4}}{6} \big) x^{2} + x y + (-4 + 4 y^{2}) y^{2} +\end{aligned} +``` ```julia using McCormick # Define function -f(x,y) = (4.0 - 2.1*x^2 + (x^4)/6.0)*x^2 + x*y + (-4.0 + 4.0*y^2)*y^2 +f(x, y) = (4.0 - 2.1*x^2 + (x^4)/6.0)*x^2 + x*y + (-4.0 + 4.0*y^2)*y^2 # Define intervals for independent variables n = 30 X = Interval{Float64}(-2,0) -Y = Interval{Float64}(-0.5,0.5) -xrange = range(X.lo,stop=X.hi,length=n) -yrange = range(Y.lo,stop=Y.hi,length=n) +Y = Interval{Float64}(-0.5, 0.5) +xrange = range(X.lo, stop=X.hi, length=n) +yrange = range(Y.lo, stop=Y.hi, length=n) # Calculate differentiable McCormick relaxation for (i,x) in enumerate(xrange) for (j,y) in enumerate(yrange) - z = f(x,y) # Calculate function values - xMC = MC{1,Diff}(x,X,1) # Differentiable relaxation for x - yMC = MC{1,Diff}(y,Y,2) # Differentiable relaxation for y - fMC = f(xMC,yMC) # Relax the function + z = f(x, y) # Calculate function values + xMC = MC{1,Diff}(x, X, 1) # Differentiable relaxation for x + yMC = MC{1,Diff}(y, Y, 2) # Differentiable relaxation for y + fMC = f(xMC, yMC) # Relax the function cv = fMC.cv # Convex relaxation cc = fMC.cc # Concave relaxation end diff --git a/docs/src/news.md b/docs/src/news.md index ee7ed449..a26fcfd0 100644 --- a/docs/src/news.md +++ b/docs/src/news.md @@ -44,7 +44,7 @@ ## [v0.7.0](https://github.com/PSORLab/EAGO.jl/releases/tag/v0.7.0) (March 28, 2022) - Added envelopes of activation functions: `xabsx`, `logcosh` -- Added `estimator_extrema`, `estimator_under`, and `estimator_over` functions for McCormick relaxations. +- Added variations of `estimator_extrema`, `estimator_under`, and `estimator_over` functions to EAGO for bilinear relaxations. - Moved various functions and related structures to new files. - Added `RelaxCache` structure to hold relaxed problem information. - Updated forward and reverse propagation. diff --git a/docs/src/optimizer/bnb_back.md b/docs/src/optimizer/bnb_back.md index 7dc9fb71..5fb91556 100644 --- a/docs/src/optimizer/bnb_back.md +++ b/docs/src/optimizer/bnb_back.md @@ -1,7 +1,6 @@ # EAGO's Branch and Bound Routine -This component is meant to provide a flexible framework for implementing spatial branch-and-bound based optimization routines in Julia. -All components of the branch-and-bound routine can be customized by the individual user: lower bounding problem, upper bounding problem. +This component is meant to provide a flexible framework for implementing spatial branch-and-bound based optimization routines in Julia. 
All components of the branch-and-bound routine can be customized by the individual user, including the lower-bounding and upper-bounding problems.
## Branch and Bound Node Storage
@@ -9,7 +8,7 @@ All components of the branch-and-bound routine can be customized by the individu
EAGO.NodeBB
```
-The global optimizer structure holds all information relevant to branch-and-bound.
+The `GlobalOptimizer` structure holds all information relevant to branch-and-bound.
```@docs
EAGO.GlobalOptimizer
@@ -51,7 +50,7 @@ The global optimizer structure holds all information relevant to branch-and-boun
EAGO.termination_check(t::ExtensionType, m::GlobalOptimizer)
EAGO.upper_problem!(t::ExtensionType, m::GlobalOptimizer)
EAGO.parse_global!(t::ExtensionType, m::GlobalOptimizer)
- EAGO.optimize_hook!(t::ExtensionType, m::GlobalOptimizer)
+ EAGO.optimize_hook!(t::ExtensionType, m::Optimizer)
```
## Internal Subroutines
@@ -59,7 +58,7 @@ The global optimizer structure holds all information relevant to branch-and-boun
```@docs
EAGO.is_integer_subproblem(m)
EAGO.is_integer_feasible_local(m::GlobalOptimizer, d)
-│ EAGO.is_integer_feasible_relaxed(m::GlobalOptimizer)
+ EAGO.is_integer_feasible_relaxed(m::GlobalOptimizer)
EAGO.interval_bound
EAGO.lower_interval_bound
EAGO.same_box(x::NodeBB,y::NodeBB, atol::Float64)
@@ -71,7 +70,7 @@ The global optimizer structure holds all information relevant to branch-and-boun
EAGO.label_branch_variables!(m::GlobalOptimizer)
EAGO.add_nonlinear!(m::GlobalOptimizer)
EAGO.parse_classify_problem!(m::GlobalOptimizer)
- EAGO.local_problem_status!(t::MathOptInterface.TerminationStatusCode, r::MathOptInterface.ResultStatusCode)
+ EAGO.local_problem_status(t::MathOptInterface.TerminationStatusCode, r::MathOptInterface.ResultStatusCode)
```
## Functions for Generating Console Output
diff --git a/docs/src/optimizer/domain_reduction.md b/docs/src/optimizer/domain_reduction.md
index fb8fc173..8e113187 100644
--- a/docs/src/optimizer/domain_reduction.md
+++ b/docs/src/optimizer/domain_reduction.md
@@ -10,8 +10,7 @@ variable_dbbt!
```
## Special Forms
-Bound tightening for linear forms, univariate quadratic forms, and
-bivariate quadratic forms are also supported.
+Bound tightening for linear forms, univariate quadratic forms, and bivariate quadratic forms is also supported.
```@docs
EAGO.fbbt!
```
## Constraint Propagation
-EAGO contains a constraint propagation architecture that supported forward and
-reverse evaluation of set-valued functions on the directed acyclic graph.
-The interval contractor and reverse McCormick relaxation-based contractors are
-currently available.
+EAGO contains a constraint propagation architecture that supports forward and reverse evaluation of set-valued functions on the directed acyclic graph. The interval contractor and reverse McCormick relaxation-based contractors are currently available.
```@docs
EAGO.set_constraint_propagation_fbbt!
```
## Optimization-Based Bound Tightening
-EAGO makes use of an optimization-based bound tightening scheme using filtering
-and greedy ordering as detailed in: Gleixner, A.M., Berthold, T., Müller, B.
-et al. J Glob Optim (2017) 67: 731. [https://doi.org/10.1007/s10898-016-0450-](https://doi.org/10.1007/s10898-016-0450-4).
+EAGO makes use of an optimization-based bound tightening scheme using filtering and greedy ordering as detailed in [[1](#References)].
```@docs
EAGO.obbt!
@@ -40,3 +34,7 @@ EAGO.trivial_filtering!(m::GlobalOptimizer{R,S,Q}, n::NodeBB) where {R,S,Q<:Exte
EAGO.aggressive_filtering!(m::GlobalOptimizer{R,S,Q}, n::NodeBB) where {R,S,Q<:ExtensionType}
EAGO.bool_indx_diff!(z::Vector{Bool},x::Vector{Bool}, y::Vector{Bool})
```
+
+## References
+
+1. Gleixner, A.M., Berthold, T., Müller, B. et al. J Glob Optim (2017) 67: 731. [https://doi.org/10.1007/s10898-016-0450-4](https://doi.org/10.1007/s10898-016-0450-4)
diff --git a/docs/src/optimizer/high_performance.md b/docs/src/optimizer/high_performance.md
index bf534e27..2c916b1d 100644
--- a/docs/src/optimizer/high_performance.md
+++ b/docs/src/optimizer/high_performance.md
@@ -1,31 +1,20 @@
# High-Performance Configuration
-## LP Solver Selection
+## Linear Programming Solver Selection
-By default, EAGO uses GLPK for solving linear subproblems introduced. Using a
-commercial linear solver is highly recommended such as Gurobi, CPLEX, or XPRESS
-is highly recommended. Both Gurobi and CPLEX are free for academics and
-installation information can be found on the [Gurobi website](http://www.gurobi.com/academia/academia-center) and the [IBM website](https://www.ibm.com/developerworks/community/blogs/jfp/entry/CPLEX_Is_Free_For_Students?lang=en), respectively.
+By default, EAGO uses GLPK for solving the linear subproblems that arise during the solution routine. Using a commercial linear programming (LP) solver, such as Gurobi, CPLEX, or XPRESS, is highly recommended. Both Gurobi and CPLEX are free for academics and installation information can be found on the [Gurobi website](http://www.gurobi.com/academia/academia-center) and the [IBM website](https://www.ibm.com/developerworks/community/blogs/jfp/entry/CPLEX_Is_Free_For_Students?lang=en), respectively.
A non-default LP solver can then be selected by the user via a series of keyword argument inputs as illustrated in the code snippet below. The `relaxed_optimizer` contains an optimizer instance with valid relaxations that are made at the root node and is updated with affine relaxations in place.
```julia
# Create an EAGO Optimizer with Gurobi as a lower subsolver
-subsolver_config = SubSolvers(relaxed_optimizer = Gurobi.Optimizer(OutputFlag=0))
-eago_factory = () -> EAGO.Optimizer(subsolvers = subsolver_config)
+eago_factory = () -> EAGO.Optimizer(SubSolvers(; r = Gurobi.Optimizer()))
m = Model(eago_factory)
```
## Rounding Mode
-The [IntervalArithmetic.jl](https://github.com/JuliaIntervals/IntervalArithmetic.jl) package supports a number of different directed rounding
-modes. The default directed rounding mode is `:tight`. It is recommended that the
-user specify that `:accurate` directed rounding mode be used as it may results
-in a significant performance improvement. Setting a rounding mode can requires
-the redefinition of a number of functions. As a result, this should only be done
-at the top-level by the user (rather than by using keyword arguments). To set the
-rounding mode to `:accurate` using the following syntax when loading the EAGO package
-initially:
+The [IntervalArithmetic.jl](https://github.com/JuliaIntervals/IntervalArithmetic.jl) package supports a number of different directed rounding modes. The default directed rounding mode is `:tight`. It is recommended that the user specify the `:accurate` directed rounding mode, as it may result in a significant performance improvement. Setting a rounding mode can require the redefinition of a number of functions.
As a result, this should only be done at the top-level by the user (rather than by using keyword arguments). To set the rounding mode to `:accurate`, use the following syntax when initially loading the EAGO package:
```julia
using IntervalArithmetic; setrounding(Interval, :accurate)
@@ -35,11 +24,7 @@ using EAGO
## Ipopt Build
-Ipopt is the recommended solver for upper bounding problems. Ipopt's performance is highly
-dependent on the linear algebra package used (up to 30x). By default MUMPS is used.
-It is recommended that you either compile Ipopt with HSL MA57 or the Pardiso linear
-algebra packages with a machine specific Blas library (for Intel users the JuliaPro
-MKL version is recommended). For information on this, see the below links:
+Ipopt is the recommended solver for upper-bounding problems. Ipopt's performance is highly dependent on the linear algebra package used (up to 30x). By default, MUMPS is used. It is recommended that you compile Ipopt with either the HSL MA57 or the Pardiso linear algebra packages with a machine-specific BLAS library (for Intel users, the JuliaPro MKL version is recommended). For information on this, see the links below:
- [Compiling Ipopt](https://www.coin-or.org/Ipopt/documentation/node13.html)
- Julia specifics:
diff --git a/docs/src/optimizer/optimizer.md b/docs/src/optimizer/optimizer.md
index 70bed03b..aa80ac4a 100644
--- a/docs/src/optimizer/optimizer.md
+++ b/docs/src/optimizer/optimizer.md
@@ -1,7 +1,6 @@
# EAGO Optimizer
-The `EAGO.Optimizer` object holds all algorithm solution information. A description
-of all user-facing options has been provided in the docstring.
+The `Optimizer` object holds all algorithm solution information. A description of all user-facing options has been provided in the docstring.
## EAGO.Optimizer
```@docs
Optimizer
```
## EAGO Specific Functions and Operators
-EAGO supports a number of functions and operators that for which specialized relaxation
-routines are available. These can be registered and added to a JuMP model using the
-function
+EAGO supports a number of functions and operators for which specialized relaxation routines are available. These can be registered and added to a JuMP model using the function:
```@docs
EAGO.register_eago_operators!(m::JuMP.Model)
```
@@ -47,14 +44,4 @@ EAGO.initial_parse!(m::Optimizer{R,S,T}) where {R,S,T}
## Extending EAGO
-Functionality has been included that allows for extension's to EAGO's optimizer
-to readily be defined. This can be done in two ways first defining a new structure
-which is a subtype of `EAGO.ExtensionType` and overloading methods associated with
-this new structure. An instance of this new structure is provided to the `EAGO.Optimizer`
-using the `ext_type` keyword. This results in EAGO now dispatch to the new
-methods rather than the generally defined methods for the parent type. For a complete
-example, the reader is directed to the [interval bounding example](https://github.com/PSORLab/EAGO-notebooks/blob/master/notebooks/nlpopt_interval_bnb.ipynb) and [quasiconvex example](https://github.com/PSORLab/EAGO-notebooks/blob/master/notebooks/custom_quasiconvex.ipynb). Alternatively, the user can overload the `optimize_hook!` for
-this subtype which will entirely circumvent the default global solution routine. Additional
-information can be stored in the `ext` field of EAGO. In order to allow for compatibility
-between packages the user is encouraged to append their extension name to the start of each
-variable name (e.g. `newext_newdata`).
+Functionality has been included that allows extensions to EAGO's [`Optimizer`](@ref) to be readily defined. This can be done in two ways. The first is to define a new structure which is a subtype of [`ExtensionType`](@ref) and to overload the methods associated with this new structure. An instance of this new structure is provided to the [`Optimizer`](@ref) using the `ext_type` keyword. EAGO will then dispatch to the new methods rather than the generally defined methods for the parent type. For a complete example, the reader is directed to the [interval bounding example](@ref "Standard-Use Example 2") and the [quasiconvex example](@ref "Advanced-Use Example 1"). Alternatively, the user can overload the `optimize_hook!` for this subtype, which will entirely circumvent the default global solution routine. Additional information can be stored in the `ext` field of the [`Optimizer`](@ref). In order to allow for compatibility between packages, the user is encouraged to append their extension name to the start of each variable name (e.g., `newext_newdata`).
diff --git a/docs/src/optimizer/relax_back.md b/docs/src/optimizer/relax_back.md
index 9e6d5cba..f9f73e32 100644
--- a/docs/src/optimizer/relax_back.md
+++ b/docs/src/optimizer/relax_back.md
@@ -24,8 +24,7 @@ EAGO organizes information associated with each node in a given graph structure
EAGO.initialize!(::AbstractCache, ::AbstractDirectedGraph)
```
-Information in a given `EAGO.AbstractCache` is populated by performing a series of forward and reverse passes of the graph structure which dispatch off of an
-`EAGO.AbstractCacheAttribute` which indicates what particular information is desired.
+Information in a given `EAGO.AbstractCache` is populated by performing a series of forward and reverse passes of the graph structure which dispatch off of an `EAGO.AbstractCacheAttribute` which indicates what particular information is desired.
```@docs
EAGO.AbstractCacheAttribute
```
@@ -58,9 +57,7 @@ The forward and reverse routines are overloaded as follows:
EAGO.rprop!(t::AbstractCacheAttribute, v::Constant, g::AbstractDirectedGraph, c::AbstractCache, k::Int)
```
-Forward and reverse subroutines are overloaded for individual operators using through functions of the forms
-`fprop!(t::AbstractCacheAttribute, v::Val{AtomType}, g::AbstractDirectedGraph, b::AbstractCache, k::Int)` and
-`rprop!(t::AbstractCacheAttribute, v::Val{AtomType}, g::AbstractDirectedGraph, b::AbstractCache, k::Int)`.
+Forward and reverse subroutines are overloaded for individual operators through functions of the forms `fprop!(t::AbstractCacheAttribute, v::Val{AtomType}, g::AbstractDirectedGraph, b::AbstractCache, k::Int)` and `rprop!(t::AbstractCacheAttribute, v::Val{AtomType}, g::AbstractDirectedGraph, b::AbstractCache, k::Int)`.
## Other Routines
diff --git a/docs/src/optimizer/udf_utilities.md b/docs/src/optimizer/udf_utilities.md
index 54ad3234..5b0bc053 100644
--- a/docs/src/optimizer/udf_utilities.md
+++ b/docs/src/optimizer/udf_utilities.md
@@ -1,8 +1,6 @@
-# User-Defined Functions (UDFs) and Directed Acyclic Graph (DAG) Utilities
+# User-Defined Functions and Directed Acyclic Graph Utilities
-EAGO has included basic functionality to manipulate user-defined functions.
-These features are largely experimental and we're interested in providing
-additional for novel use cases.
+EAGO has included basic functionality to manipulate user-defined functions (UDFs). EAGO also has utilities for directed acyclic graphs (DAGs).
These features are largely experimental and we're interested in providing additional features for novel use cases. ## DAG Substitution and Flattening diff --git a/docs/src/quick_start/ex2.md b/docs/src/quick_start/ex2.md deleted file mode 100644 index 27a327e6..00000000 --- a/docs/src/quick_start/ex2.md +++ /dev/null @@ -1 +0,0 @@ -# Standard-Use Example 2 diff --git a/docs/src/quick_start/explicit_ann.md b/docs/src/quick_start/explicit_ann.md deleted file mode 100644 index 66313fed..00000000 --- a/docs/src/quick_start/explicit_ann.md +++ /dev/null @@ -1,92 +0,0 @@ -# Standard-Use Example 1 - -This example is also provided [here as a Jupyter Notebook](https://github.com/PSORLab/EAGO-notebooks/blob/master/notebooks/nlpopt_explicit_ann.ipynb). - -### Solving an ANN to Optimality in EAGO - -In [[1](#references),[2](#references)], a surrogate artificial neural network (ANN) model of bioreactor productivity was constructed by fitting results from computationally expensive computational fluid dynamics (CFD) simulations. -The authors then optimized this surrogate model to obtain ideal processing conditions. The optimization problem is given by: - -```math -\begin{aligned} -\max_{\mathbf x \in X} B_{2} + \sum_{r = 1}^{3} W_{2,r} \frac{2}{1 + \exp (-2y_{r} + B_{1,r})} \;\; \text{where} \;\; y_{r} = \sum_{i = 1}^{8} W_{1,ir} x_{i} -\end{aligned} -``` - -## Input Parameters - -In the first block, we input parameters values supplied in the paper for ``W_1``, ``W_2``, -``B_1``, and ``B_2`` into Julia as simple array objects. We also input bounds for the variables -which are used to scale the values obtained from optimization from ``[-1, 1]`` back into the -design values. - -```julia -using JuMP, EAGO - -# Weights associated with the hidden layer -W1 = [ 0.54 -1.97 0.09 -2.14 1.01 -0.58 0.45 0.26; - -0.81 -0.74 0.63 -1.60 -0.56 -1.05 1.23 0.93; - -0.11 -0.38 -1.19 0.43 1.21 2.78 -0.06 0.40] - -# Weights associated with the output layer -W2 = [-0.91 0.11 0.52] - -# Bias associated with the hidden layer -B1 = [-2.698 0.012 2.926] - -# Bias associated with the output layer -B2 = -0.46 - -# Variable bounds (Used to scale variables after optimization) -xLBD = [0.623, 0.093, 0.259, 6.56, 1114, 0.013, 0.127, 0.004] -xUBD = [5.89, 0.5, 1.0, 90, 25000, 0.149, 0.889, 0.049]; -``` - -## Construct the JuMP Model and Optimize - -We now formulate the problem using standard JuMP [[3](#references)] syntax and optimize it. Note that -we are forming an NLexpression object to handle the summation term to keep the code -visually simple, but this could be placed directly in the JuMP `@NLobjective` expression -instead. - -```julia -# Model construction -model = Model(optimizer_with_attributes(EAGO.Optimizer, "absolute_tolerance" => 0.001)) -@variable(model, -1.0 <= x[i=1:8] <= 1.0) -@NLexpression(model, y[r=1:3], sum(W1[r,i]*x[i] for i in 1:8)) -@NLobjective(model, Max, B2 + sum(W2[r]*(2/(1+exp(-2*y[r]+B1[r]))) for r=1:3)) - -# Solve the model -optimize!(model) -``` - -## Retrieve Results - -We then recover the objective value, the solution value, and termination status codes -using standard JuMP syntax. The optimal value and solution values are then rescaled -using the variable bounds to obtain their physical interpretations. 
- -```julia -# Access calculated values -fval = JuMP.objective_value(model) -xsol = JuMP.value.(x) -status_term = JuMP.termination_status(model) -status_prim = JuMP.primal_status(model) -println("EAGO terminated with a status of $status_term and a result code of $status_prim.") -println("The optimal value is: $(round(fval,digits=5)).") -println("The solution found is $(round.(xsol,digits=3)).") -println("") - -# Rescale values back to physical space -rescaled_fval = ((fval+1)/2)*0.07 -rescaled_xsol = ((xsol.+1.0)./2).*(xUBD-xLBD).+xLBD -println("Rescaled optimal value and solution values:") -println("The rescaled optimal value is: $(round(rescaled_fval,digits=4))") -println("The rescaled solution is $(round.(rescaled_xsol,digits=3)).") -``` - -## References - -1. J. D. Smith, A. A. Neto, S. Cremaschi, and D. W. Crunkleton, CFD-based optimization of a flooded bed algae bioreactor, *Industrial & Engineering Chemistry Research*, 52 (2012), pp. 7181–7188. -2. A. M. Schweidtmann and A. Mitsos. Global Deterministic Optimization with Artificial Neural Networks Embedded [https://arxiv.org/pdf/1801.07114.pdf](https://arxiv.org/pdf/1801.07114.pdf). -3. Iain Dunning and Joey Huchette and Miles Lubin. JuMP: A Modeling Language for Mathematical Optimization, *SIAM Review*, 59 (2017), pp. 295-320. diff --git a/docs/src/quick_start/guidelines.md b/docs/src/quick_start/guidelines.md deleted file mode 100644 index 91ae92f0..00000000 --- a/docs/src/quick_start/guidelines.md +++ /dev/null @@ -1,196 +0,0 @@ -# Customization Guidelines - -This section contains general guidelines on how the functionality of EAGO can be extended -for specific use cases. Many functions in EAGO are extensible, so examples are not provided -for every possible case, but some important functions are covered. If there is a use case -you do not see provided here, but would like to see, please contact -[Robert Gottlieb](https://psor.uconn.edu/person/robert-gottlieb/). - -## 1) Creating an Extension - -Extensibility in EAGO is based on the `EAGO.ExtensionType` type. To create customized -functions, the recommended method is to create a new structure, as follows: - -```julia -using EAGO - -struct MyNewStruct <: EAGO.ExtensionType end -``` - -To let EAGO know that you would like to use this extension (and any functions you -overload), when you create the JuMP model, declare your new type in the SubSolvers -field of EAGO's optimizer as follows: - -```julia -using JuMP - -factory = () -> EAGO.Optimizer(SubSolvers(; t = MyNewStruct() )) -model = Model(factory) -``` - -The key point to note here is that the new structure is set to `SubSolvers.t`, which -is the field that holds any user-defined extension type. Now, when EAGO calls any of -its functions, it will check to see if a custom function for this new extension has -been created. If so, it will use the new one; if not, it will use the default version -of that function. - -## 2) Preprocessing - -In EAGO's branch-and-bound routine, preprocessing is performed prior to solving the -lower and upper problems. By default, linear and quadratic contractor methods are -performed, followed by interval constraint propagation, then optimization-based -bounds tightening. An important outcome of EAGO's default preprocessing is that -the `_preprocess_feasibility` field of the `EAGO.GlobalOptimizer` is set to `true`, -unless the subproblem at the current node has been proven infeasible. Therefore, -to bypass preprocessing for your own problem, you must, at a minimum, set this -field to `true`. 
For example: - -```julia -import EAGO: preprocess! -function EAGO.preprocess!(t::MyNewStruct, x::EAGO.GlobalOptimizer) - x._preprocess_feasibility = true -end -``` - -The user-defined preprocessing step can be as simple or complex as desired, but -if `_preprocess_feasibility` is not set to `true`, EAGO will assume each node -is infeasible. - -## 3) Lower Problem - -By default, EAGO applies Kelley's cutting-plane algorithm [[1](#references)] to solve the lower bounding -problem. This can be overloaded using the same syntax as for the other functions. -Necessary changes to the `EAGO.GlobalOptimizer` that occur within the lower problem -are changing the `_lower_objective_value` and `_lower_feasibility` fields. If the -`_lower_objective_value` field is not changed, branch-and-bound will not update -the lower bound. If `_lower_feasibility` is not set to `true`, the node will be -discarded as infeasible. A minimum functional (though not useful) lower problem -extension is as follows: - -```julia -import EAGO: lower_problem! -function EAGO.lower_problem!(t::MyNewStruct, x::EAGO.GlobalOptimizer) - x._lower_objective_value = -Inf - x._lower_feasibility = true -end -``` - -Any arbitrarily complex lower problem can be substituted here in place of EAGO's -default method, as long as these fields are updated. Note also that, although there -is a separate upper problem function, if the lower problem is being replaced by -an algorithm that also calculates an upper objective value, the necessary fields -to update in `upper_problem!` can simply be updated here, and the `upper_problem!` -can be overloaded by a function that does `nothing`. - -## 4) Upper Problem - -By default, the upper bounding problem is run on every node up to depth -`upper_bounding_depth`, and is triggered with a probability of `0.5^(depth - upper_bounding_depth)` -afterwards for continuous problems. For integer problems, this approach -is used in addition to running on every node up to depth `upper_bounding_depth + cont_depth`, -with another trigger of probability `0.5^(depth - upper_bounding_depth - cont_depth)`. -The `upper_bounding_depth` and `cont_depth` are fields of `GlobalOptimizer._parameters` -and can be changed from their default values when the JuMP model is created, or -at any time afterwards. If any of these trigger conditions are met, the -default EAGO upper problem runs a local NLP solve. - -The important fields to update in the `upper_problem!` are the `_upper_objective_value` -and `_upper_feasibility` fields. If `_upper_objective_value` is not updated, -the upper bound in the branch-and-bound algorithm will not update and the problem -will not converge. If `_upper_feasibility` is not set to true, then any changes -made to `_upper_objective_value` will not be updated for each node and the problem -will not converge. A minimum functional (though not useful) upper problem -extension is as follows: - -```julia -import EAGO: upper_problem! -function upper_problem!(t::MyNewStruct, x::EAGO.GlobalOptimizer) - x._upper_objective_value = Inf - x._upper_feasibility = true -end -``` - -This example upper problem will set the upper bound on the objective value to -`Inf` for each node. Given that no useful information is provided here, we could -also have set `x_upper_feasibility` to `false` (or equivalently, remove the line -where we set it to `true`), and the global upper bound would never be updated. 
- -Note that if the `_upper_objective_value` is changed elsewhere, such as in the -definition for the `lower_problem!`, the `_upper_feasibility` flag must be set -to `true`. If this is not done, the change to the `_upper_objective_value` will -be discarded. - -## 5) Convergence Check - -By default, EAGO checks to see if the lower and upper bounds have converged to -within either the absolute or relative tolerance. This method of checking convergence -may not be desired if, for example, only the absolute tolerance is relevant, and you -want to ensure that the program does not end prematurely due to a relative tolerance -limit being reached. The fields to check are `_lower_objective_value` and -`_upper_objective_value` for the best-identified global lower and upper bounds, -respectively. This function should return `true` if the lower and upper bounds have -converged, and `false` otherwise. An example of how to specify that the convergence -check only use the absolute tolerance is as follows: - -```julia -import EAGO: convergence_check -function EAGO.convergence_check(t::MyNewStruct, x::EAGO.GlobalOptimizer) - gap = (x._upper_objective_value - x._lower_objective_value) - return (gap <= x._parameters.absolute_tolerance) -end -``` - -## 6) Postprocessing - -Postprocessing is the final step before a node is branched on. By default, EAGO -performs duality-based bounds tightening[2] up to an iteration limit set by -`dbbt_depth` in the `GlobalOptimizer._parameters` field. The important field -to update in postprocessing is `_postprocess_feasibility`, which must be set -to `true` for EAGO to branch on any given node. The minimum working postprocessing -function is therefore: - -```julia -import EAGO: postprocess! -function EAGO.postprocess!(t::MyNewStruct, x::EAGO.GlobalOptimizer) - x._postprocess_feasibility = true -end -``` - -If `_postprocess_feasibility` is not set to `true`, no nodes will be branched on. - -## 7) Termination Check - -This is the check that occurs on each iteration of the branch-and-bound algorithm -that determines whether the algorithm continues or not. By default, several -conditions are checked for such as the satisfaction of absolute or relative -tolerances, solution infeasibility, or other specified limits. This function -returns `true` if any of the stopping conditions have been met, and -branch-and-bound should stop, and `false` otherwise. The important fields to -update are `_end_state`, which takes values of `EAGO.GlobalEndState`, -`_termination_status_code`, which takes values of `MathOptInterface.TerminationStatusCode`, -and `_result_status_code`, which takes values of `MathOptInterface.ResultStatusCode`. -Combined, these fields provide information about the branch-and-bound completion -status and result feasibility. - -As an example, suppose we have updated the `convergence_check` function to -only check for absolute tolerance, and based on our knowledge of the problem, -this is the only condition we care about, and we know the solution will be -optimal. We could then overload the `termination_check` function as follows: - -```julia -import EAGO: termination_check -function EAGO.termination_check(t::MyNewStruct, x::EAGO.GlobalOptimizer) - flag = EAGO.convergence_check(t, x) - if flag - x._end_state = EAGO.GS_OPTIMAL - x._termination_status_code = MathOptInterface.OPTIMAL - x._result_status_code = MathOptInterface.FEASIBLE_POINT - end - return flag -end -``` - -## References - -1. Kelley, J. E. 
“The Cutting-Plane Method for Solving Convex Programs.” *Journal of the Society for Industrial and Applied Mathematics*, vol. 8, no. 4, pp. 703–12 (1960). -2. Tawarmalani, M., Sahinidis, N. V. "Global optimization of mixed-integer nonlinear programs: A theoretical and computational study." *Math. Program., Ser. A*, 99, pp. 563-591 (2004). diff --git a/docs/src/quick_start/qc_Equation_1.png b/docs/src/quick_start/qc_Equation_1.png deleted file mode 100644 index 48aef029..00000000 Binary files a/docs/src/quick_start/qc_Equation_1.png and /dev/null differ diff --git a/docs/src/quick_start/qc_Equation_2.png b/docs/src/quick_start/qc_Equation_2.png deleted file mode 100644 index ddad2085..00000000 Binary files a/docs/src/quick_start/qc_Equation_2.png and /dev/null differ diff --git a/docs/src/quick_start/qc_Equation_3.png b/docs/src/quick_start/qc_Equation_3.png deleted file mode 100644 index 3bdb7d91..00000000 Binary files a/docs/src/quick_start/qc_Equation_3.png and /dev/null differ diff --git a/docs/src/quick_start/qs_landing.md b/docs/src/quick_start/qs_landing.md index 20512079..0c0c2d2f 100644 --- a/docs/src/quick_start/qs_landing.md +++ b/docs/src/quick_start/qs_landing.md @@ -1,24 +1,15 @@ # Quick Start -EAGO is a global optimizer primarily meant to be used with the JuMP algebraic modeling -language. Typical use will involve installing EAGO and JuMP, creating a problem using -JuMP syntax, and passing the problem to the EAGO optimizer. +EAGO is a global optimizer primarily meant to be used with the JuMP algebraic modeling language. Typical use will involve installing EAGO and JuMP, creating a problem using JuMP syntax, and passing the problem to the [`Optimizer`](@ref). ## Customization -EAGO is designed to be easily extensible. Some of the examples that follow include use -cases where the standard EAGO functionality is overloaded and readily incorporated into -the main optimization routine. Information on how to extend the main branch-and-bound -functions (including lower and upper bounding routines) can be found in the -[Customization Guidelines](https://psorlab.github.io/EAGO.jl/dev/quick_start/guidelines/) section. +EAGO is designed to be easily extensible. Some of the examples that follow include use cases where the standard EAGO functionality is overloaded and readily incorporated into the main optimization routine. Information on how to extend the main branch-and-bound functions (including lower and upper-bounding routines) can be found in the [Customization Guidelines](@ref) section. ## Examples -The following pages in this section include several representative examples of how EAGO -can be used. Additional (and in some cases, shortened) examples can be found in the -[EAGO-notebooks repository](https://github.com/PSORLab/EAGO-notebooks/blob/master/notebooks). -Examples and instructional pages in this section include: +The following pages in this section include several representative examples of how EAGO can be used. Additional (and in some cases, shortened) examples can be found in the [EAGO-notebooks repository](https://github.com/PSORLab/EAGO-notebooks/blob/master/notebooks). Examples and instructional pages in this section include: - [Standard-Use Example 1](@ref): A base-case optimization problem solved using the EAGO optimizer. No extensions or function overloading required. 
-- [Standard-Use Example 2](@ref): user-defined functions (TODO)
-- [Advanced-Use Example 1](@ref): A quasiconvex optimization problem solved by overloading some of EAGO's functionality to implement a bisection-based algorithm instead of typical branch-and-bound. (TODO, but see the [Jupyter Notebook version](https://github.com/PSORLab/EAGO-notebooks/blob/master/notebooks/custom_quasiconvex.ipynb))
+- [Standard-Use Example 2](@ref): An interval bounding example where the user redefines EAGO's lower- and upper-bounding functions.
+- [Advanced-Use Example 1](@ref): A quasiconvex optimization problem solved by overloading some of EAGO's functionality to implement a bisection-based algorithm instead of typical branch-and-bound.
 - [Advanced-Use Example 2](@ref): Overloading the branch-and-bound algorithm with a custom extension type.
diff --git a/docs/src/quick_start/quasiconvex.md b/docs/src/quick_start/quasiconvex.md
deleted file mode 100644
index 49080c2e..00000000
--- a/docs/src/quick_start/quasiconvex.md
+++ /dev/null
@@ -1,105 +0,0 @@
-# Advanced-Use Example 1
-
-This example still needs to be updated.
-This example is also provided [here as a Jupyter Notebook](https://github.com/PSORLab/EAGO-notebooks/blob/master/notebooks/custom_quasiconvex.ipynb).
-
-### Customizing EAGO to Solve a Quasiconvex Problem
-
-In this example, we'll adapt EAGO to implement the bisection-based algorithm used to solve
-a quasiconvex optimization problem presented in [[1](#references)]:
-
-![Equation 1](qc_Equation_1.png)
-
-where:
-
-![Equation 2](qc_Equation_2.png)
-
-Interval analysis shows that the objective value is bounded by the interval $F$ such that
-$f^* ∈ F = [f^L, f^U] = [-5, 0]$. Introducing an auxiliary variable $t ∈ T = F$ allows the
-problem to be formulated as:
-
-![Equation 3](qc_Equation_3.png)
-
-Let $ϕ_τ(y) = f(y) - τ$, where $τ = (t^L + t^U)/2$. We solve for $y$ subject to
-constraints (24)-(27) where $ϕ_τ(y) ≤ 0$. If this is feasible, then $t^* ∈ [t^L, τ]$;
-otherwise, $t^* ∈ [τ, t^U]$. The interval containing $t^*$ is kept and the other is
-fathomed. This manner of bisection is repeated until an interval containing a feasible
-solution with a width of at most ϵ is located [[2](#references)].
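To make the bisection procedure concrete, here is a purely illustrative Julia sketch of the outer loop just described. `solve_feasibility` is a hypothetical stand-in for the global feasibility subproblem over constraints (24)-(27) with $ϕ_τ(y) ≤ 0$; it is not an EAGO API:

```julia
# Illustrative sketch of the bisection loop on the auxiliary interval [tL, tU].
# `solve_feasibility(τ)` is assumed to return `true` when a point y satisfying
# constraints (24)-(27) and ϕ_τ(y) ≤ 0 exists.
function bisect_quasiconvex(tL::Float64, tU::Float64; ϵ::Float64 = 1e-3)
    while tU - tL > ϵ
        τ = 0.5 * (tL + tU)
        if solve_feasibility(τ)   # feasible, so t* ∈ [tL, τ]; fathom the upper half
            tU = τ
        else                      # infeasible, so t* ∈ [τ, tU]; fathom the lower half
            tL = τ
        end
    end
    return tL, tU
end
```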
-
-## EAGO Implementation
-
-In the first block, we input the parameter values supplied in the paper for $W_1$, $W_2$,
-$B_1$, and $B_2$ into Julia as simple array objects. We also input bounds for the
-variables, which are used to scale the values obtained from optimization from [-1, 1]
-back into the design values.
-
-```julia
-using JuMP, EAGO
-
-# Weights associated with the hidden layer
-W1 = [ 0.54 -1.97 0.09 -2.14 1.01 -0.58 0.45 0.26;
-      -0.81 -0.74 0.63 -1.60 -0.56 -1.05 1.23 0.93;
-      -0.11 -0.38 -1.19 0.43 1.21 2.78 -0.06 0.40]
-
-# Weights associated with the output layer
-W2 = [-0.91 0.11 0.52]
-
-# Bias associated with the hidden layer
-B1 = [-2.698 0.012 2.926]
-
-# Bias associated with the output layer
-B2 = -0.46
-
-# Variable bounds (used to scale variables after optimization)
-xLBD = [0.623, 0.093, 0.259, 6.56, 1114, 0.013, 0.127, 0.004]
-xUBD = [5.89, 0.5, 1.0, 90, 25000, 0.149, 0.889, 0.049];
-```
-
-## Construct the JuMP Model and Optimize
-
-We now formulate the problem using standard JuMP [[3](#references)] syntax and optimize it.
-Note that we form a nonlinear expression with `@NLexpression` to handle the summation term
-and keep the code visually simple, but this could be placed directly in the JuMP
-`@NLobjective` expression instead.
-
-```julia
-# Model construction
-model = Model(optimizer_with_attributes(EAGO.Optimizer, "absolute_tolerance" => 0.001))
-@variable(model, -1.0 <= x[i=1:8] <= 1.0)
-@NLexpression(model, y[r=1:3], sum(W1[r,i]*x[i] for i in 1:8))
-@NLobjective(model, Max, B2 + sum(W2[r]*(2/(1+exp(-2*y[r]+B1[r]))) for r=1:3))
-
-# Solve the model
-optimize!(model)
-```
-
-## Retrieve Results
-
-We then recover the objective value, the solution values, and the termination status
-codes using standard JuMP syntax. The optimal value and solution values are then
-rescaled using the variable bounds to obtain their physical interpretations.
-
-```julia
-# Access calculated values
-fval = JuMP.objective_value(model)
-xsol = JuMP.value.(x)
-status_term = JuMP.termination_status(model)
-status_prim = JuMP.primal_status(model)
-println("EAGO terminated with a status of $status_term and a result code of $status_prim.")
-println("The optimal value is: $(round(fval,digits=5)).")
-println("The solution found is $(round.(xsol,digits=3)).")
-println("")
-
-# Rescale values back to physical space
-rescaled_fval = ((fval+1)/2)*0.07
-rescaled_xsol = ((xsol.+1.0)./2).*(xUBD-xLBD).+xLBD
-println("Rescaled optimal value and solution values:")
-println("The rescaled optimal value is: $(round(rescaled_fval,digits=4))")
-println("The rescaled solution is $(round.(rescaled_xsol,digits=3)).")
-```
-
-## References
-
-1. C. Jansson, "Quasiconvex relaxations based on interval arithmetic," *Linear Algebra and its Applications*, vol. 324, pp. 27–53 (2001).
-2. S. Boyd and L. Vandenberghe, *Convex Optimization*, Cambridge University Press (2004).
-3. I. Dunning, J. Huchette, and M. Lubin, "JuMP: A Modeling Language for Mathematical Optimization," *SIAM Review*, vol. 59, pp. 295–320 (2017).
diff --git a/docs/src/semiinfinite/SIPProbFormulation.png b/docs/src/semiinfinite/SIPProbFormulation.png
deleted file mode 100644
index 95419e4d..00000000
Binary files a/docs/src/semiinfinite/SIPProbFormulation.png and /dev/null differ
diff --git a/docs/src/semiinfinite/SIPformulation.png b/docs/src/semiinfinite/SIPformulation.png
deleted file mode 100644
index 8c9256fe..00000000
Binary files a/docs/src/semiinfinite/SIPformulation.png and /dev/null differ
diff --git a/docs/src/semiinfinite/semiinfinite.md b/docs/src/semiinfinite/semiinfinite.md
index 7f05822c..50c19f6c 100644
--- a/docs/src/semiinfinite/semiinfinite.md
+++ b/docs/src/semiinfinite/semiinfinite.md
@@ -1,32 +1,44 @@
 # Solving Semi-Infinite Programs
 
-## Using EAGO to Solve a Semi-Infinite Program (SIP)
+## Using EAGO to Solve a Semi-Infinite Program
 
-Semi-infinite programming remains an active area of research. In general, the solutions of SIPs with nonconvex semi-infinite constraints of the following form are extremely challenging:
+This example is also provided [here as a Jupyter Notebook](https://github.com/PSORLab/EAGO-notebooks/blob/master/notebooks/sip_explicit_solve.ipynb).
 
-![SipProbForm](SIPProbFormulation.png)
+Semi-infinite programming remains an active area of research. In general, the solutions of semi-infinite programs (SIPs) with nonconvex semi-infinite constraints of the following form are extremely challenging:
 
-EAGO implements three different algorithms detailed in [[1](#references),[2](#references)] to determine a globally optimal solution to problems of the above form. This is accomplished using the `sip_solve` function which returns the optimal value, the solution, and a boolean feasibility flag. To illustrate the use of this function, a simple example is presented here which solves the problem:
+```math
+\begin{aligned}
+f^{*} = & \min_{\mathbf x \in X} f(\mathbf x) \\
+{\rm s.t.} \; \; & g(\mathbf x, \mathbf p) \leq 0, \; \; \; \; \forall \mathbf p \in P \\
+& \mathbf x \in X = \{ \mathbf x \in \mathbb R^{n_{x}} : \mathbf x^{L} \leq \mathbf x \leq \mathbf x^{U} \} \\
+& P = \{ \mathbf p \in \mathbb R^{n_{p}} : \mathbf p^{L} \leq \mathbf p \leq \mathbf p^{U} \}
+\end{aligned}
+```
+
+EAGO implements three different algorithms detailed in [[1](#References), [2](#References)] to determine a globally optimal solution to problems of the above form. This is accomplished using the [`sip_solve`](@ref) function, which returns the optimal value, the solution, and a Boolean feasibility flag. To illustrate the use of this function, a simple example is presented here which solves the problem:
 
-![SipForm](SIPformulation.png)
+```math
+\begin{aligned}
+f(\mathbf x) & = \frac{1}{3} x_{1}^{2} + x_{2}^{2} + \frac{x_{1}}{2} \\
+g(\mathbf x, p) & = (1 - x_{1}^{2} p^{2})^{2} - x_{1} p^{2} - x_{2}^{2} + x_{2} \leq 0 \\
+& \mathbf x \in X = [-1000, 1000]^{2} \\
+& p \in P = [0, 1]
+\end{aligned}
+```
 
 ```julia
 using EAGO, JuMP
 
 # Define semi-infinite program
 f(x) = (1/3)*x[1]^2 + x[2]^2 + x[1]/2
-gSIP(x,p) = (1.0 - (x[1]^2)*(p[1]^2))^2 - x[1]*p[1]^2 - x[2]^2 + x[2]
+gSIP(x, p) = (1.0 - x[1]^2*p[1]^2)^2 - x[1]*p[1]^2 - x[2]^2 + x[2]
 
 x_l = Float64[-1000.0, -1000.0]
 x_u = Float64[1000.0, 1000.0]
 p_l = Float64[0.0]
 p_u = Float64[1.0]
 
-sip_result = sip_solve(SIPRes(), x_l, x_u, p_l, p_u, f, Any[gSIP], abs_tolerance = 1E-3)
-
-println("The global minimum of the semi-infinite program is between: $(sip_result.lower_bound) and $(sip_result.upper_bound).")
-println("The global minimum is attained at: x = $(sip_result.xsol).")
-println("Is the problem feasible? $(sip_result.feasibility).")
+sip_result = sip_solve(SIPRes(), x_l, x_u, p_l, p_u, f, Any[gSIP], res_sip_absolute_tolerance = 1E-3);
 ```
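For reference, the `println` calls removed above show how the returned result can be inspected. A short sketch, with the field names `lower_bound`, `upper_bound`, `xsol`, and `feasibility` taken from those removed lines:

```julia
# Inspect the result returned by sip_solve (field names taken from the
# removed println lines above)
println("The global minimum is between $(sip_result.lower_bound) and $(sip_result.upper_bound).")
println("The global minimum is attained at x = $(sip_result.xsol).")
println("Is the problem feasible? $(sip_result.feasibility)")
```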
 
 ## Semi-Infinite Solver
diff --git a/images/OptForm.svg b/images/OptForm.svg
deleted file mode 100644
index a49de3a9..00000000
--- a/images/OptForm.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/images/ProcessFormulation.svg b/images/ProcessFormulation.svg
deleted file mode 100644
index 6f78380f..00000000
--- a/images/ProcessFormulation.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/images/math-code.js b/images/math-code.js
deleted file mode 100644
index a4358069..00000000
--- a/images/math-code.js
+++ /dev/null
@@ -1,20 +0,0 @@
-(function() {
-  var i, text, code, codes = document.getElementsByTagName('code');
-  for (i = 0; i < codes.length;) {
-    code = codes[i];
-    if (code.parentNode.tagName !== 'PRE' && code.childElementCount === 0) {
-      text = code.textContent;
-      if (/^\$[^$]/.test(text) && /[^$]\$$/.test(text)) {
-        text = text.replace(/^\$/, '\\(').replace(/\$$/, '\\)');
-        code.textContent = text;
-      }
-      if (/^\\\((.|\s)+\\\)$/.test(text) || /^\\\[(.|\s)+\\\]$/.test(text) ||
-          /^\$(.|\s)+\$$/.test(text) ||
-          /^\\begin\{([^}]+)\}(.|\s)+\\end\{[^}]+\}$/.test(text)) {
-        code.outerHTML = code.innerHTML; // remove <code></code>
-        continue;
-      }
-    }
-    i++;
-  }
-})();
diff --git a/src/eago_optimizer/moi_wrapper.jl b/src/eago_optimizer/moi_wrapper.jl
index 8ddc1ed1..1b6301fd 100644
--- a/src/eago_optimizer/moi_wrapper.jl
+++ b/src/eago_optimizer/moi_wrapper.jl
@@ -192,7 +192,7 @@
 MOI.get(m::Optimizer, ::MOI.TerminationStatus) = m._termination_status_code
 MOI.get(m::Optimizer, ::MOI.SolveTimeSec) = m._run_time
 MOI.get(m::Optimizer, ::MOI.NodeCount) = m._node_count
 MOI.get(m::Optimizer, ::MOI.ResultCount) = (m._result_status_code === MOI.FEASIBLE_POINT) ? 1 : 0
-MOI.get(m::Optimizer, ::MOI.TimeLimitSec) = m._parameters.time_limit
+MOI.get(m::Optimizer, ::MOI.TimeLimitSec) = isinf(m._parameters.time_limit) ? nothing : m._parameters.time_limit
 MOI.get(m::Optimizer, ::MOI.Silent) = m._parameters.verbosity == 0
 MOI.get(m::Optimizer, ::MOI.ListOfVariableIndices) = [VI(i) for i = 1:m._input_problem._variable_count]
diff --git a/src/eago_optimizer/types/extension.jl b/src/eago_optimizer/types/extension.jl
index 7c2b6088..7ed83544 100644
--- a/src/eago_optimizer/types/extension.jl
+++ b/src/eago_optimizer/types/extension.jl
@@ -3,7 +3,7 @@
 An abstract type whose subtypes are associated with the functions and methods
 overloaded for new extensions. An instance of this is the `DefaultExt <: ExtensionType`
-structure in the EAGO `Optimizer` in the `ext_type` field.
+structure in the `ext_type` field of the [`Optimizer`](@ref).
 """
 abstract type ExtensionType end
 struct DefaultExt <: ExtensionType end
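To make the `ext_type` mechanism concrete, here is a minimal editorial sketch of a custom extension type, following the `MyNewStruct <: EAGO.ExtensionType` pattern used in the guidelines above. The `preprocess!` overload and `_preprocess_feasibility` flag are assumptions made by analogy with the postprocessing example; any routine not overloaded for the custom type falls back to the `DefaultExt` behavior:

```julia
using EAGO

# A custom extension type; EAGO dispatches on it wherever a routine has been
# overloaded and falls back to the DefaultExt behavior otherwise.
struct MyNewStruct <: EAGO.ExtensionType end

# Sketch: overload one routine (assumes a `preprocess!` routine and a
# `_preprocess_feasibility` flag analogous to the postprocessing example)
import EAGO: preprocess!
function EAGO.preprocess!(t::MyNewStruct, x::EAGO.GlobalOptimizer)
    x._preprocess_feasibility = true
end
```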
diff --git a/src/eago_optimizer/types/global_optimizer.jl b/src/eago_optimizer/types/global_optimizer.jl
index e8130f71..6cdcbf3f 100644
--- a/src/eago_optimizer/types/global_optimizer.jl
+++ b/src/eago_optimizer/types/global_optimizer.jl
@@ -48,7 +48,7 @@
 $(TYPEDEF)
 
 An Enum of possible values for EAGO's termination status. This attribute is used
 by EAGO to explain why the optimizer stopped executing in the most recent call
-to `optimize!`. See also [`MathOptInterface.TerminationStatusCode`](@ref).
+to `optimize!`. See also [`MathOptInterface.TerminationStatusCode`](https://jump.dev/MathOptInterface.jl/stable/reference/models/#MathOptInterface.TerminationStatusCode).
 If no call has been made to `optimize!`, the `GlobalEndState` value is:
 - `GS_UNSET`: The optimization algorithm has not started.
@@ -585,7 +585,7 @@ Base.@kwdef mutable struct GlobalOptimizer{Q,S,T<:ExtensionType} <: MOI.Abstract
     _auxiliary_variable_info::Union{Nothing,_AuxVarData} = nothing
     "Variables to perform OBBT on (default: all variables in nonlinear expressions)"
     obbt_variable_values::Vector{Bool} = Bool[]
-    "Specifies that the optimize_hook! function should be called rather than throw the
+    "Specifies that the [`optimize_hook!`](@ref) function should be called rather than passing the
     problem to the standard routine"
     enable_optimize_hook::Bool = false
     "(Deprecated, use _subsolvers instead) Storage for custom extension types"
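For context, `optimize_hook!` is the entry point that `enable_optimize_hook` diverts to. A minimal editorial sketch of an overload, with a placeholder body (the exact wiring of the extension instance into the `Optimizer` depends on the EAGO version):

```julia
using EAGO

struct MyNewStruct <: EAGO.ExtensionType end

# Sketch: with `enable_optimize_hook = true`, EAGO calls this hook instead of
# its standard solve routine. The body here is a placeholder.
import EAGO: optimize_hook!
function EAGO.optimize_hook!(t::MyNewStruct, m::EAGO.Optimizer)
    # Custom solve logic goes here.
    return nothing
end
```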