
Commit

Merge branch 'master' into gradients-for-covariance-gpy
apaleyes authored Mar 9, 2021
2 parents bda97ad + b793989 commit 6738faa
Showing 9 changed files with 841 additions and 10 deletions.
14 changes: 7 additions & 7 deletions .github/workflows/tests.yml
@@ -12,7 +12,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: [3.7]
+        python-version: [3.7, 3.8]

     steps:
     - uses: actions/checkout@v2
@@ -30,12 +30,12 @@ jobs:
         # work around issues with GPy setting matplotlib backend
         echo 'backend: Agg' > matplotlibrc
         pip install .
-    # - name: Lint with flake8
-    #   run: |
-    #     # stop the build if there are Python syntax errors or undefined names
-    #     flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
-    #     # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
-    #     flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
+    - name: Lint with flake8
+      run: |
+        # stop the build if there are Python syntax errors or undefined names
+        flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
+        # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
+        flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
     - name: Unit tests
       run: |
         python -m pytest
10 changes: 10 additions & 0 deletions CHANGELOG.md
@@ -1,6 +1,16 @@
# Changelog
All notable changes to Emukit will be documented in this file.

+## [0.4.8]
+- Added Sobol initial design
+- BanditParameter
+- Boolean operations for stopping conditions
+- Preferential Bayesian optimization example
+- MUMBO acquisition function
+- Revised dependencies' version requirements
+- Bug fixes
+- Doc fixes
+
## [0.4.7]
- Added simple GP model for examples
- Bayesian optimization with unknown constraints
2 changes: 1 addition & 1 deletion emukit/__version__.py
@@ -11,4 +11,4 @@
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
# ==============================================================================
-__version__ = '0.4.7' # noqa
+__version__ = '0.4.8' # noqa
1 change: 1 addition & 0 deletions emukit/examples/dynamic_negative_lower_confidence_bound/__init__.py
@@ -0,0 +1 @@
+from .dnlcb import DynamicNegativeLowerConfidenceBound
31 changes: 31 additions & 0 deletions emukit/examples/dynamic_negative_lower_confidence_bound/dnlcb.py
@@ -0,0 +1,31 @@
from typing import Union

import numpy as np

from emukit.bayesian_optimization.acquisitions import NegativeLowerConfidenceBound
from emukit.core.interfaces import IDifferentiable, IModel


class DynamicNegativeLowerConfidenceBound(NegativeLowerConfidenceBound):

    def __init__(self, model: Union[IModel, IDifferentiable], input_space_size: int, delta: float) -> None:
        """
        Dynamic extension of the LCB acquisition. The beta coefficient is updated at each iteration based on the
        exploration parameter delta, which is inversely related to beta itself: the higher the delta, the less
        explorative the selection will be.

        Note that regret bounds based on the dynamic exploration coefficient only hold for selected kernel classes
        exhibiting boundedness and smoothness. See the base class for paper references.

        This class may also serve as a reference for acquisition functions that dynamically update their parameters
        through the update_parameters() hook. The implicit assumption is that this method is invoked once per
        iteration (a constant number of invocations per iteration is also acceptable; any more than that and beta
        is increased too fast).

        :param model: the underlying model that provides the predictive mean and variance for the given test points
        :param input_space_size: the size of the finite grid D on which the function is evaluated
        :param delta: the exploration parameter determining the beta exploration coefficient; delta must be
                      in (0, 1) and is inversely related to beta
        """
        assert input_space_size > 0, "Invalid dimension provided"
        assert 0 < delta < 1, "Delta must be in (0, 1)"
        super().__init__(model)
        self.input_space_size = input_space_size
        self.delta = delta
        self.iteration = 0

    def optimal_beta_selection(self) -> float:
        # beta_t = 2 * log(|D| * t^2 * pi^2 / (6 * delta)), the exploration coefficient from the GP-UCB
        # regret analysis of Srinivas et al. (2010)
        return 2 * np.log(self.input_space_size * (self.iteration ** 2) * (np.pi ** 2) / (6 * self.delta))

    def update_parameters(self) -> None:
        self.iteration += 1
        self.beta = self.optimal_beta_selection()

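For context, a minimal usage sketch (not part of the commit) of how this acquisition could be wired into a standard Emukit optimization loop. The toy objective, the grid size of 1000, and delta = 0.1 are illustrative assumptions; the loop is expected to invoke update_parameters() once per iteration, as the docstring above assumes.

import numpy as np
import GPy

from emukit.bayesian_optimization.loops import BayesianOptimizationLoop
from emukit.core import ContinuousParameter, ParameterSpace
from emukit.examples.dynamic_negative_lower_confidence_bound import DynamicNegativeLowerConfidenceBound
from emukit.model_wrappers import GPyModelWrapper

# Toy objective on [0, 1] (illustrative only)
def f(x):
    return np.sin(3 * x) + x ** 2

space = ParameterSpace([ContinuousParameter('x', 0, 1)])

# Small random initial design for the GP model
x_init = np.random.rand(5, 1)
y_init = f(x_init)
model = GPyModelWrapper(GPy.models.GPRegression(x_init, y_init))

# input_space_size=1000 assumes the function is evaluated on a finite grid D of
# 1000 candidate points; smaller delta means larger beta and more exploration
acquisition = DynamicNegativeLowerConfidenceBound(model, input_space_size=1000, delta=0.1)

loop = BayesianOptimizationLoop(space=space, model=model, acquisition=acquisition)
loop.run_loop(f, 10)  # the loop calls update_parameters() once per iteration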
Large diffs are not rendered by default.

9 changes: 8 additions & 1 deletion emukit/model_wrappers/gpy_model_wrappers.py
@@ -147,7 +147,14 @@ def generate_hyperparameters_samples(self, n_samples=20, n_burnin=100, subsample
"""
self.model.optimize(max_iters=self.n_restarts)
self.model.param_array[:] = self.model.param_array * (1. + np.random.randn(self.model.param_array.size) * 0.01)
# Add jitter to all unfixed parameters. After optimizing the hyperparameters, the gradient of the
# posterior probability of the parameters wrt. the parameters will be close to 0.0, which is a poor
# initialization for HMC
unfixed_params = [param for param in self.model.flattened_parameters if not param.is_fixed]
for param in unfixed_params:
# Add jitter by multiplying with log-normal noise with mean 1 and standard deviation 0.01
# This ensures the sign of the parameter remains the same
param *= np.random.lognormal(np.log(1. / np.sqrt(1.0001)), np.sqrt(np.log(1.0001)), size=param.size)
hmc = GPy.inference.mcmc.HMC(self.model, stepsize=step_size)
samples = hmc.sample(num_samples=n_burnin + n_samples * subsample_interval, hmc_iters=leapfrog_steps)
hmc_samples = samples[n_burnin::subsample_interval]
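As a sanity check on the comment above, this standalone snippet (not from the commit) verifies that the multiplicative log-normal factor has mean 1 and standard deviation 0.01 and, being strictly positive, never flips a parameter's sign.

import numpy as np

# The factor exp(N(mu, sigma^2)) with mu = log(1/sqrt(1.0001)) and
# sigma^2 = log(1.0001) satisfies 2*mu + sigma^2 = 0, so its mean is
# exp(mu + sigma^2/2) = 1 and its variance is exp(sigma^2) - 1 = 1e-4,
# i.e. a standard deviation of 0.01
rng = np.random.default_rng(0)
factor = rng.lognormal(np.log(1. / np.sqrt(1.0001)), np.sqrt(np.log(1.0001)), size=1_000_000)

print(factor.mean())       # ~1.0
print(factor.std())        # ~0.01
print((factor > 0).all())  # True: multiplying preserves each parameter's sign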
486 changes: 486 additions & 0 deletions notebooks/Emukit-tutorial-parallel-eval-of-obj-fun.ipynb

Large diffs are not rendered by default.

3 changes: 2 additions & 1 deletion notebooks/index.ipynb
@@ -54,7 +54,8 @@
"* [How to perform Bayesian optimization with non-linear constraints](Emukit-tutorial-constrained-optimization.ipynb)\n",
"* [Bayesian optimization integrating the hyper-parameters of the model](Emukit-tutorial-bayesian-optimization-integrating-model-hyperparameters.ipynb)\n",
"* [How to use custom model](Emukit-tutorial-custom-model.ipynb)\n",
"* [How to select neural network hyperparameters](Emukit-tutorial-select-neural-net-hyperparameters.ipynb)"
"* [How to select neural network hyperparameters](Emukit-tutorial-select-neural-net-hyperparameters.ipynb)\n",
"* [How to parallelize external objective function evaluations in Bayesian optimization](Emukit-tutorial-parallel-eval-of-obj-fun.ipynb)"
]
},
{
