Commit

Update docstrings
ziatdinovmax committed Jul 31, 2023
1 parent 102730c commit 63bcebd
Showing 1 changed file with 20 additions and 8 deletions.
gpax/acquisition/acquisition.py (28 changes: 20 additions & 8 deletions)
@@ -47,9 +47,9 @@ def EI(rng_key: jnp.ndarray, model: Type[ExactGP],
         - 'delta':
             The infinite penalty is applied to the recently visited points.
         - 'inverse_distance':
-        Modifies the acquisition function by penalizing points near the recent points.
+            Modifies the acquisition function by penalizing points near the recent points.
         For the 'inverse_distance', the acqusition function is penalized as:
@@ -123,10 +123,16 @@ def UCB(rng_key: jnp.ndarray, model: Type[ExactGP],
         to follow the same distribution as the training data. Hence, since we introduce a model noise
         for the training data, we also want to include that noise in our prediction.
     penalty:
-        Penalty applied to the acqusition function to discourage re-evaluation
+        Penalty applied to the acquisition function to discourage re-evaluation
         at or near points that were recently evaluated. Options are:
-        - 'delta': the infinite penalty is applied to the recently visited points
-        - 'inverse_distance': Modifies the acquisition function by penalizing points near the recent points as
+        - 'delta':
+            The infinite penalty is applied to the recently visited points.
+        - 'inverse_distance':
+            Modifies the acquisition function by penalizing points near the recent points.
+        For the 'inverse_distance', the acqusition function is penalized as:
         .. math::
             \alpha - \lambda \cdot \pi(X, r)
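To make the 'inverse_distance' option concrete, here is a minimal sketch of the penalized form \alpha - \lambda \cdot \pi(X, r) from the docstring above, assuming \pi(X, r) is an inverse-distance measure such as the summed reciprocal distances from each candidate point in X to the recently visited points r. The exact definition of \pi used in gpax/acquisition/acquisition.py is not shown in this diff, so treat the helper below as illustrative only.

import jax.numpy as jnp

def inverse_distance_penalty(X: jnp.ndarray, recent_points: jnp.ndarray,
                             eps: float = 1e-8) -> jnp.ndarray:
    # Assumed form of pi(X, r): summed reciprocal distances from each candidate
    # point in X (shape (N, D)) to the recently visited points r (shape (M, D)).
    dists = jnp.linalg.norm(X[:, None, :] - recent_points[None, :, :], axis=-1)
    return jnp.sum(1.0 / (dists + eps), axis=-1)

def penalize_acquisition(acq: jnp.ndarray, X: jnp.ndarray, recent_points: jnp.ndarray,
                         penalty_factor: float = 1.0) -> jnp.ndarray:
    # alpha - lambda * pi(X, r), with penalty_factor playing the role of lambda.
    return acq - penalty_factor * inverse_distance_penalty(X, recent_points)

With this form, candidates close to any recently visited point receive a large \pi value and are pushed down, while distant candidates are left nearly unchanged; the 'delta' option instead applies an infinite penalty only at the recently visited points themselves.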
@@ -191,10 +197,16 @@ def UE(rng_key: jnp.ndarray,
         to follow the same distribution as the training data. Hence, since we introduce a model noise
         for the training data, we also want to include that noise in our prediction.
     penalty:
-        Penalty applied to the acqusition function to discourage re-evaluation
+        Penalty applied to the acquisition function to discourage re-evaluation
         at or near points that were recently evaluated. Options are:
-        - 'delta': the infinite penalty is applied to the recently visited points
-        - 'inverse_distance': Modifies the acquisition function by penalizing points near the recent points as
+        - 'delta':
+            The infinite penalty is applied to the recently visited points.
+        - 'inverse_distance':
+            Modifies the acquisition function by penalizing points near the recent points.
+        For the 'inverse_distance', the acqusition function is penalized as:
         .. math::
             \alpha - \lambda \cdot \pi(X, r)
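For context, a possible end-to-end call could look like the sketch below. The function names (EI, UCB, UE), the ExactGP model type, and the penalty argument with its 'delta' / 'inverse_distance' options come from the diffed docstrings; the toy data, the keyword names recent_points and penalty_factor, and the exact fitting API are assumptions based on typical gpax usage and are not confirmed by this commit.

import gpax
import jax.numpy as jnp

# Toy 1D data (placeholder values, for illustration only).
X_train = jnp.linspace(0.0, 1.0, 8)[:, None]
y_train = jnp.sin(8.0 * X_train).squeeze()
X_new = jnp.linspace(0.0, 1.0, 101)[:, None]
X_recent = X_train[-2:]                      # recently visited points r

rng_key_fit, rng_key_predict = gpax.utils.get_keys()

# Fit a GP surrogate; ExactGP appears in the type hints of the diffed signatures.
gp_model = gpax.ExactGP(1, kernel='RBF')
gp_model.fit(rng_key_fit, X_train, y_train)

# Penalized acquisition over the candidate grid, discouraging re-evaluation
# near the recently visited points.
acq = gpax.acquisition.UCB(
    rng_key_predict, gp_model, X_new,
    penalty='inverse_distance',   # or 'delta'
    recent_points=X_recent,       # assumed keyword for r
    penalty_factor=1.0,           # assumed keyword for lambda
)
next_point = X_new[acq.argmax()]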
