Add SparseFiniteGP type and associated functionality #136
Conversation
- SparseFiniteGP type, constructors, basic methods
- Convenience methods for inference using sparse GP
- Basic tests
Codecov Report
@@ Coverage Diff @@
## master #136 +/- ##
==========================================
+ Coverage 82.37% 82.43% +0.06%
==========================================
Files 26 27 +1
Lines 749 763 +14
==========================================
+ Hits 617 629 +12
- Misses 132 134 +2
This is looking really good, thanks very much for the contribution -- I'm happy with the overall design, except perhaps one of the constructors that I've left a specific note on.
All other comments are essentially style-related.
Co-authored-by: willtebbutt <[email protected]>
This is looking great -- I think we're nearly there now.
Just some docs needed now, along with a couple of other very minor things.
LGTM! Just needs a patch bump to 0.6.15 since this doesn't break any existing functionality. I'll make a release once that's done and tests have passed.
Thanks very much for this contribution!
To clarify, do you mean I should update the version number in Project.toml?
Yes please :)
This addresses #134, adding a `SparseFiniteGP` type that wraps two GPs for the observation and inducing points and can be plugged into Turing. The PR contains constructors and basic methods for the new type, mostly falling through to the underlying `FiniteGP` for the observation points. It provides a `logpdf` function that wraps the preexisting `elbo` method for sparse GPs, and also provides a convenience constructor for `PseudoObs` so that using the sparse GP in a Turing model "just works".
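For orientation, a minimal sketch of constructing and querying the new type. This is illustrative rather than the PR's own example, and it assumes Stheno's 0.6-style `GP(kernel, GPC())` constructor; the inputs and noise values are made up:

```julia
using Stheno

# Latent GP under the assumed 0.6-style API: kernel plus GPC bookkeeping object.
f = GP(EQ(), GPC())

x = collect(range(-5.0, 5.0; length=100))   # observation inputs
xu = collect(range(-5.0, 5.0; length=10))   # inducing-point inputs

# Wrap a FiniteGP at the observation points and one at the inducing points.
fx = SparseFiniteGP(f(x, 0.1), f(xu, 1e-6))

y = rand(fx)     # basic methods fall through to the observation FiniteGP
logpdf(fx, y)    # wraps the existing elbo, giving the approximate log marginal
```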
There are a couple of remaining issues/questions:

- It would be nice to be able to construct a `SparseFiniteGP` by doing `fx = f(x, xu)` (using the variables from the example above). However, this conflicts with this constructor for `FiniteGP`, where the second `AbstractArray` in the function signature is interpreted as the diagonal of the observation covariance matrix. I like that compact definition, but using it would likely mean a breaking change (i.e. requiring users to write `f(x, Diagonal(σ²))` instead of `f(x, σ²)`).
- I made `cov(fx::SparseFiniteGP)` throw an error instead of calculating the covariance of the observation GP (see the sketch after this list). This follows behavior in base for e.g. `inv(A::SparseArrays.AbstractSparseMatrixCSC)`. The use case for a sparse approximation is by default one where you don't want to or can't use the full covariance matrix, so I think this is the safe and fast option for this behavior. If the user really wants the dense covariance matrix, she can do `cov(f.fobs)`.

Comments on code, tests, organization, or style welcome...
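To make that behavior concrete, a small hedged illustration; it assumes the wrapped observation `FiniteGP` is stored in a field called `fobs`, as `cov(f.fobs)` above suggests:

```julia
cov(fx)        # errors by design: the point of the sparse approximation is to
               # avoid forming the dense observation covariance matrix
cov(fx.fobs)   # the dense covariance is still reachable via the wrapped FiniteGP
```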