Commit 8dd5b9f

Merge branch 'main' into reduce_storage

sethaxen committed Dec 10, 2024
2 parents a83d165 + d0ec5f0

Showing 22 changed files with 220 additions and 81 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/CI.yml
@@ -55,7 +55,7 @@ jobs:
env:
JULIA_NUM_THREADS: ${{ matrix.num_threads }}
- uses: julia-actions/julia-processcoverage@v1
- uses: codecov/codecov-action@v4
- uses: codecov/codecov-action@v5
with:
files: lcov.info
token: ${{ secrets.CODECOV_TOKEN }}
2 changes: 1 addition & 1 deletion .github/workflows/IntegrationTests.yml
@@ -41,6 +41,6 @@ jobs:
Pkg.instantiate()
include(joinpath(test_path, "runtests.jl"))'
- uses: julia-actions/julia-processcoverage@v1
- uses: codecov/codecov-action@v4
- uses: codecov/codecov-action@v5
with:
files: lcov.info
4 changes: 2 additions & 2 deletions Project.toml
@@ -1,7 +1,7 @@
name = "Pathfinder"
uuid = "b1d3bc72-d0e7-4279-b92f-7fa5d6d2d454"
authors = ["Seth Axen <[email protected]> and contributors"]
version = "0.9.6"
version = "0.9.7"

[deps]
ADTypes = "47edcb42-4c32-4615-8424-f2b9edc5f35b"
@@ -40,7 +40,7 @@ ADTypes = "0.2, 1"
Accessors = "0.1.12"
Distributions = "0.25.87"
DynamicHMC = "3.4.0"
DynamicPPL = "0.25.2, 0.27, 0.28, 0.29, 0.30"
DynamicPPL = "0.25.2, 0.27, 0.28, 0.29, 0.30, 0.31"
Folds = "0.2.9"
ForwardDiff = "0.10.19"
IrrationalConstants = "0.1.1, 0.2"
2 changes: 1 addition & 1 deletion README.md
@@ -6,7 +6,7 @@
[![Coverage](https://codecov.io/gh/mlcolab/Pathfinder.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/mlcolab/Pathfinder.jl)
[![Code Style: Blue](https://img.shields.io/badge/code%20style-blue-4495d1.svg)](https://github.com/invenia/BlueStyle)
[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor's%20Guide-blueviolet)](https://github.com/SciML/ColPrac)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5914976.svg)](https://doi.org/10.5281/zenodo.5914976)
[![DOI](https://zenodo.org/badge/417810442.svg)](https://doi.org/10.5281/zenodo.5914975)


An implementation of Pathfinder, [^Zhang2021] a variational method for initializing Markov chain Monte Carlo (MCMC) methods.
4 changes: 4 additions & 0 deletions docs/Project.toml
@@ -2,6 +2,8 @@
ADTypes = "47edcb42-4c32-4615-8424-f2b9edc5f35b"
AdvancedHMC = "0bf59076-c3b1-5ca4-86bd-e02cd72cde3d"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
DocumenterCitations = "daee34ce-89f3-4625-b898-19384cb65244"
DocumenterInterLinks = "d12716ef-a0f6-4df4-a9f1-a5a34e75c656"
DynamicHMC = "bbc10e6e-7c05-544b-b16e-64fede858acb"
ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
Expand All @@ -20,6 +22,8 @@ Turing = "fce5fe82-541a-59a6-adf8-730c64b5f9a0"
ADTypes = "0.2, 1"
AdvancedHMC = "0.6"
Documenter = "1"
DocumenterCitations = "1.2"
DocumenterInterLinks = "1"
DynamicHMC = "3.4.0"
ForwardDiff = "0.10.19"
LogDensityProblems = "2.1.0"
11 changes: 11 additions & 0 deletions docs/inventories/Distributions.toml
@@ -0,0 +1,11 @@
# DocInventory version 1
project = "Distributions.jl"
version = "0.25.113"

# Filtered to just the types we link to
[[jl.type]]
name = "Distributions.MixtureModel"
uri = "mixture/#$"
[[jl.type]]
name = "Distributions.MvNormal"
uri = "multivariate/#$"
13 changes: 13 additions & 0 deletions docs/inventories/DynamicHMC.toml
@@ -0,0 +1,13 @@
# DocInventory version 1
project = "DynamicHMC.jl"
version = "3.4.7"

# Filtered to just the types we link to
[[std.doc]]
dispname = "A worked example"
name = "worked_example"
uri = "worked_example/"

[[jl.type]]
name = "DynamicHMC.GaussianKineticEnergy"
uri = "interface/#$"
11 changes: 11 additions & 0 deletions docs/inventories/Transducers.toml
@@ -0,0 +1,11 @@
# DocInventory version 1
project = "Transducers.jl"
version = "0.4.84"

# Filtered to just the types we link to
[[jl.type]]
name = "Transducers.PreferParallel"
uri = "reference/manual/#$"
[[jl.type]]
name = "Transducers.SequentialEx"
uri = "reference/manual/#$"
37 changes: 36 additions & 1 deletion docs/make.jl
@@ -1,8 +1,41 @@
using Pathfinder
using Documenter
using DocumenterCitations
using DocumenterInterLinks

DocMeta.setdocmeta!(Pathfinder, :DocTestSetup, :(using Pathfinder); recursive=true)

bib = CitationBibliography(joinpath(@__DIR__, "src", "references.bib"); style=:numeric)

links = InterLinks(
"AdvancedHMC" => "https://turinglang.org/AdvancedHMC.jl/stable/",
"ADTypes" => "https://sciml.github.io/ADTypes.jl/stable/",
"Distributions" => (
"https://juliastats.org/Distributions.jl/stable/",
"https://juliastats.org/Distributions.jl/dev/objects.inv",
joinpath(@__DIR__, "inventories", "Distributions.toml"),
),
"DynamicHMC" => (
"https://www.tamaspapp.eu/DynamicHMC.jl/stable/",
"https://www.tamaspapp.eu/DynamicHMC.jl/dev/objects.inv",
joinpath(@__DIR__, "inventories", "DynamicHMC.toml"),
),
"DynamicPPL" => "https://turinglang.org/DynamicPPL.jl/stable/",
"LogDensityProblems" => "https://www.tamaspapp.eu/LogDensityProblems.jl/stable/",
"MCMCChains" => (
"https://turinglang.org/MCMCChains.jl/stable/",
"https://turinglang.org/MCMCChains.jl/dev/objects.inv",
),
"Optim" => "https://julianlsolvers.github.io/Optim.jl/stable/",
"Optimization" => "https://docs.sciml.ai/Optimization/stable/",
"PSIS" => "https://julia.arviz.org/PSIS/stable/",
"Transducers" => (
"https://juliafolds2.github.io/Transducers.jl/stable/", # not built for a while
"https://juliafolds2.github.io/Transducers.jl/dev/objects.inv",
joinpath(@__DIR__, "inventories", "Transducers.toml"),
),
)

makedocs(;
modules=[Pathfinder],
authors="Seth Axen <[email protected]> and contributors",
Expand All @@ -11,7 +44,7 @@ makedocs(;
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://mlcolab.github.io/Pathfinder.jl",
assets=String[],
assets=String["assets/citations.css"],
),
pages=[
"Home" => "index.md",
Expand All @@ -21,7 +54,9 @@ makedocs(;
"Initializing HMC" => "examples/initializing-hmc.md",
"Turing usage" => "examples/turing.md",
],
"References" => "references.md",
],
plugins=[bib, links],
)

if get(ENV, "DEPLOY_DOCS", "true") == "true"
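With the `CitationBibliography` and `InterLinks` plugins configured as above, docs pages can use the `@extref` and `@citep` link forms that this commit adopts throughout. A minimal hypothetical snippet of a docs source page (the specific link targets here are illustrative, not taken from this commit):

```markdown
<!-- Hypothetical Documenter page showing the cross-reference syntax. -->

Pathfinder returns a [`Distributions.MixtureModel`](@extref), resolved through
the Distributions.jl inventory registered in `make.jl`.

See [DynamicHMC's worked example](@extref DynamicHMC worked_example) for a full
sampler setup.

This package implements Pathfinder [ZhangPathfinder2021](@citep); the entry is
rendered from `docs/src/references.bib` on the References page.
```

`@extref` targets are resolved against the external projects' `objects.inv` inventories (or the hand-written TOML inventories added in `docs/inventories/`), while `@citep` keys must match entries in the BibTeX file passed to `CitationBibliography`.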
29 changes: 29 additions & 0 deletions docs/src/assets/citations.css
@@ -0,0 +1,29 @@
/* Adapted from DocumenterCitations.jl's docs */
.citation dl {
display: grid;
grid-template-columns: max-content auto;
}

.citation dt {
grid-column-start: 1;
}

.citation dd {
grid-column-start: 2;
margin-bottom: 0.75em;
}

.citation ul {
padding: 0 0 2.25em 0;
margin: 0;
list-style: none !important;
}

.citation ul li {
text-indent: -2.25em;
margin: 0.33em 0.5em 0.5em 2.25em;
}

.citation ol li {
padding-left: 0.75em;
}
10 changes: 4 additions & 6 deletions docs/src/examples/initializing-hmc.md
@@ -7,7 +7,7 @@ When using MCMC to draw samples from some target distribution, there is often a
2. adapt any tunable parameters of the MCMC sampler (optional)

While (1) often happens fairly quickly, (2) usually requires a lengthy exploration of the typical set to iteratively adapt parameters suitable for further exploration.
An example is the widely used windowed adaptation scheme of Hamiltonian Monte Carlo (HMC) in Stan, where a step size and positive definite metric (aka mass matrix) are adapted.[^1]
An example is the widely used windowed adaptation scheme of Hamiltonian Monte Carlo (HMC) in Stan [StanHMCParameters](@citep), where a step size and positive definite metric (aka mass matrix) are adapted.
For posteriors with complex geometry, the adaptation phase can require many evaluations of the gradient of the log density function of the target distribution.

Pathfinder can be used to initialize MCMC, and in particular HMC, in 3 ways:
@@ -82,7 +82,7 @@ nothing # hide

## DynamicHMC.jl

To use DynamicHMC, we first need to transform our model to an unconstrained space using [TransformVariables.jl](https://tamaspapp.eu/TransformVariables.jl/stable/) and wrap it in a type that implements the [LogDensityProblems.jl](https://github.com/tpapp/LogDensityProblems.jl) interface:
To use DynamicHMC, we first need to transform our model to an unconstrained space using [TransformVariables.jl](https://tamaspapp.eu/TransformVariables.jl/stable/) and wrap it in a type that implements the [LogDensityProblems interface](@extref LogDensityProblems log-density-api) (see [DynamicHMC's worked example](@extref DynamicHMC worked_example)):

```@example 1
using DynamicHMC, ForwardDiff, LogDensityProblems, LogDensityProblemsAD, TransformVariables
@@ -123,7 +123,7 @@ result_dhmc1 = mcmc_with_warmup(

### Initializing metric adaptation from Pathfinder's estimate

To start with Pathfinder's inverse metric estimate, we just need to initialize a `GaussianKineticEnergy` object with it as input:
To start with Pathfinder's inverse metric estimate, we just need to initialize a [`DynamicHMC.GaussianKineticEnergy`](@extref) object with it as input:

```@example 1
result_dhmc2 = mcmc_with_warmup(
@@ -212,7 +212,7 @@ samples_ahmc2, stats_ahmc2 = sample(

### Use Pathfinder's metric estimate for sampling

To use Pathfinder's metric with no metric adaptation, we need to use Pathfinder's own `RankUpdateEuclideanMetric` type, which just wraps our inverse metric estimate for use with AdvancedHMC:
To use Pathfinder's metric with no metric adaptation, we need to use Pathfinder's own [`Pathfinder.RankUpdateEuclideanMetric`](@ref) type, which just wraps our inverse metric estimate for use with AdvancedHMC:

```@example 1
nadapts = 75
Expand All @@ -233,5 +233,3 @@ samples_ahmc3, stats_ahmc3 = sample(
progress=false,
)
```

[^1]: https://mc-stan.org/docs/reference-manual/hmc-algorithm-parameters.html
8 changes: 4 additions & 4 deletions docs/src/examples/quickstart.md
@@ -121,7 +121,7 @@ Now we will run Pathfinder on the following banana-shaped distribution with density
\pi(x_1, x_2) = e^{-x_1^2 / 2} e^{-5 (x_2 - x_1^2)^2 / 2}.
```

Pathfinder can also take any object that implements the [LogDensityProblems](https://www.tamaspapp.eu/LogDensityProblems.jl) interface.
Pathfinder can also take any object that implements the [LogDensityProblems interface](@extref LogDensityProblems log-density-api).
This can also be used to manually define the gradient of the log-density function.

First we define the log density problem:
@@ -185,9 +185,9 @@ result = multipathfinder(prob_banana, ndraws; nruns=20, init_scale=10)
`result` is a [`MultiPathfinderResult`](@ref).
See its docstring for a description of its fields.

`result.fit_distribution` is a uniformly-weighted `Distributions.MixtureModel`.
`result.fit_distribution` is a uniformly-weighted [`Distributions.MixtureModel`](@extref).
Each component is the result of a single Pathfinder run.
It's possible that some runs fit the target distribution much better than others, so instead of just drawing samples from `result.fit_distribution`, `multipathfinder` draws many samples from each component and then uses Pareto-smoothed importance resampling (using [PSIS.jl](https://psis.julia.arviz.org/stable/)) from these draws to better target `prob_banana`.
It's possible that some runs fit the target distribution much better than others, so instead of just drawing samples from `result.fit_distribution`, `multipathfinder` draws many samples from each component and then uses Pareto-smoothed importance resampling (using [PSIS.jl](@extref PSIS PSIS)) from these draws to better target `prob_banana`.

The Pareto shape diagnostic informs us on the quality of these draws.
Here the Pareto shape ``k`` diagnostic is bad (``k > 0.7``), which tells us that these draws are unsuitable for computing posterior estimates, so we should definitely run MCMC to get better draws.
@@ -238,7 +238,7 @@ nothing # hide

First, let's fit this posterior with single-path Pathfinder.
For high-dimensional problems, it's better to use reverse-mode automatic differentiation.
Here, we'll use `ADTypes.AutoReverseDiff()` to specify that [ReverseDiff.jl](https://github.com/JuliaDiff/ReverseDiff.jl) should be used.
Here, we'll use [`ADTypes.AutoReverseDiff`](@extref) to specify that [ReverseDiff.jl](https://github.com/JuliaDiff/ReverseDiff.jl) should be used.


```@example 1
4 changes: 2 additions & 2 deletions docs/src/examples/turing.md
@@ -24,7 +24,7 @@ model = regress(collect(x), y)
n_chains = 8
```

For convenience, [`pathfinder`](@ref) and [`multipathfinder`](@ref) can take Turing models as inputs and produce `MCMCChains.Chains` objects as outputs.
For convenience, [`pathfinder`](@ref) and [`multipathfinder`](@ref) can take Turing models as inputs and produce [`MCMCChains.Chains`](@extref) objects as outputs.

```@example 1
result_single = pathfinder(model; ndraws=1_000)
Expand All @@ -36,7 +36,7 @@ result_multi = multipathfinder(model, 1_000; nruns=n_chains)

Here, the Pareto shape diagnostic indicates that it is likely safe to use these draws to compute posterior estimates.

When passed a `Model`, Pathfinder also gives access to the posterior draws in a familiar `MCMCChains.Chains` object.
When passed a [`DynamicPPL.Model`](@extref), Pathfinder also gives access to the posterior draws in a familiar `Chains` object.

```@example 1
result_multi.draws_transformed
7 changes: 1 addition & 6 deletions docs/src/index.md
@@ -4,7 +4,7 @@ CurrentModule = Pathfinder

# Pathfinder.jl: Parallel quasi-Newton variational inference

This package implements Pathfinder, [^Zhang2021] a variational method for initializing Markov chain Monte Carlo (MCMC) methods.
This package implements Pathfinder [ZhangPathfinder2021](@citep), a variational method for initializing Markov chain Monte Carlo (MCMC) methods.

## Single-path Pathfinder

@@ -45,8 +45,3 @@ Pathfinder uses several packages for extended functionality:
- [Distributions.jl](https://juliastats.org/Distributions.jl/stable/)/[PDMats.jl](https://github.com/JuliaStats/PDMats.jl): fits can be used anywhere a `Distribution` can be used
- [LogDensityProblems.jl](https://www.tamaspapp.eu/LogDensityProblems.jl/stable/): defining the log-density function, gradient, and Hessian
- [ProgressLogging.jl](https://julialogging.github.io/ProgressLogging.jl/stable/): In Pluto, Juno, and VSCode, nested progress bars are shown. In the REPL, use TerminalLoggers.jl to get progress bars.

[^Zhang2021]: Lu Zhang, Bob Carpenter, Andrew Gelman, Aki Vehtari (2021).
Pathfinder: Parallel quasi-Newton variational inference.
arXiv: [2108.03782](https://arxiv.org/abs/2108.03782) [stat.ML].
[Code](https://github.com/LuZhangstat/Pathfinder)
34 changes: 34 additions & 0 deletions docs/src/references.bib
@@ -0,0 +1,34 @@
@article{Byrd1994,
title = {Representations of Quasi-{{Newton}} Matrices and Their Use in Limited Memory Methods},
author = {Byrd, Richard H. and Nocedal, Jorge and Schnabel, Robert B.},
year = {1994},
month = jan,
journal = {Mathematical Programming},
volume = {63},
number = {1-3},
pages = {129--156},
issn = {0025-5610, 1436-4646},
doi = {10.1007/BF01582063}
}

@misc{StanHMCParameters,
title = {Stan Reference Manual: {HMC} algorithm parameters},
urldate = {2024-12-06},
url = {https://mc-stan.org/docs/reference-manual/mcmc.html#hmc-algorithm-parameters}
}

@article{ZhangPathfinder2021,
title = {Pathfinder: {{Parallel}} Quasi-{{Newton}} Variational Inference},
shorttitle = {Pathfinder},
author = {Zhang, Lu and Carpenter, Bob and Gelman, Andrew and Vehtari, Aki},
year = {2022},
journal = {Journal of Machine Learning Research},
volume = {23},
number = {306},
pages = {1--49},
issn = {1533-7928},
urldate = {2024-12-06},
url = {http://jmlr.org/papers/v23/21-0889.html},
eprint = {2108.03782},
eprinttype = {arXiv}
}
4 changes: 4 additions & 0 deletions docs/src/references.md
@@ -0,0 +1,4 @@
# References

```@bibliography
```
5 changes: 3 additions & 2 deletions ext/PathfinderTuringExt.jl
@@ -34,8 +34,9 @@ end
"""
draws_to_chains(model::DynamicPPL.Model, draws) -> MCMCChains.Chains
Convert a `(nparams, ndraws)` matrix of unconstrained `draws` to an `MCMCChains.Chains`
object with corresponding constrained draws and names according to `model`.
Convert a `(nparams, ndraws)` matrix of unconstrained `draws` to a
[`MCMCChains.Chains`](@extref) object with corresponding constrained draws and names
according to `model`.
"""
function draws_to_chains(model::DynamicPPL.Model, draws::AbstractMatrix)
varinfo = DynamicPPL.link(DynamicPPL.VarInfo(model), model)
3 changes: 2 additions & 1 deletion src/integration/advancedhmc.jl
@@ -4,7 +4,8 @@ using .Random
"""
RankUpdateEuclideanMetric{T,M} <: AdvancedHMC.AbstractMetric
A Gaussian Euclidean metric whose inverse is constructed by rank-updates.
A Gaussian Euclidean [metric](@extref AdvancedHMC Hamiltonian-mass-matrix-(metric)) whose
inverse is constructed by rank-updates.
# Constructors
9 changes: 4 additions & 5 deletions src/lbfgs.jl
@@ -147,7 +147,7 @@ Compute approximate inverse Hessian initialized from history stored in `cache` and
``B₀`` and ``D₀``, which are overwritten here and are used in the construction of the
returned approximate inverse Hessian ``H^{-1}``.
From Theorem 2.2 of [^Byrd1994], the expression for the inverse Hessian ``H^{-1}`` is
From [Byrd1994; Theorem 2.2](@citet), the expression for the inverse Hessian ``H^{-1}`` is
```math
\\begin{align}
Expand All @@ -162,10 +162,9 @@ H^{-1} &= H_0^{-1} + B D B^\\mathrm{T}
\\end{align}
```
[^Byrd1994]: Byrd, R.H., Nocedal, J. & Schnabel, R.B.
Representations of quasi-Newton matrices and their use in limited memory methods.
Mathematical Programming 63, 129–156 (1994).
doi: [10.1007/BF01582063](https://doi.org/10.1007/BF01582063)
# References
- [Byrd1994](@cite): Byrd et al. Math. Program. 63, 1994.
"""
function lbfgs_inverse_hessian!(cache::LBFGSInverseHessianCache, history::LBFGSHistory)
(; B, D, diag_invH0) = cache
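For context, the compact representation the docstring now cites as [Byrd1994; Theorem 2.2](@citet) can be sketched as follows. This block is reconstructed from the cited theorem (with the initial inverse Hessian ``H_0^{-1}`` in place of the theorem's ``H_0``), not copied from Pathfinder's source; the symbols ``S``, ``Y``, ``R``, and ``E`` follow the paper's conventions:

```latex
% Sketch of the compact L-BFGS inverse-Hessian representation
% (Byrd, Nocedal & Schnabel 1994, Theorem 2.2).
% S = [s_1 ... s_m] and Y = [y_1 ... y_m] collect the stored update pairs,
% R_{ij} = s_i^T y_j for i <= j (upper triangular, zero below the diagonal),
% and E = diag(s_1^T y_1, ..., s_m^T y_m).
\begin{align}
H^{-1} &= H_0^{-1} + B D B^{\mathrm{T}}, \\
B &= \begin{bmatrix} S & H_0^{-1} Y \end{bmatrix}, \\
D &= \begin{bmatrix}
R^{-\mathrm{T}} \left( E + Y^{\mathrm{T}} H_0^{-1} Y \right) R^{-1} & -R^{-\mathrm{T}} \\
-R^{-1} & 0
\end{bmatrix}.
\end{align}
```

The first line matches the ``H^{-1} = H_0^{-1} + B D B^{\mathrm{T}}`` fragment visible in the hunk above; the block structure of ``D`` is what `lbfgs_inverse_hessian!` assembles into its cache.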