
Adds Monte Carlo Samplers #340

Merged: 44 commits, Sep 2, 2024
Changes from 1 commit

Commits (44)
89ae3e6
feat: initial Monte Carlo classes
BradyPlanden May 24, 2024
3c48f88
feat: updt __init__.py, add LogPosterior
BradyPlanden May 24, 2024
a62a126
tests: add unit tests for MCMC samplers
BradyPlanden Jun 3, 2024
90bb173
Merge branch 'develop' into monte-carlo-methods
BradyPlanden Jun 3, 2024
fbe56a4
fix parallel for windows
BradyPlanden Jun 3, 2024
8f74b6d
tests: additional unit tests, refactors priors class
BradyPlanden Jun 3, 2024
2d83315
tests: increase coverage, adds monte carlo integration test
BradyPlanden Jun 5, 2024
5ed3d23
tests: increase coverage, bugfix multi_log_pdf logic
BradyPlanden Jun 5, 2024
ca961a2
tests: increase coverage, update priors on sampling integration t…
BradyPlanden Jun 5, 2024
da21506
tests: increment coverage, refactor prior np.inf catch
BradyPlanden Jun 5, 2024
ce1cb54
refactor: removes redundant code
BradyPlanden Jun 5, 2024
c86531b
Merge branch 'develop', updts for Parameters class
BradyPlanden Jun 7, 2024
3e4c01e
refactor: adds improvements from parameters class
BradyPlanden Jun 7, 2024
b5ec8fe
feat: Adds burn-in functionality for sampling class
BradyPlanden Jun 15, 2024
f71bf6a
Merge branch 'develop' into monte-carlo-methods
BradyPlanden Jun 17, 2024
1f7c6cb
Merge branch 'develop' into monte-carlo-methods
BradyPlanden Jun 18, 2024
c8db9f5
fix: correct sigma0 to cov0
BradyPlanden Jun 18, 2024
eaaebb2
Merge branch 'develop' into monte-carlo-methods
BradyPlanden Jul 3, 2024
fb97b5d
refactor: move general methods into parent class, replace burn_in wit…
BradyPlanden Jul 3, 2024
942dc5e
Merge branch 'develop' into monte-carlo-methods
BradyPlanden Jul 4, 2024
990c590
Apply suggestions from code review
BradyPlanden Jul 4, 2024
5f89231
Merge branch 'develop' into monte-carlo-methods
BradyPlanden Jul 16, 2024
74b1f30
Merge branch 'develop' into monte-carlo-methods
BradyPlanden Jul 16, 2024
13aa83f
refactor: log_pdf to base class, update docstrings.
BradyPlanden Jul 21, 2024
0b32889
Adds catches and initialisation for x0, update tests
BradyPlanden Jul 21, 2024
0117066
Merge branch 'refs/heads/develop' into monte-carlo-methods
BradyPlanden Aug 6, 2024
251f86f
feat: updates for transformation class, cleanup
BradyPlanden Aug 7, 2024
5bb4d94
fix: uniformly apply bound transformations, update LogPosterior
BradyPlanden Aug 7, 2024
a5244a4
Merge branch 'refs/heads/develop' into monte-carlo-methods
BradyPlanden Aug 7, 2024
255aa5d
Merge branch 'refs/heads/develop' into monte-carlo-methods
BradyPlanden Aug 7, 2024
c9946da
Merge branch 'refs/heads/develop' into monte-carlo-methods
BradyPlanden Aug 13, 2024
cd07072
refactor: ComposedLogPrior -> JointLogPrior, prior.evaluateS1 -> logp…
BradyPlanden Aug 13, 2024
08dc407
fix: update tests for low convergence sampler
BradyPlanden Aug 13, 2024
c225065
refactor: update priors, refactor JointLogPrior
BradyPlanden Aug 14, 2024
4df0885
tests: update unit tests and increase coverage.
BradyPlanden Aug 14, 2024
e50812a
refactor: base_sampler init, update docstrings, update tests, remove …
BradyPlanden Aug 14, 2024
7a000cf
tests: increase coverage, remove redundant ValueError, sampler.chains…
BradyPlanden Aug 14, 2024
711dcc8
tests: restore parallel optimisation with thread limit to 1
BradyPlanden Aug 14, 2024
df1cc73
Merge branch 'refs/heads/develop' into monte-carlo-methods
BradyPlanden Aug 22, 2024
bca3bbb
Merge branch 'refs/heads/develop' into monte-carlo-methods
BradyPlanden Aug 22, 2024
248b161
Merge branch 'refs/heads/develop' into monte-carlo-methods
BradyPlanden Aug 29, 2024
503af19
Refactor and bugfixes. Adds gradient-based integration sampling tests…
BradyPlanden Aug 29, 2024
85e1ce1
Remainder review suggestions, update assert tolerances, small array d…
BradyPlanden Aug 30, 2024
8a928af
tests: increment iterations from scheduled test run
BradyPlanden Sep 2, 2024
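The commit history above adds Monte Carlo sampler classes, a LogPosterior, multi-chain initialisation, and burn-in (warm-up) handling. As a rough illustration of what such a sampler does, here is a minimal random-walk Metropolis sketch in plain numpy; the function names, step size, and warm-up handling are illustrative stand-ins, not the PyBOP API introduced in this PR.

```python
import numpy as np

def log_posterior(x):
    # Hypothetical target: standard normal log-density (up to a constant)
    return -0.5 * float(x @ x)

def metropolis(log_pdf, x0, n_iters=2000, warm_up=500, step=0.5, rng=None):
    """Random-walk Metropolis; the first `warm_up` samples are discarded,
    mirroring the burn-in option the PR adds (names are illustrative)."""
    rng = rng or np.random.default_rng(42)
    x = np.asarray(x0, dtype=float)
    logp = log_pdf(x)
    chain = []
    for _ in range(n_iters):
        proposal = x + step * rng.standard_normal(x.shape)
        logp_new = log_pdf(proposal)
        # Accept with probability min(1, p(proposal) / p(x))
        if np.log(rng.uniform()) < logp_new - logp:
            x, logp = proposal, logp_new
        chain.append(x.copy())
    return np.asarray(chain)[warm_up:]

samples = metropolis(log_posterior, x0=np.array([3.0]))
print(samples.shape)  # (1500, 1)
```

After discarding warm-up, the retained samples should concentrate around the posterior mode even though the chain starts far away at 3.0.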
Merge branch 'refs/heads/develop' into monte-carlo-methods
BradyPlanden committed Aug 7, 2024

This commit was created on GitHub.com and signed with GitHub’s verified signature. The key has expired.
commit 255aa5df30baf1fe38571094981ce08fe19a5abf
7 changes: 3 additions & 4 deletions examples/scripts/mcmc_example.py
@@ -33,7 +33,7 @@
     ]
     # * 2
 )
-values = synth_model.predict(init_soc=init_soc, experiment=experiment)
+values = synth_model.predict(initial_state={"Initial SoC": 1.0}, experiment=experiment)


 def noise(sigma):
@@ -52,12 +52,11 @@ def noise(sigma):
 )

 model = pybop.lithium_ion.SPM(parameter_set=parameter_set)
+model.build(initial_state={"Initial SoC": 1.0})
 signal = ["Voltage [V]", "Bulk open-circuit voltage [V]"]

 # Generate problem, likelihood, and sampler
-problem = pybop.FittingProblem(
-    model, parameters, dataset, signal=signal, init_soc=init_soc
-)
+problem = pybop.FittingProblem(model, parameters, dataset, signal=signal)
 likelihood = pybop.GaussianLogLikelihoodKnownSigma(problem, sigma0=0.002)
 prior1 = pybop.Gaussian(0.7, 0.1)
 prior2 = pybop.Gaussian(0.6, 0.1)
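The example script fits against synthetic data corrupted by the `noise(sigma)` helper shown in the hunk above. A self-contained sketch of that pattern, adding zero-mean Gaussian noise to a simulated trace (the signature and the stand-in voltage trace are illustrative; the script's helper takes only `sigma`):

```python
import numpy as np

def noise(values, sigma, rng=np.random.default_rng(8)):
    # Zero-mean Gaussian measurement noise, as in the example's noise() helper
    return values + rng.normal(0.0, sigma, size=values.shape)

clean = np.linspace(4.2, 3.0, 100)   # stand-in for a simulated voltage trace
noisy = noise(clean, sigma=0.002)
print(noisy.shape)  # (100,)
```

The `sigma0=0.002` passed to `GaussianLogLikelihoodKnownSigma` matches the noise level injected into the synthetic data, so the likelihood's assumed noise model is consistent with how the data were generated.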
10 changes: 6 additions & 4 deletions tests/integration/test_monte_carlo.py
@@ -76,7 +76,7 @@ def spm_likelihood(self, model, parameters, cost_class, init_soc):
 )

 # Define the cost to optimise
-problem = pybop.FittingProblem(model, parameters, dataset, init_soc=init_soc)
+problem = pybop.FittingProblem(model, parameters, dataset)
 return cost_class(problem, sigma0=0.002)

 @pytest.mark.parametrize(
@@ -124,9 +124,9 @@ def test_sampling_spm(self, quick_sampler, spm_likelihood):
     max_iterations=400,
 )
 results = sampler.run()
-x = np.mean(results, axis=1)
-
-# Compute mean of posteriors and udate assert below
+# compute mean of posterior and assert
+x = np.mean(results, axis=1)
 for i in range(len(x)):
     np.testing.assert_allclose(x[i], self.ground_truth, atol=2.5e-2)

@@ -145,5 +145,7 @@ def get_data(self, model, x, init_soc):
     ),
 ]
 )
-sim = model.predict(init_soc=init_soc, experiment=experiment)
+sim = model.predict(
+    initial_state={"Initial SoC": init_soc}, experiment=experiment
+)
 return sim
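The integration test above averages each chain over its samples (`axis=1`) and checks every per-chain mean against the ground truth. A minimal stand-alone sketch of that check, using fake posterior draws with an assumed `(n_chains, n_samples, n_parameters)` layout and an illustrative ground truth:

```python
import numpy as np

rng = np.random.default_rng(0)
ground_truth = np.array([0.55, 0.55])  # illustrative values, not the test's

# Fake posterior draws: (n_chains, n_samples, n_parameters)
results = ground_truth + 0.01 * rng.standard_normal((3, 400, 2))

x = np.mean(results, axis=1)  # one mean per chain -> shape (3, 2)
for i in range(len(x)):
    np.testing.assert_allclose(x[i], ground_truth, atol=2.5e-2)
print(x.shape)  # (3, 2)
```

Checking each chain separately (rather than pooling) is stricter: a single stuck or unconverged chain fails the assertion even if the pooled mean happens to be close.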
4 changes: 3 additions & 1 deletion tests/unit/test_likelihoods.py
@@ -128,8 +128,10 @@ def test_gaussian_log_likelihood(self, one_signal_problem):
 grad_result, grad_likelihood = likelihood.evaluateS1(np.array([0.8, 0.2]))
 assert isinstance(result, float)
 np.testing.assert_allclose(result, grad_result, atol=1e-5)
+# Since 0.8 > ground_truth, the likelihood should be decreasing
 assert grad_likelihood[0] <= 0
-assert grad_likelihood[1] >= 0
+# Since sigma < 0.5, the likelihood should be decreasing
+assert grad_likelihood[1] <= 0

 # Test construction with sigma as a Parameter
 sigma = pybop.Parameter("sigma", prior=pybop.Uniform(0.4, 0.6))
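The sign assertions above rest on a general property of Gaussian log-likelihoods: when the parameter estimate overshoots the data, the gradient with respect to that parameter is negative. A toy stand-in with a constant-mean model (not the SPM problem the test actually uses) makes the sign argument concrete:

```python
import numpy as np

def gaussian_loglik_grad(theta, y, sigma):
    # d/dtheta [ -sum((y - theta)^2) / (2 sigma^2) ] = sum(y - theta) / sigma^2
    return np.sum(y - theta) / sigma**2

y = np.full(50, 0.5)  # data centred on an assumed ground truth of 0.5
grad_above = gaussian_loglik_grad(0.8, y, sigma=0.2)  # estimate above truth
grad_below = gaussian_loglik_grad(0.2, y, sigma=0.2)  # estimate below truth
print(grad_above <= 0, grad_below >= 0)  # True True
```

This is why the test evaluates at 0.8 with a ground truth below it and asserts `grad_likelihood[0] <= 0`.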
12 changes: 6 additions & 6 deletions tests/unit/test_posterior.py
@@ -36,8 +36,8 @@ def experiment(self):

 @pytest.fixture
 def dataset(self, model, experiment, ground_truth):
-    model.parameter_set = model.pybamm_model.default_parameter_values
-    model.parameter_set.update(
+    model._parameter_set = model.pybamm_model.default_parameter_values
+    model._parameter_set.update(
         {
             "Negative electrode active material volume fraction": ground_truth,
         }
@@ -53,7 +53,7 @@ def dataset(self, model, experiment, ground_truth):

 @pytest.fixture
 def one_signal_problem(self, model, parameters, dataset):
-    return pybop.FittingProblem(model, parameters, dataset, init_soc=1.0)
+    return pybop.FittingProblem(model, parameters, dataset)

 @pytest.fixture
 def likelihood(self, one_signal_problem):
@@ -96,12 +96,12 @@ def posterior(self, likelihood, prior):
 def test_log_posterior(self, posterior):
     # Test log posterior
     x = np.array([0.50])
-    assert np.allclose(posterior(x), -3318.34, atol=2e-2)
+    assert np.allclose(posterior(x), 51.5236, atol=2e-2)

     # Test log posterior evaluateS1
     p, dp = posterior.evaluateS1(x)
-    assert np.allclose(p, -3318.34, atol=2e-2)
-    assert np.allclose(dp, -1736.05, atol=2e-2)
+    assert np.allclose(p, 51.5236, atol=2e-2)
+    assert np.allclose(dp, 2.0, atol=2e-2)

     # Get log likelihood and log prior
     likelihood = posterior.likelihood()
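The `LogPosterior` checked above is, up to a constant, the sum of a log-likelihood and a log-prior, and its `evaluateS1` analogue returns the summed gradients. A minimal stand-in with two Gaussian pieces shows the decomposition; all numeric values here are illustrative, not the test's:

```python
import numpy as np

def log_gaussian(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def d_log_gaussian(x, mu, sigma):
    return -(x - mu) / sigma**2

def posterior_S1(x, mu_lik=0.52, sig_lik=0.05, mu_pri=0.5, sig_pri=0.1):
    # log posterior = log likelihood + log prior (up to the model evidence)
    p = log_gaussian(x, mu_lik, sig_lik) + log_gaussian(x, mu_pri, sig_pri)
    # gradient of the log posterior is the sum of the two gradients
    dp = d_log_gaussian(x, mu_lik, sig_lik) + d_log_gaussian(x, mu_pri, sig_pri)
    return p, dp

p, dp = posterior_S1(0.50)
print(float(p), float(dp))
```

In the toy example the prior is centred at the evaluation point, so its gradient contribution vanishes and `dp` comes entirely from the likelihood term.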

Unchanged files with check annotations

float
The value(s) of the first derivative at x.
"""
raise NotImplementedError

Check warning (Codecov / codecov/patch): added line #L205 in pybop/parameters/priors.py was not covered by tests.
def verify(self, x):
"""
The value(s) of the first derivative at x.
"""
if not isinstance(x, np.ndarray):
x = np.asarray(x)

Check warning (Codecov / codecov/patch): added line #L292 in pybop/parameters/priors.py was not covered by tests.
return self(x), -(x - self.loc) * self._multip
self.parameters = log_pdf.parameters
if x0 is None:
self._x0 = self.parameters.initial_value()

Check warning (Codecov / codecov/patch): added line #L32 in pybop/samplers/__init__.py was not covered by tests.
elif not isinstance(x0, np.ndarray):
try:
self._x0 = np.asarray(x0)
except ValueError as e:
raise ValueError(f"Error initialising x0: {e}")

Check warning (Codecov / codecov/patch): added lines #L36-L37 in pybop/samplers/__init__.py were not covered by tests.
def run(self) -> np.ndarray:
"""
logging.info("Using " + str(self._samplers[0].name()))
logging.info("Generating " + str(self._n_chains) + " chains.")
if self._parallel:
logging.info(

Check warning (Codecov / codecov/patch): added line #L148 in pybop/samplers/__init__.py was not covered by tests.
f"Running in parallel with {self._n_workers} worker processes."
)
else:
logging.info("Running in sequential mode.")
if self._chain_files:
logging.info("Writing chains to " + self._chain_files[0] + " etc.")

Check warning (Codecov / codecov/patch): added line #L154 in pybop/samplers/__init__.py was not covered by tests.
if self._evaluation_files:
logging.info(

Check warning (Codecov / codecov/patch): added line #L156 in pybop/samplers/__init__.py was not covered by tests.
"Writing evaluations to " + self._evaluation_files[0] + " etc."
)
# Check initial conditions
if self._x0.size != self.n_parameters:
raise ValueError("x0 must have the same number of parameters as log_pdf")

Check warning (Codecov / codecov/patch): added line #L92 in pybop/samplers/base_mcmc.py was not covered by tests.
if len(self._x0) != self._n_chains:
self._x0 = np.tile(self._x0, (self._n_chains, 1))
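The snippet above broadcasts a single starting point to all chains with `np.tile`. A quick stand-alone illustration of that reshaping (the chain count and parameter values are arbitrary):

```python
import numpy as np

# One starting point, two parameters
n_chains = 4
x0 = np.array([0.7, 0.6])

# Tile so every chain gets its own copy: n_chains rows, one per chain
x0_chains = np.tile(x0, (n_chains, 1))
print(x0_chains.shape)  # (4, 2)
```

This is why the preceding size check compares `self._x0.size` against `n_parameters` first: tiling only makes sense once the single point is known to be the right length.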
else:
self._n_samplers = 1
self._samplers = [sampler(self._n_chains, self._x0, self._cov0)]
except Exception as e:
raise ValueError(f"Error constructing samplers: {e}") from e

Check warning (Codecov / codecov/patch): added lines #L107-L108 in pybop/samplers/base_mcmc.py were not covered by tests.
# Check for sensitivities from sampler and set evaluation
self._needs_sensitivities = self._samplers[0].needs_sensitivities()
if self._chains_in_memory:
self._samples[:, self._iteration] = ys_store
else:
self._samples = ys_store

Check warning (Codecov / codecov/patch): added line #L246 in pybop/samplers/base_mcmc.py was not covered by tests.
es = []
for i, _y in enumerate(ys):
f = [pdf.evaluateS1 for pdf in f]
if self._parallel:
if not self._multi_log_pdf:
self._n_workers = min(self._n_workers, self._n_chains)
return ParallelEvaluator(f, n_workers=self._n_workers)

Check warning (Codecov / codecov/patch): added lines #L284-L286 in pybop/samplers/base_mcmc.py were not covered by tests.
else:
return (
SequentialEvaluator(f)
):
object.__setattr__(self, name, value)
else:
setattr(self.sampler, name, value)

Check warning (Codecov / codecov/patch): added line #L110 in pybop/samplers/mcmc_sampler.py was not covered by tests.
You are viewing a condensed version of this merge commit.