A generic data-fitting optimisation problem may be formulated as:

\begin{equation}
\min_{\mathbf{\theta}} \; \mathcal{L}(\mathbf{\theta}),
\label{eqn:parameterisation}
\end{equation}

where $\mathcal{L} : \mathbf{\theta} \mapsto [0,\infty)$ is a cost function that quantifies the agreement between the model output $\mathbf{y}(t)$ and a sequence of observations $(\hat{\mathbf{y}}_i)$ measured at times $t_i$. Within the `PyBOP` framework, the `FittingProblem` class packages the model output along with the measured observations, both of which are then passed to the cost classes for the computation of the specific cost function. For gradient-based optimisers, the Jacobian of the cost function with respect to the unknown parameters, $\partial \mathcal{L} / \partial \theta$, is computed for step-size and directional information.

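To make this concrete, the following minimal sketch shows how a fitting problem and cost might be assembled and evaluated. It assumes that a `model`, `parameters` and `dataset` have already been defined (a data-generation sketch follows further below), and the class names shown are assumptions that may differ between `PyBOP` releases.

```python
import pybop

# Package the model, unknown parameters and measured observations together;
# the problem handles simulation of y(t) at the observation times t_i
problem = pybop.FittingProblem(model, parameters, dataset)

# A cost class maps a candidate parameter vector to a scalar in [0, inf)
cost = pybop.RootMeanSquaredError(problem)

# Evaluate the cost at a candidate value of theta (illustrative values only)
value = cost([5e-15, 0.01])
```
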
Next, we demonstrate the fitting of synthetic data where the system parameters are known. In this example, we use `PyBaMM`'s implementation of the single particle model with an added contact resistance submodel. We assume that the model is already parameterised except for two dynamic parameters, namely the lithium diffusivity of the negative electrode active material particles (denoted "negative particle diffusivity") and the contact resistance. We generate synthetic data with a one-hour discharge from 100% to 0% state of charge (a 1C rate), followed by 30 minutes of relaxation. This dataset is then corrupted with zero-mean Gaussian noise of amplitude 2 mV, with the resulting signal shown by the blue dots in \autoref{fig:inference-time-landscape} (left). The initial states are assumed known, although this assumption is not generally necessary. The `PyBOP` repository contains several other [example notebooks](https://github.com/pybop-team/PyBOP/tree/develop/examples/notebooks) that follow a similar inference process. The underlying cost landscape to be explored by the optimiser is shown in \autoref{fig:inference-time-landscape} (right), with the initial position denoted alongside the known true system parameters for this synthetic inference task. In general, the true parameters are not known.

![The fitted synthetic dataset (left) and cost landscape (right) for an example time-series battery model parameterisation using a root-mean-squared error cost function. \label{fig:inference-time-landscape}](figures/joss/sim-landscape.png){ width=100% }

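The synthetic-data generation described above might look roughly as follows in code. This is a sketch only: the contact-resistance option flag, the parameter names (e.g. "Negative particle diffusivity [m2.s-1]") and the `predict` keyword arguments are assumptions that may differ between `PyBaMM`/`PyBOP` versions.

```python
import numpy as np
import pybop

# Single particle model with a contact-resistance submodel (assumed option name)
parameter_set = pybop.ParameterSet.pybamm("Chen2020")
parameter_set["Contact resistance [Ohm]"] = 0.01  # "true" value for the synthetic study
model = pybop.lithium_ion.SPM(
    parameter_set=parameter_set,
    options={"contact resistance": "true"},
)

# One-hour 1C discharge from 100% state of charge, then 30 minutes of rest
experiment = pybop.Experiment([
    "Discharge at 1C for 1 hour",
    "Rest for 30 minutes",
])
solution = model.predict(
    experiment=experiment,
    initial_state={"Initial SoC": 1.0},  # assumed keyword for the initial state
)

# Corrupt the clean voltage with zero-mean Gaussian noise of 2 mV amplitude
sigma = 2e-3
voltage = solution["Voltage [V]"].data
noisy_voltage = voltage + np.random.normal(0.0, sigma, size=voltage.shape)

dataset = pybop.Dataset({
    "Time [s]": solution["Time [s]"].data,
    "Current function [A]": solution["Current [A]"].data,
    "Voltage [V]": noisy_voltage,
})

# The two unknown parameters, with priors used as starting information
parameters = pybop.Parameters(
    pybop.Parameter(
        "Negative particle diffusivity [m2.s-1]",
        prior=pybop.Gaussian(4e-15, 1e-15),
    ),
    pybop.Parameter(
        "Contact resistance [Ohm]",
        prior=pybop.Gaussian(0.01, 0.005),
    ),
)
```
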
In a second example, we showcase `PyBOP`'s capability to fit electrochemical impedance data using methods from `pybamm-eis` that enable fast impedance computation of battery models by inversion of a sparse mass matrix after scaling and subtraction of the auto-differentiated Jacobian matrix [@pybamm-eis]. \autoref{fig:impedance-landscape} shows the numerical impedance prediction available in `PyBOP` alongside the cost landscape constructed for the inference task. At the time of publication, gradient-based optimisation and sampling methods are not available when using an impedance workflow.

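Briefly, the underlying computation can be sketched as follows: writing the spatially discretised battery model as $M \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, i)$, linearising about the operating point, and transforming to the frequency domain gives (in the notation used here, with $\mathrm{j} = \sqrt{-1}$)

\begin{equation}
\left( \mathrm{j}\omega M - J \right) \tilde{\mathbf{x}} = \mathbf{b}, \qquad Z(\omega) = \frac{\tilde{v}(\omega)}{\tilde{i}(\omega)},
\end{equation}

where $J = \partial \mathbf{f} / \partial \mathbf{x}$ is the auto-differentiated Jacobian, $\mathbf{b}$ is the input vector associated with the current perturbation $\tilde{i}$, and $\tilde{v}$ is the resulting voltage perturbation, so that each frequency point requires only the solution of one sparse linear system. This is a simplified sketch of the approach described in [@pybamm-eis].
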
![The data and model fit (left) and cost landscape (right) for a frequency-domain impedance parameterisation with a root-mean-squared error cost function, at 5% SOC. \label{fig:impedance-landscape}](figures/joss/impedance.png){ width=100% }

To avoid overcomplicating this example, we continue with identification in the time domain; however, in general these two simulation methods can be combined for improved system excitation. As gradient information is available for this problem, the choice of distance-based cost function and optimiser is not constrained. Due to the difference in magnitude between the two parameters, we apply the logarithmic parameter transformation offered by `PyBOP`. This transforms the search space of the optimiser to allow for a common step size across the parameters, which is not generally required, but improves convergence in this problem. As a demonstration of the parameterisation capabilities of `PyBOP`, \autoref{fig:convergence-min-max} (left) shows the rate of convergence for each of the distance-minimising cost functions, while \autoref{fig:convergence-min-max} (right) shows analogous results for maximising a likelihood. The optimisation is performed with SciPy Minimize using the gradient-based L-BFGS-B method.

![Convergence of the distance-minimising cost functions (left) and the likelihood functions (right), optimised using the L-BFGS-B algorithm. \label{fig:convergence-min-max}](figures/joss/converge.png){ width=100% }

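A sketch of this setup, continuing from the earlier data-generation sketch, is shown below; the transformation and optimiser wrapper names (`LogTransformation`, `SciPyMinimize`) and keyword arguments are assumptions that may differ between `PyBOP` releases.

```python
import pybop

# Log-transform both parameters so the optimiser works in a search space
# where a single step size suits parameters of very different magnitudes
parameters = pybop.Parameters(
    pybop.Parameter(
        "Negative particle diffusivity [m2.s-1]",
        prior=pybop.Gaussian(4e-15, 1e-15),
        transformation=pybop.LogTransformation(),
    ),
    pybop.Parameter(
        "Contact resistance [Ohm]",
        prior=pybop.Gaussian(0.01, 0.005),
        transformation=pybop.LogTransformation(),
    ),
)

problem = pybop.FittingProblem(model, parameters, dataset)
cost = pybop.SumSquaredError(problem)

# Gradient-based optimisation through the SciPy interface (L-BFGS-B)
optim = pybop.SciPyMinimize(cost, method="L-BFGS-B")
results = optim.run()
```
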
Next, example performance of various optimisation algorithms is presented by category: gradient-based methods in \autoref{fig:optimiser-inference} (left), evolutionary strategies in \autoref{fig:optimiser-inference} (middle) and (meta)heuristics in \autoref{fig:optimiser-inference} (right), for a mean squared error cost function; the corresponding optimisation traces on the cost landscape are shown in \autoref{fig:optimiser-landscapes}. Note that the performance of each optimiser depends on the cost environment, prior information and corresponding hyperparameters for each specific problem.

![Convergence of the parameter values obtained for the various optimisation algorithms provided by `PyBOP`. \label{fig:optimiser-inference}](figures/joss/optimisers_parameters.png){ width=100% }

![Cost landscape contour plot with corresponding optimisation traces. The three rows show the gradient-based optimisers (top), evolution strategies (middle), and (meta)heuristics (bottom). \label{fig:optimiser-landscapes}](figures/joss/contour_subplot.png){ width=100% }

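For illustration, the optimiser families compared above share a common interface, so swapping between them is a one-line change. The optimiser class names below are assumptions based on the wrappers `PyBOP` exposes and may differ between releases.

```python
import pybop

# One optimiser from each family, all run against the same cost object
optimiser_classes = [
    pybop.AdamW,   # gradient-based
    pybop.CMAES,   # evolution strategy
    pybop.PSO,     # (meta)heuristic particle swarm
]

for optimiser_class in optimiser_classes:
    optim = optimiser_class(cost, max_iterations=100)
    results = optim.run()
```
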
This example parameterisation task can also be approached from a Bayesian perspective, solved using `PyBOP`'s sampler methods. First, we introduce Bayes' rule,

\begin{equation}
P(\theta|D) = \frac{P(D|\theta)P(\theta)}{P(D)},
\label{eqn:bayes_theorem}
\end{equation}

where $P(\theta|D)$ is the posterior parameter distribution, $P(D|\theta)$ is the likelihood function, $P(\theta)$ is the prior parameter distribution, and $P(D)$ is the model evidence, or marginal likelihood, which acts as a normalising constant. In the case of maximum likelihood estimation or maximum a posteriori estimation, one wishes to maximise $P(D|\theta)$ or $P(\theta|D)$, respectively, and this may be formulated as an optimisation problem as per \autoref{eqn:parameterisation}. However, to estimate the full posterior parameter distribution one must use sampling or other inference methods to reconstruct the function $P(\theta|D)$. The posterior distribution provides information about the uncertainty of the identified parameters, e.g. by calculating the variance or other moments. Monte Carlo methods are used here to sample from the posterior. The selection of Monte Carlo methods available in `PyBOP` includes gradient-based methods such as No-U-Turn [@NUTS:2011] and Hamiltonian [@Hamiltonian:2011], as well as heuristic methods such as differential evolution [@DiffEvolution:2006], and also conventional methods based on random sampling with rejection criteria [@metropolis:1953]. `PyBOP` offers a sampler class that provides the interface to these samplers, which are supplied by the Probabilistic Inference on Noisy Time-Series (`PINTS`) package. \autoref{fig:posteriors} shows the sampled posteriors for the synthetic model described above, using the Haario-Bardenet adaptive covariance sampler [@Haario:2001].

![Posterior distributions of model parameters alongside identified noise on the observations. Shaded area denotes credible interval for each parameter. \label{fig:posteriors}](figures/joss/posteriors.png){ width=100% }

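A sketch of this sampling workflow is given below; the likelihood, posterior and sampler class names, and the `MCMCSampler` signature, are assumptions that may differ between `PyBOP` releases.

```python
import pybop

# Gaussian noise model with the (known) 2 mV noise level of the synthetic data
likelihood = pybop.GaussianLogLikelihoodKnownSigma(problem, sigma0=2e-3)

# The posterior combines the likelihood with the priors attached to the parameters
posterior = pybop.LogPosterior(likelihood)

# Adaptive covariance (Haario-Bardenet) MCMC via the PINTS-backed sampler class
sampler = pybop.MCMCSampler(
    posterior,
    chains=3,
    sampler=pybop.HaarioBardenetACMC,
)
chains = sampler.run()
```
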
## Design optimisation
