Doc: Fix links, typos, grammer, punctuation in doc/example/amici.ipynb #1248

Merged
merged 3 commits into from Dec 17, 2023
50 changes: 25 additions & 25 deletions doc/example/amici.ipynb
@@ -19,7 +19,7 @@
"\n",
"In order to run optimizations and/or uncertainty analysis, we turn to pyPESTO (**P**arameter **ES**timation **TO**olbox for python).\n",
"\n",
"pyPESTO is python tool for parameter estimation. It provides an interface to the model simulation tool [AMICI](https://github.com/AMICI-dev/AMICI) for the simulation of Ordinary Differential Equation (ODE) models specified in the SBML format. With it we can optimize our model parameters given measurement data, we can do uncertainty analysis via profile likelihoods and/or through sampling methods. pyPESTO provides an interface to many optimizers, global and local, such as e.g. SciPy optimizers, Fides and Pyswarm. Additionally it interfaces samplers such as pymc, emcee and some of its own samplers."
"pyPESTO is a Python tool for parameter estimation. It provides an interface to the model simulation tool [AMICI](https://github.com/AMICI-dev/AMICI) for the simulation of Ordinary Differential Equation (ODE) models specified in the SBML format. With it, we can optimize our model parameters given measurement data, we can do uncertainty analysis via profile likelihoods and/or through sampling methods. pyPESTO provides an interface to many optimizers, global and local, such as e.g. SciPy optimizers, Fides and Pyswarm. Additionally, it interfaces samplers such as pymc, emcee and some of its own samplers."
],
"metadata": {
"collapsed": false,
@@ -117,7 +117,7 @@
{
"cell_type": "markdown",
"source": [
"In this example, we want to specify fixed parameters, observables and a $\\sigma$ parameter. Unfortunately, the latter two are not part of the [SBML standard](http://sbml.org/). However, they can be provided to `amici.SbmlImporter.sbml2amici` as demonstrated in the following."
"In this example, we want to specify fixed parameters, observables and a $\\sigma$ parameter. Unfortunately, the latter two are not part of the [SBML standard](https://sbml.org/). However, they can be provided to `amici.SbmlImporter.sbml2amici` as demonstrated in the following."
],
"metadata": {
"collapsed": false,
@@ -131,7 +131,7 @@
"source": [
"#### Constant parameters\n",
"\n",
"Constant parameters, i.e. parameters with respect to which no sensitivities are to be computed (these are often parameters specifying a certain experimental condition) are provided as a list of parameter names."
"Constant parameters, i.e., parameters with respect to which no sensitivities are to be computed (these are often parameters specifying a certain experimental condition) are provided as a list of parameter names."
],
"metadata": {
"collapsed": false,
@@ -159,7 +159,7 @@
"source": [
"#### Observables\n",
"\n",
"We used SBML's [`AssignmentRule`](http://sbml.org/Software/libSBML/5.13.0/docs//python-api/classlibsbml_1_1_rule.html) as a non-standard way to specify *Model outputs* within the SBML file. These rules need to be removed prior to the model import (AMICI does at this time not support these rules). This can be easily done using `amici.assignmentRules2observables()`.\n",
"We used SBML's [`AssignmentRule`](https://sbml.org/software/libsbml/5.18.0/docs/formatted/python-api/classlibsbml_1_1_assignment_rule.html) as a non-standard way to specify *Model outputs* within the SBML file. These rules need to be removed prior to the model import (AMICI does at this time not support these rules). This can be easily done using `amici.assignmentRules2observables()`.\n",
"\n",
"In this example, we introduced parameters named `observable_*` as targets of the observable AssignmentRules. Where applicable we have `observable_*_sigma` parameters for $\\sigma$ parameters (see below)."
],
@@ -618,7 +618,7 @@
{
"cell_type": "markdown",
"source": [
"We can now call the objective function directly for any parameter. The value that is put out is the likelihood function. If we want to interact more with the AMICI returns, we can also return this by call and e.g. retrieve the chi2 value."
"We can now call the objective function directly for any parameter. The value that is put out is the likelihood function. If we want to interact more with the AMICI returns, we can also return this by call and e.g., retrieve the chi2 value."
],
"metadata": {
"collapsed": false,
@@ -641,7 +641,7 @@
}
],
"source": [
"# the generic objective call ## Add what we print\n",
"# the generic objective call\n",
"print(f\"Objective value: {objective(benchmark_parameters)}\")\n",
"# a call returning the AMICI data as well\n",
"obj_call_with_dict = objective(benchmark_parameters, return_dict=True)\n",
@@ -659,7 +659,7 @@
{
"cell_type": "markdown",
"source": [
"Now this makes the whole process already somewhat easier, but still, getting here took us quite some coding and effort. This will only get more complicated, the more complex the model is. Therefore in the next part we will show you how to bypass the tedious lines of code by using PEtab ."
"Now this makes the whole process already somewhat easier, but still, getting here took us quite some coding and effort. This will only get more complicated, the more complex the model is. Therefore, in the next part, we will show you how to bypass the tedious lines of code by using PEtab."
],
"metadata": {
"collapsed": false,
@@ -689,9 +689,9 @@
"\n",
"pyPESTO supports the [PEtab](https://github.com/PEtab-dev/PEtab) standard. PEtab is a data format for specifying parameter estimation problems in systems biology.\n",
"\n",
"A PEtab problem consist of an [SBML](https://sbml.org) file, defining the model topology and a set of `.tsv` files, defining experimental conditions, observables, measurements and parameters (and their optimization bounds, scale, priors...). All files, that make up a PEtab problem, can be structured in a `.yaml` file. The `pypesto.Objective` comming from a PEtab problem corresponds to the negative-log-likelihood/negative-log-posterior disrtibution of the parameters.\n",
"A PEtab problem consist of an [SBML](https://sbml.org) file, defining the model topology and a set of `.tsv` files, defining experimental conditions, observables, measurements and parameters (and their optimization bounds, scale, priors...). All files that make up a PEtab problem can be structured in a `.yaml` file. The `pypesto.Objective` coming from a PEtab problem corresponds to the negative-log-likelihood/negative-log-posterior distribution of the parameters.\n",
"\n",
"For more details on PEtab, the interested reader is refered to [PEtab's format definition](https://petab.readthedocs.io/en/latest/documentation_data_format.html), for examples the reader is refered to the [PEtab benchmark collection](https://github.com/Benchmarking-Initiative/Benchmark-Models-PEtab). The Model from _[Böhm et al. JProteomRes 2014](https://pubs.acs.org/doi/abs/10.1021/pr5006923)_ is part of the benchmark collection and will be used as the running example throughout this notebook.\n"
"For more details on PEtab, the interested reader is referred to [PEtab's format definition](https://petab.readthedocs.io/en/latest/documentation_data_format.html), for examples the reader is referred to the [PEtab benchmark collection](https://github.com/Benchmarking-Initiative/Benchmark-Models-PEtab). The Model from _[Böhm et al. JProteomRes 2014](https://pubs.acs.org/doi/abs/10.1021/pr5006923)_ is part of the benchmark collection and will be used as the running example throughout this notebook.\n"
],
"metadata": {
"collapsed": false,
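Since this hunk introduces the PEtab `.tsv` files, here is a miniature sketch of what a PEtab parameter table carries (column names follow the PEtab format definition; the values and the parsing helper are made up for illustration — real parsing is done by the `petab` library, not by hand):

```python
import csv
import io

# A miniature PEtab-style parameter table, tab-separated as in a real .tsv file.
parameters_tsv = (
    "parameterId\tparameterScale\tlowerBound\tupperBound\testimate\n"
    "k1\tlog10\t1e-5\t1e5\t1\n"
    "k2\tlog10\t1e-5\t1e5\t1\n"
    "sigma\tlog10\t1e-5\t1e5\t1\n"
)

def read_parameter_table(text):
    # Collect per-parameter scale, optimization bounds, and estimate flag,
    # the ingredients pyPESTO turns into a Problem (sketch only).
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    return {
        row["parameterId"]: {
            "scale": row["parameterScale"],
            "bounds": (float(row["lowerBound"]), float(row["upperBound"])),
            "estimate": row["estimate"] == "1",
        }
        for row in reader
    }

table = read_parameter_table(parameters_tsv)
```

The `.yaml` file mentioned above then simply points at the SBML model and at tables like this one.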
@@ -822,7 +822,7 @@
{
"cell_type": "markdown",
"source": [
"This was really straightforward. With this we are still able to do all the same things we did before and also adjust solver setting, change the model etc."
"This was really straightforward. With this, we are still able to do all the same things we did before and also adjust solver setting, change the model, etc."
],
"metadata": {
"collapsed": false,
@@ -846,7 +846,7 @@
}
],
"source": [
"# call the objective function ## fval=\n",
"# call the objective function\n",
"print(f\"Objective value: {problem.objective(benchmark_parameters)}\")\n",
"# change things in the model\n",
"problem.objective.amici_model.requireSensitivitiesForAllParameters()\n",
@@ -883,7 +883,7 @@
"source": [
"## 2. Optimization\n",
"\n",
"Once setup, the optimization can be done very quickly with default settings. If needed these settings can be highly individualized and change according to the needs of our model. In this section we shall go over some of these settings."
"Once setup, the optimization can be done very quickly with default settings. If needed, these settings can be highly individualized and change according to the needs of our model. In this section, we shall go over some of these settings."
],
"metadata": {
"collapsed": false,
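The multistart optimization this section of the notebook sets up can be sketched in plain Python on a toy objective (made-up function names; pyPESTO delegates the local searches to real optimizers such as SciPy or Fides):

```python
import random

def f(x):
    # toy objective with two local minima, at x = -1 and x = +1
    return (x * x - 1.0) ** 2

def grad(x):
    return 4.0 * x * (x * x - 1.0)

def local_optimize(x0, lr=0.05, n_iter=200):
    # a deliberately simple gradient-descent stand-in for a local optimizer
    x = x0
    for _ in range(n_iter):
        x -= lr * grad(x)
    return x, f(x)

def multistart(n_starts, lb=-2.0, ub=2.0, seed=1):
    # draw uniform startpoints, run a local search from each,
    # and sort results by final objective value (best first)
    rng = random.Random(seed)
    starts = [rng.uniform(lb, ub) for _ in range(n_starts)]
    return sorted((local_optimize(x0) for x0 in starts), key=lambda r: r[1])

best_x, best_fval = multistart(10)[0]
```

The sorted list of final values is exactly what the waterfall plot later in the notebook visualizes.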
@@ -931,7 +931,7 @@
"source": [
"### Startpoint method\n",
"\n",
"The startpoint methods describes how you want to choose your starpoints, in case you do a multistart opimization. The default here is `uniform` meaning that each startpoint is a uniform sample from the allowed parameter space. The other two notable options are either `latin_hypercube` or a self defined function."
"The startpoint method describes how you want to choose your startpoints, in case you do a multistart optimization. The default here is `uniform` meaning that each startpoint is a uniform sample from the allowed parameter space. The other two notable options are either `latin_hypercube` or a self defined function."
],
"metadata": {
"collapsed": false,
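The two built-in startpoint methods named above differ in how they cover the parameter space; a sketch of the idea (not pyPESTO's implementation) is:

```python
import random

def uniform_startpoints(n_starts, lb, ub, rng):
    # each startpoint is an independent uniform draw per dimension
    dim = len(lb)
    return [[rng.uniform(lb[i], ub[i]) for i in range(dim)]
            for _ in range(n_starts)]

def latin_hypercube_startpoints(n_starts, lb, ub, rng):
    # divide each dimension into n_starts strata and place exactly one
    # sample in each stratum, so every dimension is covered evenly
    dim = len(lb)
    points = [[0.0] * dim for _ in range(n_starts)]
    for i in range(dim):
        strata = list(range(n_starts))
        rng.shuffle(strata)
        for j, s in enumerate(strata):
            u = (s + rng.random()) / n_starts  # position inside stratum s
            points[j][i] = lb[i] + u * (ub[i] - lb[i])
    return points

rng = random.Random(0)
pts = latin_hypercube_startpoints(5, [0.0, -1.0], [1.0, 1.0], rng)
```

With `uniform`, several startpoints can land in the same region by chance; latin hypercube sampling avoids that clustering at the same cost.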
@@ -959,7 +959,7 @@
"source": [
"### History options\n",
"\n",
"In some cases, it is good to trace what the optimizer did in each step, i.e. the history. There is a multitude of options on what to report here, but the most important one is `trace_record` which turns the history function on and off."
"In some cases, it is good to trace what the optimizer did in each step, i.e., the history. There is a multitude of options on what to report here, but the most important one is `trace_record` which turns the history function on and off."
],
"metadata": {
"collapsed": false,
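The effect of `trace_record` described above is essentially to wrap the objective so every evaluation is logged; a minimal sketch (hypothetical class, not pyPESTO's history machinery):

```python
class HistoryObjective:
    # Wrap an objective so each call is recorded -- the idea behind
    # trace_record in the history options (sketch, not the real class).
    def __init__(self, fun, trace_record=True):
        self.fun = fun
        self.trace_record = trace_record
        self.trace = []  # list of (parameters, objective value)

    def __call__(self, x):
        fval = self.fun(x)
        if self.trace_record:
            self.trace.append((tuple(x), fval))
        return fval

obj = HistoryObjective(lambda x: sum(xi ** 2 for xi in x))
for x in ([1.0, 1.0], [0.5, 0.5], [0.0, 0.0]):
    obj(x)
```

After an optimization, `obj.trace` would hold the path the optimizer took, which is what the history visualizations consume.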
@@ -988,7 +988,7 @@
"source": [
"### Optimization options\n",
"\n",
"Some further possible options for the optimization. Notably `allow_failed_starts`, which in case of a very complicated objective function, can help get to the desired number of optimizations when turned of. As we do not need this here, we create the default options."
"Some further possible options for the optimization. Notably `allow_failed_starts`, which in case of a very complicated objective function, can help get to the desired number of optimizations when turned off. As we do not need this here, we create the default options."
],
"metadata": {
"collapsed": false,
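What `allow_failed_starts` buys you, per the text above, is tolerance for objective failures in part of parameter space; a sketch of that behavior (made-up names, not pyPESTO's optimizer loop):

```python
def multistart_with_failures(objective, starts, allow_failed_starts=True):
    # run each start, tolerating objective failures when the option is set
    results, n_failed = [], 0
    for x0 in starts:
        try:
            results.append((x0, objective(x0)))
        except RuntimeError:
            n_failed += 1
            if not allow_failed_starts:
                raise
    return results, n_failed

def fragile_objective(x):
    # e.g. the ODE solver fails in part of parameter space
    if x < 0:
        raise RuntimeError("simulation failed")
    return (x - 1.0) ** 2

results, n_failed = multistart_with_failures(fragile_objective, [-1.0, 0.5, 2.0])
```

With the option off, the first failing start would abort the whole multistart run.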
@@ -1263,7 +1263,7 @@
"\n",
"The waterfall plot is a visualization of the final objective function values of each start. They are sorted from small to high and then plotted. Similar values will get clustered and get the same color.\n",
"\n",
"This helps determining whether the result is reproducible and whether we reliably found a local minimum that we hope to be the golbal one."
"This helps to determine whether the result is reproducible and whether we reliably found a local minimum that we hope to be the global one."
],
"metadata": {
"collapsed": false,
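The sort-and-cluster step the waterfall description above refers to can be sketched as follows (hypothetical helper and a made-up tolerance; pyPESTO's plotting code handles this internally):

```python
def waterfall_clusters(fvals, tol=0.1):
    # sort final objective values and group values within `tol` of the
    # previous one into a cluster, as a waterfall plot colors them (sketch)
    ordered = sorted(fvals)
    clusters, current = [], [ordered[0]]
    for v in ordered[1:]:
        if v - current[-1] <= tol:
            current.append(v)
        else:
            clusters.append(current)
            current = [v]
    clusters.append(current)
    return clusters

# made-up final values from six optimizer starts
fvals = [12.3, 12.31, 12.29, 15.0, 15.02, 40.7]
clusters = waterfall_clusters(fvals)
```

If many starts fall into the first (best) cluster, the optimum was found reproducibly; a lone best value is a warning sign.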
@@ -1314,7 +1314,7 @@
"source": [
"#### Parameter overview\n",
"\n",
"Here we plot the parameters of all starts within their bounds. This can tell us whether some bounds are always hit and might need to be questioned and if the best starts are similar or differ amongst themselves, hinting already for some unidentifiabilities."
"Here we plot the parameters of all starts within their bounds. This can tell us whether some bounds are always hit and might need to be questioned and if the best starts are similar or differ amongst themselves, hinting already for some non-identifiabilities."
],
"metadata": {
"collapsed": false,
@@ -1351,7 +1351,7 @@
"source": [
"#### Parameter correlation plot\n",
"\n",
"To further look into possible uncertainties, we can plot the correlation of the final points. Sometimes, pairs of parameters are dependent on each other and fixing one might solve some unidentifiability."
"To further look into possible uncertainties, we can plot the correlation of the final points. Sometimes, pairs of parameters are dependent on each other and fixing one might solve some non-identifiability."
],
"metadata": {
"collapsed": false,
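The correlation plot discussed in this hunk boils down to Pearson correlations between parameter columns of the final points; a self-contained sketch on made-up numbers (the helper is hypothetical, pyPESTO computes this for you):

```python
from math import sqrt

def pearson(xs, ys):
    # plain Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def pairwise_correlations(points):
    # correlation between every pair of parameter columns --
    # the quantity a correlation plot visualizes (sketch)
    dim = len(points[0])
    cols = [[p[i] for p in points] for i in range(dim)]
    return {(i, j): pearson(cols[i], cols[j])
            for i in range(dim) for j in range(i + 1, dim)}

# final parameter vectors from four hypothetical starts (rows),
# one column per parameter; the first two columns trade off perfectly
finals = [
    [0.10, 1.90, 3.0],
    [0.20, 1.80, 2.9],
    [0.30, 1.70, 3.1],
    [0.40, 1.60, 3.0],
]
corr = pairwise_correlations(finals)
```

A correlation near ±1, as between the first two columns here, is the kind of pairwise dependency the text says may be resolved by fixing one of the two parameters.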
@@ -1388,7 +1388,7 @@
"source": [
"#### Parameter histogram + scatter\n",
"\n",
"In case we found some dependencies and for further investigation, can also specifically look at the histograms of certain parameters and the pairwise parameter scatter plot."
"In case we found some dependencies and for further investigation, we can also specifically look at the histograms of certain parameters and the pairwise parameter scatter plot."
],
"metadata": {
"collapsed": false,
@@ -1455,7 +1455,7 @@
{
"cell_type": "markdown",
"source": [
"We definitely need to look further into it, and thus we turn to uncertainty quantification in next next section."
"We definitely need to look further into it, and thus we turn to uncertainty quantification in the next section."
],
"metadata": {
"collapsed": false,
@@ -1470,7 +1470,7 @@
"## 4. Uncertainty quantification\n",
"\n",
"This mainly consists of two parts:\n",
"* Profile Liklihoods\n",
"* Profile Likelihoods\n",
"* MCMC sampling"
],
"metadata": {
@@ -1485,9 +1485,9 @@
"source": [
"### Profile likelihood\n",
"\n",
"The profile likelihood uses an optimization scheme to calculate the confidence intervals for each parameter. We start with the best found parameterset of the optimization. Then in each step, we increase/decrease the parameter of interest, fix it and then run one local optimization. We do this until we either hit the bounds or reach a sufficiently bad fit.\n",
"The profile likelihood uses an optimization scheme to calculate the confidence intervals for each parameter. We start with the best found parameter set of the optimization. Then in each step, we increase/decrease the parameter of interest, fix it and then run one local optimization. We do this until we either hit the bounds or reach a sufficiently bad fit.\n",
"\n",
"To run the profiling we do not need a lot of setup, as we did this already for the optimization."
"To run the profiling, we do not need a lot of setup, as we did this already for the optimization."
],
"metadata": {
"collapsed": false,
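The fix-step-reoptimize scheme the hunk above describes can be worked through on a toy two-parameter objective where the inner reoptimization is analytic (a sketch of the scheme, not pyPESTO's profiling code; names and stopping threshold are made up):

```python
def objective(theta1, theta2):
    # toy negative log-likelihood with optimum at (1, 2) and a
    # cross-term so the two parameters interact
    return ((theta1 - 1.0) ** 2 + (theta2 - 2.0) ** 2
            + 0.5 * (theta1 - 1.0) * (theta2 - 2.0))

def profile_theta1(theta1_hat=1.0, theta2_hat=2.0, step=0.1, max_fval=4.0):
    # one direction of a profile: fix theta1 at increasing values,
    # re-optimize theta2 at each step, stop once the fit got bad enough
    profile = []
    theta1 = theta1_hat
    while True:
        # for this toy objective the inner optimization over theta2
        # is analytic: d/dtheta2 = 0 gives theta2 = 2 - 0.25*(theta1 - 1)
        theta2_opt = theta2_hat - 0.25 * (theta1 - theta1_hat)
        fval = objective(theta1, theta2_opt)
        profile.append((theta1, fval))
        if fval > max_fval:
            break
        theta1 += step
    return profile

profile = profile_theta1()
```

In the real setting each inner step is a full local optimization, which is why profiling reuses the optimizer configured earlier.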
@@ -1705,7 +1705,7 @@
"source": [
"## 5. Saving results\n",
"\n",
"Lastly, the whole process took quite some time, but is not necessarily finished. It is therefore very useful, to be able to save the result as is. pyPESTO uses the hdf5 Format and with two very short commands we are able to read and write a result from and to an hdf5 file."
"Lastly, the whole process took quite some time, but is not necessarily finished. It is therefore very useful, to be able to save the result as is. pyPESTO uses the HDF5 format, and with two very short commands we are able to read and write a result from and to an HDF5 file."
],
"metadata": {
"collapsed": false,
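As a stdlib stand-in for the save/load round trip described above — pyPESTO itself writes an HDF5 file via its store module, which json here only imitates — the idea looks like this (the result dictionary is made up):

```python
import json
import os
import tempfile

# a minimal stand-in for a pyPESTO result: best objective values and
# best parameter vector from an optimization run (made-up numbers)
result = {
    "optimize": {"fvals": [12.3, 15.0, 40.7], "best_x": [0.1, 1.9, 3.0]},
}

# write the result to disk ...
path = os.path.join(tempfile.mkdtemp(), "result.json")
with open(path, "w") as f:
    json.dump(result, f)

# ... and read it back, recovering the same content
with open(path) as f:
    loaded = json.load(f)
```

The point of the round trip is the same as with HDF5: the expensive optimization, profiling, and sampling results survive the end of the session.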
@@ -1818,7 +1818,7 @@
{
"cell_type": "markdown",
"source": [
"Now we are able to quickly load the result and visualize them."
"Now we are able to quickly load the results and visualize them."
],
"metadata": {
"collapsed": false,