[WIP]: Add asv benchmarking #207

Merged

Commits (showing changes from 15 of 19 commits):
423bb04
Inital benchmarking setup, suite
BradyPlanden Feb 19, 2024
694cd0c
Merge branch 'develop' into 179-add-airspeed-velocity-for-automated-b…
BradyPlanden Feb 19, 2024
9602190
Updt. asv config, changelog, parameterisaton benchmarks
BradyPlanden Feb 19, 2024
7d00ccf
Merge branch 'develop' into 179-add-airspeed-velocity-for-automated-b…
BradyPlanden Feb 21, 2024
7f7d6df
Add peridoci benchmark workflow and nox session
BradyPlanden Feb 21, 2024
8698a5c
updt results path for upload
BradyPlanden Feb 21, 2024
fc94e48
Increment to Python 3.12, fix typos, add `push` trigger to workflow
BradyPlanden Feb 24, 2024
1c63061
Updt. nox session name
BradyPlanden Feb 24, 2024
5540042
Updt. nox benchmarks session, tests for CI on benchmark workflow, rem…
BradyPlanden Feb 24, 2024
57d43ee
Merge branch 'develop' into 179-add-airspeed-velocity-for-automated-b…
BradyPlanden Feb 24, 2024
aceedc5
asv installation, calls
BradyPlanden Feb 24, 2024
4e3722f
Additional benchmarks, updt build wheels
BradyPlanden Mar 15, 2024
3315602
Merge branch 'develop' into 179-add-airspeed-velocity-for-automated-b…
BradyPlanden Mar 15, 2024
8f0daf1
updt permissions for deployment
BradyPlanden Mar 15, 2024
ebc7b62
add --global arg to git config
BradyPlanden Mar 15, 2024
aa9a395
Limit workflow to pybop repo
BradyPlanden Mar 16, 2024
8e6f609
Add random seed, updt branch target, increment python for publish job…
BradyPlanden Mar 16, 2024
7e530b0
Updt benchmark intial soc, add tracking of optimisation results, incr…
BradyPlanden Mar 17, 2024
2013161
Adds benchmark badge
BradyPlanden Mar 18, 2024
99 changes: 99 additions & 0 deletions .github/workflows/periodic_benchmarks.yaml
@@ -0,0 +1,99 @@
# Initial Source: pybop-team/PyBop

# This workflow periodically runs the benchmark suite in benchmarks/
# using asv and publishes the results, effectively updating
# the display website hosted in the pybop-bench repo

# Steps:
# - Benchmark all commits since the last one that was benchmarked
# - Push results to pybop-bench repo
# - Publish website
name: Benchmarks
on:
  push:
  # Every day at 12 pm UTC
  schedule:
    - cron: "0 12 * * *"
  # Make it possible to trigger the
  # workflow manually
  workflow_dispatch:

jobs:
  benchmarks:
    runs-on: [self-hosted, macOS, ARM64]
    steps:
      - uses: actions/checkout@v4

      - name: Install python & create virtualenv
        shell: bash
        run: |
          eval "$(pyenv init -)"
          pyenv install 3.12 -s
          pyenv virtualenv 3.12 pybop-312-bench

      - name: Install dependencies & run benchmarks
        shell: bash
        run: |
          eval "$(pyenv init -)"
          pyenv activate pybop-312-bench
          python -m pip install -e .[all,dev]
          python -m pip install asv[virtualenv]
          python -m asv machine --machine "SelfHostedRunner"
          python -m asv run --machine "SelfHostedRunner" NEW --show-stderr -v

      - name: Upload results as artifact
        uses: actions/upload-artifact@v4
        with:
          name: asv_periodic_results
          path: results

      - name: Uninstall pyenv-virtualenv & python
        if: always()
        shell: bash
        run: |
          eval "$(pyenv init -)"
          pyenv activate pybop-312-bench
          pyenv uninstall -f $( python --version )

  publish-results:
    name: Push and publish results
    needs: benchmarks
    runs-on: ubuntu-latest
    steps:
      - name: Set up Python 3.11
        uses: actions/setup-python@v5
        with:
          python-version: 3.11

      - name: Install asv
        run: pip install asv

      - name: Checkout pybop-bench repo
        uses: actions/checkout@v4
        with:
          repository: pybop-team/pybop-bench
          token: ${{ secrets.PUSH_BENCH_TOKEN }}

      - name: Download results artifact
        uses: actions/download-artifact@v4
        with:
          name: asv_periodic_results
          path: new_results

      - name: Copy new results and push to pybop-bench repo
        env:
          PUSH_BENCH_EMAIL: ${{ secrets.PUSH_BENCH_EMAIL }}
          PUSH_BENCH_NAME: ${{ secrets.PUSH_BENCH_NAME }}
        run: |
          cp -vr new_results/* results
          git config --global user.email "$PUSH_BENCH_EMAIL"
          git config --global user.name "$PUSH_BENCH_NAME"
          git add results
          git commit -am "Add new benchmark results"
          git push

      - name: Publish results
        run: |
          asv publish
          git fetch origin gh-pages:gh-pages
          asv gh-pages
3 changes: 3 additions & 0 deletions .gitignore
@@ -310,3 +310,6 @@ $RECYCLE.BIN/

# Output JSON files
**/fit_ecm_parameters.json

# Airspeed Velocity
*.asv/
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -2,6 +2,7 @@

## Features

- [#179](https://github.com/pybop-team/PyBOP/pull/203) - Adds `asv` configuration for benchmarking and initial benchmark suite.
- [#218](https://github.com/pybop-team/PyBOP/pull/218) - Adds likelihood base class, `GaussianLogLikelihoodKnownSigma`, `GaussianLogLikelihood`, and `ProbabilityBased` cost function. As well as addition of a maximum likelihood estimation (MLE) example.
- [#185](https://github.com/pybop-team/PyBOP/pull/185) - Adds a pull request template, additional nox sessions `quick` for standard tests + docs, `pre-commit` for pre-commit, `test` to run all standard tests, `doctest` for docs.
- [#215](https://github.com/pybop-team/PyBOP/pull/215) - Adds `release_workflow.md` and updates `release_action.yaml`
23 changes: 23 additions & 0 deletions asv.conf.json
@@ -0,0 +1,23 @@
{
  "version": 1,
  "project": "PyBOP",
  "project_url": "https://github.com/pybop-team/pybop",
  "repo": ".",
  "build_command": [
    "python -m pip install build",
    "python -m build --wheel -o {build_cache_dir} {build_dir}"
  ],
  "default_benchmark_timeout": 180,
  "branches": ["179-add-airspeed-velocity-for-automated-benchmarking"],
  "environment_type": "virtualenv",
  "matrix": {
    "req": {
      "pybamm": [],
      "numpy": [],
      "scipy": [],
      "pints": []
    }
  },
  "build_cache_dir": ".asv/cache",
  "build_dir": ".asv/build"
}
98 changes: 98 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,98 @@
# Benchmarking Directory for PyBOP

Welcome to the benchmarking directory for PyBOP. We use `asv` (airspeed velocity) for benchmarking, which is a tool for running Python benchmarks over time in a consistent environment. This document will guide you through the setup, execution, and viewing of benchmarks.

## Quick Links

- [Airspeed Velocity (asv) Documentation](https://asv.readthedocs.io/)

## Prerequisites

Before you can run benchmarks, you need to ensure that `asv` is installed and that you have a working Python environment. It is also recommended to run benchmarks in a clean, dedicated virtual environment to avoid any side-effects from your local environment.

### Installing `asv`

You can install `asv` using `pip`. It's recommended to do this within a virtual environment:

```bash
pip install asv
```
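
For example, here is a minimal sketch of creating such a dedicated environment with the standard `venv` module before installing `asv` (the environment name is illustrative):

```bash
# Create and activate a clean virtual environment, then install asv into it
python -m venv asv-env
source asv-env/bin/activate
pip install asv
```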

## Setting Up Benchmarks

The `benchmarks` directory already contains a set of benchmarks for the package. To add or modify benchmarks, edit the `.py` files within this directory.

Each benchmark file should contain one or more classes with methods that `asv` will automatically recognize as benchmarks. Here's an example structure for a benchmark file:

```python
class ExampleBenchmarks:
    def setup(self):
        # Code to run before each benchmark method is executed
        pass

    def time_example_benchmark(self):
        # The actual benchmark code
        pass

    def teardown(self):
        # Code to run after each benchmark method is executed
        pass
```
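
`asv` discovers benchmarks by naming convention: methods prefixed with `time_` are timed, while `mem_` and `peakmem_` measure memory use. Benchmarks can also be parameterised through the `params` and `param_names` class attributes, as done in `benchmarks/benchmark_model.py`. A minimal sketch of a parameterised benchmark (the class and values below are illustrative only):

```python
class ParameterisedExample:
    # asv runs every benchmark once per value listed here and passes the
    # value to setup() and to each benchmark method as an argument.
    param_names = ["n"]
    params = [[100, 10_000]]

    def setup(self, n):
        self.data = list(range(n))

    def time_sum(self, n):
        sum(self.data)
```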

## Running Benchmarks

With `asv` installed and your benchmarks set up, you can now run benchmarks using the following standard `asv` commands:

### Running All Benchmarks

To run all benchmarks in your current Python environment:

```bash
asv run --python=same
```

This will test the current state of your codebase by default. You can specify a range of commits to run benchmarks against by appending a commit range to the command, like so:

```bash
asv run <commit-hash-1>..<commit-hash-2>
```

### Running Specific Benchmarks

To run a specific benchmark, use:

```bash
asv run --bench <benchmark name>
```
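
The benchmark name is matched as a regular expression against the full benchmark identifier (`module.Class.method`). For example, to run just the model prediction benchmark from this suite in the current environment (a sketch; the identifier assumes the class layout in `benchmarks/benchmark_model.py`):

```bash
asv run --python=same --bench "benchmark_model.BenchmarkModel.time_model_predict"
```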

### Running Benchmarks for a Specific Environment

To run benchmarks against a specific Python version:

```bash
asv run --python=same # To use the same Python version as the current environment
asv run --python=3.8 # To specify the Python version
```

## Viewing Benchmark Results

After running benchmarks, `asv` will generate results which can be viewed as a web page:

```bash
asv publish
asv preview
```

Now you can open your web browser to the URL provided by `asv` to view the results.

## Continuous Benchmarking

You can also set up `asv` for continuous benchmarking, so that it tracks performance over time. This typically involves integration with a continuous integration (CI) system.

For more detailed instructions on setting up continuous benchmarking, consult the [asv documentation](https://asv.readthedocs.io/en/stable/using.html#continuous-benchmarking).
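
A rough local equivalent is `asv continuous`, which benchmarks two revisions and reports any regressions between them. For example (a sketch assuming the `develop` branch as the comparison base):

```bash
# Compare the current HEAD against develop and flag benchmarks that slow down by more than 10%
asv continuous --factor 1.1 develop HEAD
```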

## Reporting Issues

If you encounter any issues or have suggestions for improving the benchmarks, please open an issue or a pull request in the project repository.

Thank you for contributing to the performance of the package!
Empty file added benchmarks/__init__.py
77 changes: 77 additions & 0 deletions benchmarks/benchmark_model.py
@@ -0,0 +1,77 @@
import pybop
import numpy as np


class BenchmarkModel:
    param_names = ["model", "parameter_set"]
    params = [
        [pybop.lithium_ion.SPM, pybop.lithium_ion.SPMe],
        ["Chen2020"],
    ]

    def setup(self, model, parameter_set):
        """
        Setup the model and problem for predict and simulate benchmarks.

        Args:
            model (pybop.Model): The model class to be benchmarked.
            parameter_set (str): The name of the parameter set to be used.
        """
        # Create model instance
        self.model = model(parameter_set=pybop.ParameterSet.pybamm(parameter_set))

        # Define fitting parameters
        parameters = [
            pybop.Parameter(
                "Current function [A]",
                prior=pybop.Gaussian(0.4, 0.02),
                bounds=[0.2, 0.7],
                initial_value=0.4,
            )
        ]

        # Generate synthetic data
        sigma = 0.001
        self.t_eval = np.arange(0, 900, 2)
        values = self.model.predict(t_eval=self.t_eval)
        corrupt_values = values["Voltage [V]"].data + np.random.normal(
            0, sigma, len(self.t_eval)
        )

        self.inputs = {
            "Current function [A]": 0.4,
        }

        # Create dataset
        dataset = pybop.Dataset(
            {
                "Time [s]": self.t_eval,
                "Current function [A]": values["Current [A]"].data,
                "Voltage [V]": corrupt_values,
            }
        )

        # Create fitting problem
        self.problem = pybop.FittingProblem(
            model=self.model, dataset=dataset, parameters=parameters, init_soc=0.5
        )

    def time_model_predict(self, model, parameter_set):
        """
        Benchmark the predict method of the model.

        Args:
            model (pybop.Model): The model class being benchmarked.
            parameter_set (str): The name of the parameter set being used.
        """
        self.model.predict(inputs=self.inputs, t_eval=self.t_eval)

    def time_model_simulate(self, model, parameter_set):
        """
        Benchmark the simulate method of the model.

        Args:
            model (pybop.Model): The model class being benchmarked.
            parameter_set (str): The name of the parameter set being used.
        """
        self.problem._model.simulate(inputs=self.inputs, t_eval=self.t_eval)