If you'd like to contribute to PyBaMM (thanks!), please have a look at the guidelines below.
If you're already familiar with our workflow, maybe have a quick look at the pre-commit checks directly below.
Before you commit any code, please perform the following checks:
- No style issues: `$ flake8`
- All tests pass: `$ python run-tests.py --unit`
- The documentation builds: `$ cd docs` and then `$ make clean; make html`

You can even run all three at once, using `$ python run-tests.py --quick`.
We use Git and GitHub to coordinate our work. When making any kind of update, we try to follow the procedure below.
- Create an issue where new proposals can be discussed before any coding is done.
- Create a branch of this repo (ideally on your own fork), where all changes will be made.
- Download the source code onto your local system, by cloning the repository (or your fork of the repository).
- Install PyBaMM with the developer options.
- Test if your installation worked, using the test script: `python run-tests.py --unit`
You now have everything you need to start making changes!
- PyBaMM is developed in Python, and makes heavy use of NumPy (see also NumPy for MATLAB users and Python for R users).
- Make sure to follow our coding style guidelines.
- Commit your changes to your branch with useful, descriptive commit messages: remember these are publicly visible and should still make sense a few months from now. While developing, you can keep using the GitHub issue you're working on as a place for discussion. Refer to your commits when discussing specific lines of code.
- If you want to add a dependency on another library, or re-use code you found somewhere else, have a look at these guidelines.
- Test your code!
- PyBaMM has online documentation at http://pybamm.readthedocs.io/. To make sure any new methods or classes you added show up there, please read the documentation section.
- If you added a major new feature, perhaps it should be showcased in an example notebook.
- When you feel your code is finished, or at least warrants serious discussion, run the pre-commit checks and then create a pull request (PR) on PyBaMM's GitHub page.
- Once a PR has been created, it will be reviewed by any member of the community. Changes might be suggested, which you can make by simply adding new commits to the branch. When everything's finished, someone with the right GitHub permissions will merge your changes into the PyBaMM master repository.
Finally, if you really, really, really love developing PyBaMM, have a look at the current project infrastructure.
To install PyBaMM with all developer options, type:
```bash
pip install -e .[dev,docs]
```

This will:

- Install all the dependencies for PyBaMM, including the ones for documentation (docs) and development (dev).
- Tell Python to use your local pybamm files when you use `import pybamm` anywhere on your system (a quick way to check this is sketched below).
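For instance, this minimal check (not part of the official workflow, just a quick sanity test) should print a path inside your local repository if the editable install worked:

```python
import pybamm

# With an editable ("-e") install, this path should point into your local
# PyBaMM clone rather than into site-packages.
print(pybamm.__file__)
```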
PyBaMM follows the PEP8 recommendations for coding style. These are very common guidelines, and community tools have been developed to check how well projects implement them.
We use flake8 to check our PEP8 adherence. To try this on your system, navigate to the PyBaMM directory in a console and type

```bash
flake8
```

The configuration file `.flake8` allows us to ignore some errors. If you think an error type should be added to or removed from this list, please submit an issue.
When you commit your changes they will be checked against flake8 automatically (see infrastructure).
We use black to automatically configure our code to adhere to PEP8. Black can be used in two ways:
- Command line: navigate to the PyBaMM directory in a console and type `black {source_file_or_directory}`.
- Editor: black can be configured to automatically reformat a Python script each time the script is saved in an editor.
If you want to use black in your editor, you may need to change the max line length in your editor settings.
Even when code has been formatted by black, you should still make sure that it adheres to the PEP8 standard set by Flake8.
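As an illustration of what black does, here is a hypothetical snippet (not taken from the PyBaMM source) before and after formatting:

```python
# Before formatting: uneven spacing and mixed quote styles
def make_options(model,solver= None):
    return {'model':model,"solver":solver}


# After running black: normalised spacing and double quotes
def make_options(model, solver=None):
    return {"model": model, "solver": solver}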
Naming is hard. In general, we aim for descriptive class, method, and argument names. Avoid abbreviations when possible without making names overly long, so `mean` is better than `mu`, but a class name like `MyClass` is fine.

Class names are CamelCase and start with an upper case letter, for example `MyOtherClass`. Method and variable names are lower case and use underscores for word separation, for example `x` or `iteration_count`.
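For example (hypothetical names, purely for illustration):

```python
class VoltageTracker:
    """Class names are CamelCase and start with an upper case letter."""

    def compute_mean_voltage(self, voltage_samples):
        # Method and variable names are lower case, with underscores between words
        iteration_count = len(voltage_samples)
        return sum(voltage_samples) / iteration_count
```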
While it's a bad idea for developers to "reinvent the wheel", it's important for users to get a reasonably sized download and an easy install. In addition, external libraries can sometimes cease to be supported, and when they contain bugs it might take a while before fixes become available as automatic downloads to PyBaMM users. For these reasons, all dependencies in PyBaMM should be thought about carefully, and discussed on GitHub.
Direct inclusion of code from other packages is possible, as long as their license permits it and is compatible with ours, but again should be considered carefully and discussed in the group. Snippets from blogs and stackoverflow can often be included without attribution, but if they solve a particularly nasty problem (or are very hard to read) it's often a good idea to attribute (and document) them, by making a comment with a link in the source code.
On the other hand, we do want to compare several tools, to generate documentation, and to speed up development. For this reason, the dependency structure is split into four parts:
- Core PyBaMM: A minimal set, including things like NumPy, SciPy, etc. All infrastructure should run against this set of dependencies, as well as any numerical methods we implement ourselves.
- Extras: Other inference packages and their dependencies. Methods we don't want to implement ourselves, but do want to provide an interface to can have their dependencies added here.
- Documentation generating code: Everything you need to generate and work on the docs.
- Development code: Everything you need to do PyBaMM development (so all of the above packages, plus flake8 and other testing tools).
Only 'core pybamm' is installed by default. The others have to be specified explicitly when running the installation command.
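In practice this split is expressed through setuptools extras. A simplified sketch of what that looks like in a setup.py (the package lists here are illustrative, not the exact PyBaMM dependencies) is:

```python
from setuptools import setup, find_packages

setup(
    name="pybamm",
    packages=find_packages(),
    # Core PyBaMM: installed by default
    install_requires=["numpy", "scipy"],
    # Optional groups, selected with e.g. `pip install -e .[dev,docs]`
    extras_require={
        "docs": ["sphinx"],
        "dev": ["flake8", "black"],
    },
)
```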
We use Matplotlib in PyBaMM, but with two caveats:
First, Matplotlib should only be used in plotting methods, and these should never be called by other PyBaMM methods. So users who don't like Matplotlib will not be forced to use it in any way. Use in notebooks is OK and encouraged.
Second, Matplotlib should never be imported at the module level, but always inside methods. For example:
```python
def plot_great_things(self, x, y, z):
    import matplotlib.pyplot as pl
    ...
```

This allows people to (1) use PyBaMM without ever importing Matplotlib and (2) configure Matplotlib's back-end in their scripts, which must be done before e.g. `pyplot` is first imported.
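For example, a user script can pick a back-end before any PyBaMM plotting method triggers the pyplot import (a minimal sketch; the choice of the Agg back-end is just an example):

```python
import matplotlib

# Must happen before matplotlib.pyplot is imported, which PyBaMM only does
# inside its plotting methods
matplotlib.use("Agg")

import pybamm  # importing pybamm itself does not import matplotlib.pyplot
```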
All code requires testing. We use the unittest package for our tests. (These tests typically just check that the code runs without error, and so are more debugging than testing in a strict sense. Nevertheless, they are very useful to have!)
To run quick tests, type

```bash
python run-tests.py --unit
```
Every new feature should have its own test. To create one, have a look at the `tests` directory and see if there's a test for a similar method; copy-pasting this is a good way to start.

Next, add some simple (and speedy!) tests of your main features. If these run without exceptions that's a good start! Then check the output of your methods using any of these assert methods.
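For example, a new test might look something like this (a minimal sketch of a hypothetical test, using `pybamm.Scalar` just to have something quick to evaluate):

```python
import unittest

import pybamm


class TestMyNewFeature(unittest.TestCase):
    def test_scalar_addition(self):
        # A speedy smoke test: build a small expression and check it runs
        expression = pybamm.Scalar(2) + pybamm.Scalar(3)
        # Then check the output with one of the unittest assert methods
        self.assertAlmostEqual(expression.evaluate(), 5)


if __name__ == "__main__":
    unittest.main()
```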
The tests are divided into `unit` tests, whose aim is to check individual bits of code (e.g. discretising a gradient operator, or solving a simple ODE), and `integration` tests, which check how parts of the program interact as a whole (e.g. solving a full model).
If you want to check integration tests as well as unit tests, type

```bash
python run-tests.py --unit --folder all
```
When you commit anything to PyBaMM, these checks will also be run automatically (see infrastructure).
To test all example scripts and notebooks, type

```bash
python run-tests.py --examples
```
If notebooks fail because of changes to pybamm, it can be a bit of a hassle to debug. In these cases, you can create a temporary export of a notebook's Python content using

```bash
python run-tests.py --debook examples/notebooks/notebook-name.ipynb script.py
```
Often, the code you write won't pass the tests straight away, at which stage it will become necessary to debug. The key to successful debugging is to isolate the problem by finding the smallest possible example that causes the bug. In practice, there are a few tricks to help you to do this, which we give below. Once you've isolated the issue, it's a good idea to add a unit test that replicates this issue, so that you can easily check whether it's been fixed, and make sure that it's easily picked up if it crops up again. This also means that, if you can't fix the bug yourself, it will be much easier to ask for help (by opening a bug-report issue).
1. Run individual test scripts instead of the whole test suite:

   ```bash
   python tests/unit/path/to/test
   ```

   You can also run an individual test from a particular script, e.g.

   ```bash
   python tests/unit/test_quick_plot.py TestQuickPlot.test_failure
   ```
   If you want to run several, but not all, of the tests from a script, you can restrict which tests are run by using the skipping decorator:

   ```python
   @unittest.skip("")
   def test_bit_of_code(self):
       ...
   ```

   or by just commenting out all the tests you don't want to run.

2. Set break points, either in your IDE or using the Python debugging module. To use the latter, add the following line where you want to set the break point:

   ```python
   import ipdb; ipdb.set_trace()
   ```

   This will start the Python interactive debugger. If you want to be able to use magic commands from ipython, such as `%timeit`, then set

   ```python
   from IPython import embed; embed(); import ipdb; ipdb.set_trace()
   ```

   at the break point instead.

   Figuring out where to start the debugger is the real challenge. Some good ways to set debugging break points are:
   a. Try-except blocks. Suppose the line `do_something_complicated()` is raising a `ValueError`. Then you can put a try-except block around that line:

      ```python
      try:
          do_something_complicated()
      except ValueError:
          import ipdb; ipdb.set_trace()
      ```

      This will start the debugger at the point where the `ValueError` was raised, and allow you to investigate further. Sometimes, it is more informative to put the try-except block further up the call stack than exactly where the error is raised.
   b. Warnings. If functions are raising warnings instead of errors, it can be hard to pinpoint where they are coming from. Here, you can use the `warnings` module to convert warnings to errors:

      ```python
      import warnings
      warnings.simplefilter("error")
      ```

      Then you can use a try-except block, as in a., but with, for example, `RuntimeWarning` instead of `ValueError`.
   c. Stepping through the expression tree. Most calls in PyBaMM are operations on expression trees. To view an expression tree in ipython, you can use the `render` command:

      ```python
      expression_tree.render()
      ```

      You can then step through the expression tree, using the `children` attribute, to pinpoint exactly where a bug is coming from. For example, if `expression_tree.jac(y)` is failing, you can check `expression_tree.children[0].jac(y)`, then `expression_tree.children[0].children[0].jac(y)`, etc. A helper that automates this walk is sketched after this list.
3. To isolate whether a bug is in a model, its Jacobian or its simplified version, you can set the `use_jacobian` and/or `use_simplify` attributes of the model to `False` (they are both `True` by default for most models).
4. If a model isn't giving the answer you expect, you can try comparing it to other models. For example, you can investigate parameter limits in which two models should give the same answer by setting some parameters to be small or zero. The `StandardOutputComparison` class can be used to compare some standard outputs from battery models.
5. To get more information about what is going on under the hood, and hence understand what is causing the bug, you can set the logging level to `DEBUG` by adding the following line to your test or script:

   ```python
   pybamm.set_logging_level("DEBUG")
   ```
6. In models that inherit from `pybamm.BaseBatteryModel` (i.e. any battery model), you can use `self.process_parameters_and_discretise` to process a symbol and see what it will look like.
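Building on point c above, a small helper along these lines (a sketch only, assuming each expression tree node exposes `children` and `jac` as described there) can automate the walk down the tree:

```python
def find_failing_node(expression_tree, y):
    """Return the deepest node whose .jac(y) raises, mirroring the manual walk in point c."""
    for child in expression_tree.children:
        try:
            child.jac(y)
        except Exception:
            # This child also fails, so the bug is somewhere inside it: descend
            return find_failing_node(child, y)
    # No child fails, so this node itself is the smallest failing example
    return expression_tree
```

You can then call `.render()` on the returned node to inspect it.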
Sometimes, a bit of code will take much longer than you expect to run. In this case, you can set

```python
from IPython import embed; embed(); import ipdb; ipdb.set_trace()
```

as above, and then use some of the profiling tools. In order of increasing detail:
1. Simple timer. In ipython, the command

   ```
   %time command_to_time()
   ```

   tells you how long the line `command_to_time()` takes. You can use `%timeit` instead to run the command several times and obtain more accurate timings.
2. Simple profiler. Using `%prun` instead of `%time` will give a brief profiling report.
3. Detailed profiler. You can install the detailed profiler `snakeviz` through pip:

   ```bash
   pip install snakeviz
   ```

   and then, in ipython, run

   ```
   %load_ext snakeviz
   %snakeviz command_to_time()
   ```

   This will open a window in your browser with detailed profiling information.
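Outside ipython, the standard library's cProfile module gives similar information (a minimal sketch; `command_to_time` here is a stand-in for your own slow call):

```python
import cProfile
import time


def command_to_time():
    # Hypothetical stand-in for the slow PyBaMM call you want to profile
    time.sleep(0.1)


# Print a profiling report sorted by cumulative time
cProfile.run("command_to_time()", sort="cumtime")
```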
PyBaMM is documented in several ways.
First and foremost, every method and every class should have a docstring that describes in plain terms what it does, and what the expected input and output is.
These docstrings can be fairly simple, but can also make use of reStructuredText, a markup language designed specifically for writing technical documentation. For example, you can link to other classes and methods by writing `` :class:`pybamm.Model` `` and `` :meth:`run()` ``.
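For example, a docstring for a hypothetical method could look like this (the parameter and return types are illustrative):

```python
def run(self, model):
    """
    Solve the given model and return its solution.

    Uses reStructuredText roles to cross-reference the documentation, e.g.
    :class:`pybamm.BaseModel` links to the class documentation.

    Parameters
    ----------
    model : :class:`pybamm.BaseModel`
        The model to solve.

    Returns
    -------
    dict
        A dictionary of solution variables.
    """
```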
In addition, we write a (very) small bit of documentation in separate reStructuredText files in the `docs` directory. Most of what these files do is simply import docstrings from the source code. But they also do things like add tables and indexes. If you've added a new class to a module, search the `docs` directory for that module's `.rst` file and add your class (in alphabetical order) to its index. If you've added a whole new module, copy-paste another module's file and add a link to your new file in the appropriate `index.rst` file.
Using Sphinx, the documentation in `docs` can be converted to HTML, PDF, and other formats. In particular, we use it to generate the documentation on http://pybamm.readthedocs.io/.
To test and debug the documentation, it's best to build it locally. To do this, make sure you have the relevant dependencies installed (see installation), navigate to your PyBaMM directory in a console, and then type:
```bash
cd docs
make clean
make html
```

Next, open a browser, and navigate to your local PyBaMM directory (by typing the path, or part of the path, into your location bar). Then have a look at `<your pybamm path>/docs/build/html/index.html`.
Major PyBaMM features are showcased in Jupyter notebooks stored in the examples directory. Which features are "major" is of course wholly subjective, so please discuss on GitHub first!
All example notebooks should be listed in examples/README.md. Please follow the (naming and writing) style of existing notebooks where possible.
Where possible, notebooks are tested daily. A list of slow notebooks (which time out and fail tests) is maintained in `.slow-books`; these notebooks will be excluded from daily testing.
Installation of PyBaMM and dependencies is handled via setuptools.

Configuration files:

- `setup.py`

Note that this file must be kept in sync with the version number in `pybamm/__init__.py`.
All committed code is tested using Travis CI, and the results are published at https://travis-ci.org/pybamm-team/PyBaMM.

Configuration files:

- `.travis.yml`

For every commit, Travis runs unit tests, integration tests, doc tests, flake8 and notebook tests.
Code coverage (how much of our code is actually seen by the (Linux) unit tests) is measured using Codecov, and a report is visible at https://codecov.io/gh/pybamm-team/PyBaMM.

Configuration files:

- `.coveragerc`
Documentation is built using https://readthedocs.org/ and published on http://pybamm.readthedocs.io/.
Editable notebooks are made available using Binder at https://mybinder.org/v2/gh/pybamm-team/PyBaMM/master.
Configuration files:

- `postBuild`
GitHub does some magic with particular filenames. In particular:
- The first page people see when they go to our GitHub page displays the contents of README.md, which is written in the Markdown format. Some guidelines can be found here.
- The license for using PyBaMM is stored in LICENSE, and automatically linked to by GitHub.
- This file, CONTRIBUTING.md, is recognised as the contribution guidelines, and a link to it is automatically displayed when new issues or pull requests are created.
This CONTRIBUTING.md file, along with large sections of the code infrastructure, was copied from the excellent Pints GitHub repo.