Welcome - Thank you for wanting to make this project better! This section provides an overview of how the repository is structured and how to work with the codebase.
Before you dive into this guide, please read the following first:
- The Code of Conduct
- The Contributing Guide
The pygitops project uses Docker to make it easy to set up a consistent development environment. The Docker documentation has details on how to install Docker and Docker Compose on your computer.
Once you have installed Docker and Docker Compose, you can execute the test suite by running the following command from your terminal:
```bash
docker-compose run --rm test
```
If you want to be able to execute code in the container:

```bash
docker-compose run --rm devbox
(your code here)
```
In the devbox environment you'll be able to enter a Python shell and import `pygitops` or any of its dependencies. The devbox environment also comes with `git-core`, a system dependency needed to run `gitpython`, our package's main dependency.
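For example, a quick sanity check inside the devbox Python shell might look like this (a minimal sketch; `git` is the import name that gitpython uses):

```python
# Run inside the devbox container's Python shell.
import pygitops  # the package under development
import git       # gitpython, our main dependency (requires git-core)
```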
The Docker container has pdb++ installed, which you can use as a debugger. (However, you are welcome to set up a different debugger if you would like.) This allows you to easily create a breakpoint anywhere in the code:
```python
def my_function():
    breakpoint()
    ...
```
When you run your code, you will drop into an interactive pdb++ debugger. See the documentation on pdb and pdb++ for more information.
You'll be unable to merge code unless the linting and tests pass. You can run these in your container via:

```bash
docker-compose run --rm test
```
This will run the same tests, linting, and code coverage that are run by the CI pipeline. The only difference is that, when run locally, `black` and `isort` are configured to automatically correct the issues they detect.
Generally we should endeavor to write tests for every feature. Every new feature branch should increase the test coverage rather than decrease it.
We use pytest as our testing framework.
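As a quick illustration (this is a hypothetical test, not one from the pygitops suite), a pytest test is just a function whose name starts with `test_` that makes plain `assert` statements:

```python
# test_example.py -- a hypothetical pytest case, for illustration only.
def add(a: int, b: int) -> int:
    return a + b


def test_add():
    # pytest discovers functions prefixed with `test_` and reports
    # failing assertions with detailed introspection.
    assert add(2, 3) == 5
```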
To customize or override a specific testing stage, please read the documentation specific to that tool.
Setuptools is used to package the library. `setup.py` must not import anything from the package: when installing from source, the user may not have the package's dependencies installed, and importing the package is likely to raise an `ImportError`. For this reason, the package version should be obtained without importing. This explains why `setup.py` uses a regular expression to grab the version from `__init__.py` without actually importing any dependencies.
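A minimal sketch of that approach (the exact pattern and attribute name used in `setup.py` may differ):

```python
# Sketch: read the version string out of the package's __init__.py
# with a regular expression, so setup.py never imports the package.
import re
from pathlib import Path

init_text = Path("pygitops", "__init__.py").read_text()
match = re.search(r'^__version__\s*=\s*["\']([^"\']+)["\']', init_text, re.MULTILINE)
if match is None:
    raise RuntimeError("Unable to find __version__ in pygitops/__init__.py")
version = match.group(1)
```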
- `requirements.txt` - Lists all direct dependencies (packages imported by the library).
- `requirements-test.txt` - Lists all direct requirements needed to run the test suite and lints.
Once the package is ready to be released, there are a few things that need to be done:
- Run the version bump script with the appropriate part name (`major`, `minor`, or `patch`). For example: `docker-compose run --rm bump minor`. This will update all affected files (including the changelog) with the new version on whichever branch you are on.
- Create a pull request from your branch.
- Get the pull request approved.
- Merge the pull request to the default branch.
!!! warning
    Take care not to run the bump script more than once!
Merging the pull request will trigger a GitHub Action that creates a new release. The creation of this new release will, in turn, trigger a GitHub Action that builds a wheel and a source distribution of the package and pushes them to PyPI.
!!! warning
    The action that uploads the files to PyPI will not run until a repository maintainer acknowledges that the job is ready to run. This is to keep the PyPI publishing token secure. Otherwise, any job would have access to the token.
In addition to uploading the files to PyPI, the documentation website will be updated to include the new version. If the new version is a full release, it will be made the new `latest` version.
The Continuous Integration (CI) pipeline runs to confirm that the repository is in a good state. It runs when someone creates a pull request or pushes new commits to the branch of an existing pull request. The pipeline runs multiple jobs that help verify the state of the code.
This same pipeline also runs on the default branch when a maintainer merges a pull request.
The first set of jobs that run as part of the CI pipeline are linters that perform static analysis on the code. These include mypy, Black, isort, Flake8, and Bandit.
The next set of jobs runs the unit tests using pytest. The pipeline runs the test cases across each supported version of Python to ensure compatibility.
For each run of the test cases, the job records the test results and code coverage information. The pipeline uploads the code coverage information to Codecov to ensure that a pull request doesn't significantly reduce the total code coverage percentage or introduce a large amount of untested code.
The next set of jobs builds the wheel distribution, installs it into a virtual environment, and then runs Python to import the library version. This works as a smoke test to ensure that the library can be packaged correctly and used. The pipeline runs this check across each supported version of Python to ensure compatibility.
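Conceptually, the smoke test amounts to something like the following (a sketch; the pipeline's actual commands may differ, and the `__version__` attribute is assumed from the packaging notes above):

```python
# Run inside a fresh virtual environment where only the built wheel
# is installed; a clean import proves the wheel was packaged correctly.
import pygitops

print(pygitops.__version__)
```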
The remaining jobs are all related to documentation.
- A job runs each of the code examples that are used in the documentation to verify they produce the expected results.
- A job builds the documentation in strict mode so that it will fail if there are any errors. The job records the generated files so that the documentation website can be viewed in its rendered form.
- When the pipeline runs as a result of a maintainer merging a pull request to the default branch, a job publishes the current state of the documentation as the `dev` version. This allows users to view how the documentation has changed since a maintainer published the `latest` version.