From ff38736355659a45fac612eeaf34121efe95a8bb Mon Sep 17 00:00:00 2001 From: Floris-Jan Willemsen Date: Tue, 3 Oct 2023 16:50:59 +0200 Subject: [PATCH] Searchspace improvements and project meta modernization (#214) * Implemented PySMT in Kernel Tuner * Added mapping of parameter names to integers for better performance * Improved tests and compatibility for parameter mapping * Switch from setup.py/cfg to pyproject.toml * Added linting information, VS Code settings and recommendations * Always return compiled functions as a list, application of formatter * Improvements to string to restriction parsing * Major speedup due to restriction splitting with parameter usage detection * Refactored tests for parsing restrictions due to changes * Minor improvements to an assert and a test case * Searchspace list to NumPy conversion only happens if needed * Searchspace objects can now be initialized from ATF logfiles (caches), unified searchspace builder returns * Added safeguard to prevent changing ATF block size names after compilation stage * Fixed Constraint conversions * Changed to non-forwardchecking by default, moved MaxProdConstraint to constraint * Parsing of common operations to python-constraint built-in Constraints * Fixed an issue with converting restrictions to Constraints * Enabled converting to Constraint with '>' and '<' as well, added use of MinProdConstraint * Added mapping of '>' to '>=' and '<' to '<=' for single-variable restriction * Return None on trying to convert to numeric Constraint * Automatic transformation of restrictions with multiple comparators to multiple restrictions with a single comparator, increasing the chance of converting to a built-in restriction * Added support for PySMT on generated searchspaces * Several improvements and fixes for PySMT builder, especially for preserving order * Fixed an issue where restrictions would not be parsed to specific constraints, added tests for conversion to constraints, added requirement for Python version 
* Removed redundant parameter mapping * Minor fixes for testing * Made scikit-opt (skopt) optional in strategy bayes_opt * Changed to Poetry and pyproject.toml * Added tests for pyproject.toml * Updated changelog to reflect changes, updated VS Code settings * Updated documentation to use pyproject file metadata, other minor updates * Minor bugfixes based on SonarCloud * Re-added support for Python 3.8, updated dependencies accordingly * Updated GitHub Action workflows to use Poetry and Nox * Removed unnecessary pull_request trigger for howfairis, added manual trigger for build & test * Updated workflows with new Setup Nox 2 and new checkout * Removed Windows from test OSes * Updated publishing workflow, expanded TODOs in changelog * Finished PyPI publication workflow with Poetry * Fixed errors and warnings with the docs, as well as compatibility issues * Updated dependencies * Updated documentation for installation, development setup etc. * Minor updates: include notebook files in package, version bump * Removed usage of OrderedDict as per issue #209, minor changes to tests for this, reorganized imports and made sure some apparently unused imports are not automatically removed * Updated test workflow to include upload to CodeCov for each OS, improved Noxfile so only the last test generates coverage report, added PyPI classifiers * Disable automatic upload to CodeCov for now * Solved issue #139: Reimplemented Latin Hypercube Sampling with SciPy * Improved Nox with optional dependencies for CUDA, HIP, OpenCL and arguments to disable these * Updated Contributing and Installation guidelines, specifically added development environment setup instructions, also added code syntax highlighting for Sphinx * Minor changes to dependencies and environment setup * Minor changes to dependencies and environment setup * Added bruteforce solver to Searchspace object, improved tests for Searchspace * Solved a bug in the constraints parsing that caused wrong searchspace outcomes, 
added and expanded searchspace tests, resolved warnings * Added missing nox-poetry to test dependencies * Added nox-poetry to GitHub test Action * Added Poetry setup to GitHub Test Action * Added extensive development environment setup instructions, updated installation documentation * Add an exception for the nvml_ parameter check if in simulation mode * Added optional additional (non-dependency) installation to Noxfile, with CUDA version differentiation. Added a tolerance to the energy power frequency model test. * Fixed additional test input argument * Improved detection of CUDA version using NVCC * Improved automatic selection of cupy prebuilt version, from most exact to most general, and improved warnings * Set the default environment used by Nox using a file * Updated the documentation for the Noxenv file and other minor improvements * Updated VS Code settings for testing from VS Code, updated documentation * Fixed HIP import error, made backend import error messages point to documentation * Restored accidentally removed import * Added an option for Nox to remove the other environment caches before each session is run, and to clean up temporary files * Temporarily skip broken HIP tests as per issue #217, avoided error in Noxfile when no temporary files are present * Improved Noxfile functionality for removing environments after use, and documentation on this feature --- .github/workflows/cffconvert.yml | 29 +- .github/workflows/docs-on-release.yml | 76 +- .github/workflows/docs.yml | 64 +- .github/workflows/publish-python-package.yml | 56 + .github/workflows/python-app.yml | 39 - .github/workflows/python-publish.yml | 40 - .github/workflows/test-python-package.yml | 42 + .../workflows/update-fair-software-badge.yml | 55 +- .gitignore | 10 + .vscode/extensions.json | 14 + .vscode/settings.json | 30 + CHANGELOG.md | 13 +- CONTRIBUTING.rst | 91 +- INSTALL.rst | 72 +- README.rst | 43 +- doc/source/conf.py | 210 +- doc/source/design.rst | 60 +- 
doc/source/docutils.conf | 2 + doc/source/matrix_multiplication.ipynb | 6 +- kernel_tuner/__init__.py | 4 +- kernel_tuner/backends/cupy.py | 39 +- kernel_tuner/backends/hip.py | 77 +- kernel_tuner/backends/nvcuda.py | 35 +- kernel_tuner/backends/opencl.py | 36 +- kernel_tuner/backends/pycuda.py | 43 +- kernel_tuner/energy/energy.py | 28 +- kernel_tuner/file_utils.py | 28 +- kernel_tuner/interface.py | 385 +- kernel_tuner/observers/hip.py | 6 +- kernel_tuner/observers/nvml.py | 71 +- kernel_tuner/runners/sequential.py | 15 +- kernel_tuner/searchspace.py | 611 +++- kernel_tuner/strategies/basinhopping.py | 14 +- kernel_tuner/strategies/bayes_opt.py | 399 +- kernel_tuner/strategies/common.py | 28 +- kernel_tuner/strategies/diff_evo.py | 8 +- kernel_tuner/strategies/dual_annealing.py | 11 +- kernel_tuner/strategies/firefly_algorithm.py | 18 +- kernel_tuner/strategies/genetic_algorithm.py | 21 +- kernel_tuner/strategies/greedy_ils.py | 6 +- kernel_tuner/strategies/greedy_mls.py | 6 +- kernel_tuner/strategies/minimize.py | 22 +- kernel_tuner/strategies/mls.py | 6 +- kernel_tuner/strategies/ordered_greedy_mls.py | 6 +- kernel_tuner/strategies/pso.py | 9 +- kernel_tuner/strategies/random_sample.py | 7 +- .../strategies/simulated_annealing.py | 10 +- kernel_tuner/util.py | 584 ++- noxfile.py | 181 + poetry.lock | 3256 +++++++++++++++++ pyproject.toml | 140 + setup.cfg | 2 - setup.py | 99 - test/strategies/test_bayesian_optimization.py | 28 +- test/strategies/test_common.py | 9 +- test/strategies/test_genetic_algorithm.py | 6 +- test/strategies/test_strategies.py | 7 +- test/test_common.py | 13 +- test/test_cuda_functions.py | 11 +- test/test_cupy_functions.py | 4 +- test/test_energy.py | 8 +- test/test_file_utils.py | 11 +- test/test_hip_functions.py | 27 +- test/test_hyper.py | 6 +- test/test_observers.py | 3 +- test/test_opencl_functions.py | 8 +- test/test_runners.py | 9 +- test/test_searchspace.py | 223 +- test/test_toml_file.py | 72 + test/test_util_functions.py | 207 
+- 70 files changed, 6037 insertions(+), 1778 deletions(-) create mode 100644 .github/workflows/publish-python-package.yml delete mode 100644 .github/workflows/python-app.yml delete mode 100644 .github/workflows/python-publish.yml create mode 100644 .github/workflows/test-python-package.yml create mode 100644 .vscode/extensions.json create mode 100755 .vscode/settings.json create mode 100644 doc/source/docutils.conf create mode 100644 noxfile.py create mode 100644 poetry.lock create mode 100644 pyproject.toml delete mode 100644 setup.cfg delete mode 100644 setup.py create mode 100644 test/test_toml_file.py diff --git a/.github/workflows/cffconvert.yml b/.github/workflows/cffconvert.yml index 707a71c4b..416339a9a 100644 --- a/.github/workflows/cffconvert.yml +++ b/.github/workflows/cffconvert.yml @@ -1,19 +1,20 @@ -name: cffconvert +# This workflow validates the citation file in the repository + +name: Citation file validation on: - push: - paths: - - CITATION.cff + push: + paths: + - CITATION.cff jobs: - validate: - name: "validate" - runs-on: ubuntu-latest - steps: - - name: Check out a copy of the repository - uses: actions/checkout@v2 + validate: + runs-on: ubuntu-latest + steps: + - name: Check out a copy of the repository + uses: actions/checkout@v4 - - name: Check whether the citation metadata from CITATION.cff is valid - uses: citation-file-format/cffconvert-github-action@2.0.0 - with: - args: "--validate" + - name: Check whether the citation metadata from CITATION.cff is valid + uses: citation-file-format/cffconvert-github-action@2.0.0 + with: + args: "--validate" diff --git a/.github/workflows/docs-on-release.yml b/.github/workflows/docs-on-release.yml index 5a2da1221..9ae341cd4 100644 --- a/.github/workflows/docs-on-release.yml +++ b/.github/workflows/docs-on-release.yml @@ -1,42 +1,44 @@ -name: Create versioned documentation on release +name: Build versioned documentation on release on: - release: - types: [published] + release: + types: [published] - 
workflow_dispatch: + # Allows you to run this workflow manually from the Actions tab + workflow_dispatch: jobs: - build: - environment: dev_environment - runs-on: ubuntu-latest - steps: - - name: Checkout - uses: actions/checkout@master - with: - fetch-depth: 0 # otherwise, you will failed to push refs to dest repo - - name: Set env - run: echo "RELEASE_VERSION=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV - - name: Install dependencies - run: | - sudo apt-get update; - sudo apt-get install pandoc - python -m pip install --upgrade pip - pip install .[doc] - - name: Build and Commit - uses: sphinx-notes/pages@v2 - with: - documentation_path: doc/source - target_path: ${{ env.RELEASE_VERSION }} - - name: Redirect stable to new release - run: | - echo "Redirecting stable to newly released version " $RELEASE_VERSION - rm -rf stable - ln -s $RELEASE_VERSION stable - git add stable - git commit -m "redirect stable to new version $RELEASE_VERSION" - - name: Push changes - uses: ad-m/github-push-action@master - with: - github_token: ${{ secrets.GITHUB_TOKEN }} - branch: gh-pages + build: + environment: dev_environment + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v4 + with: + fetch-depth: 0 # otherwise, you will fail to push refs to dest repo + - name: Set env + run: echo "RELEASE_VERSION=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV + - name: Install dependencies + run: | + sudo apt-get update; + sudo apt-get install pandoc + python -m pip install --upgrade pip + pip install poetry + poetry install --with docs + - name: Build and Commit + uses: sphinx-notes/pages@v2 # NOTE when switching to v3, export dependencies to requirements.txt in pyproject.toml: `poetry export --with docs --without-hashes --format=requirements.txt > docs/requirements.txt` + with: + documentation_path: doc/source + target_path: ${{ env.RELEASE_VERSION }} + - name: Redirect stable to new release + run: | + echo "Redirecting stable to newly released version " $RELEASE_VERSION + rm 
-rf stable + ln -s $RELEASE_VERSION stable + git add stable + git commit -m "redirect stable to new version $RELEASE_VERSION" + - name: Push changes + uses: ad-m/github-push-action@master + with: + github_token: ${{ secrets.GITHUB_TOKEN }} + branch: gh-pages diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml index ae99818d8..19e49550c 100644 --- a/.github/workflows/docs.yml +++ b/.github/workflows/docs.yml @@ -1,40 +1,36 @@ -# This is a basic workflow to help you get started with Actions name: Build documentation -# Controls when the workflow will run on: - # Triggers the workflow on push or pull request events but only for the master branch - push: - branches: [ master ] + push: + branches: [master] - # Allows you to run this workflow manually from the Actions tab - workflow_dispatch: + # Allows you to run this workflow manually from the Actions tab + workflow_dispatch: -# A workflow run is made up of one or more jobs that can run sequentially or in parallel jobs: - # This workflow contains a single job called "build" - build: - - # The type of runner that the job will run on - runs-on: ubuntu-latest - steps: - - name: Checkout - uses: actions/checkout@master - with: - fetch-depth: 0 # otherwise, you will failed to push refs to dest repo - - name: Install dependencies - run: | - sudo apt-get update; - sudo apt-get install pandoc - python -m pip install --upgrade pip - pip install .[doc] - - name: Build and Commit - uses: sphinx-notes/pages@v2 - with: - documentation_path: doc/source - target_path: latest - - name: Push changes - uses: ad-m/github-push-action@master - with: - github_token: ${{ secrets.GITHUB_TOKEN }} - branch: gh-pages + # This workflow contains a single job called "build" + build: + # The type of runner that the job will run on + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v4 + with: + fetch-depth: 0 # otherwise, you will fail to push refs to dest repo + - name: Install dependencies + run: | + 
sudo apt-get update; + sudo apt-get install pandoc + python -m pip install --upgrade pip + pip install poetry + poetry install --with docs + - name: Build and Commit + uses: sphinx-notes/pages@v2 # NOTE when switching to v3, export dependencies to requirements.txt in pyproject.toml: `poetry export --with docs --without-hashes --format=requirements.txt > docs/requirements.txt` + with: + documentation_path: doc/source + target_path: latest + - name: Push changes + uses: ad-m/github-push-action@master + with: + github_token: ${{ secrets.GITHUB_TOKEN }} + branch: gh-pages diff --git a/.github/workflows/publish-python-package.yml b/.github/workflows/publish-python-package.yml new file mode 100644 index 000000000..c144987c6 --- /dev/null +++ b/.github/workflows/publish-python-package.yml @@ -0,0 +1,56 @@ +# This workflow checks out a new release, builds it as a package (source and wheel) and publishes it to PyPI. + +name: Publish Package + +# Controls when the workflow will run +on: + # Workflow will run when a release has been published for the package + release: + types: + - published + + # Allows you to run this workflow manually from the Actions tab + workflow_dispatch: + +jobs: + build_and_publish_as_package: + name: Package and upload release to PyPI + runs-on: ubuntu-latest + environment: + name: pypi + url: https://pypi.org/p/kernel_tuner + permissions: + id-token: write # IMPORTANT: this permission is mandatory for trusted publishing + steps: + - uses: actions/checkout@v4 + - name: Setup Python + uses: actions/setup-python@v4 + with: + python-version: "3.11" + - name: Setup Poetry + uses: Gr1N/setup-poetry@v8 + - name: Build the source distribution and pure-Python wheel + run: | + poetry install + poetry build + ls ./dist + - name: Check that the number of wheels is as expected and there is one source distribution + run: | + SOURCES_COUNT=$(ls -lR ./dist/*.tar.gz | wc -l) + echo "Number of source distributions: $SOURCES_COUNT" + if [ "$SOURCES_COUNT" -ne 1 ]; 
then + echo "::error::Number of source distributions $SOURCES_COUNT not equal to 1" + exit 1; + fi + + EXPECTED_WHEELS_COUNT=1 + WHEELS_COUNT=$(ls -lR ./dist/*.whl | wc -l) + echo "Number of wheel distributions: $WHEELS_COUNT" + if [ "$WHEELS_COUNT" -ne "$EXPECTED_WHEELS_COUNT" ]; then + echo "::error::Number of wheel distributions $WHEELS_COUNT not equal to $EXPECTED_WHEELS_COUNT" + exit 1; + fi + - name: Publish package distributions to PyPI + uses: pypa/gh-action-pypi-publish@release/v1 + with: + skip-existing: true diff --git a/.github/workflows/python-app.yml b/.github/workflows/python-app.yml deleted file mode 100644 index f7b4c818f..000000000 --- a/.github/workflows/python-app.yml +++ /dev/null @@ -1,39 +0,0 @@ -# This workflow will install Python dependencies, run tests and lint with a single version of Python -# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions - -name: build - -on: - push: - branches: [ master ] - pull_request: - branches: [ master ] - - # Allows you to run this workflow manually from the Actions tab - workflow_dispatch: - -jobs: - build: - - runs-on: ${{ matrix.os }} - strategy: - fail-fast: false - matrix: - python-version: [3.8, 3.9, '3.10'] - os: [ubuntu-latest, macOS-latest] - - steps: - - uses: actions/checkout@v2 - - name: Set up Python ${{ matrix.python-version }} - uses: actions/setup-python@v2 - with: - python-version: ${{ matrix.python-version }} - - name: Install dependencies - run: | - python -m pip install --upgrade pip - pip install flake8 pytest - if [ -f requirements.txt ]; then pip install -r requirements.txt; fi - pip install .[dev] - - name: Test with pytest - run: | - pytest -v test diff --git a/.github/workflows/python-publish.yml b/.github/workflows/python-publish.yml deleted file mode 100644 index ef625482e..000000000 --- a/.github/workflows/python-publish.yml +++ /dev/null @@ -1,40 +0,0 @@ -# This workflow will upload a Python Package using 
Twine when a release is created -# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries - -# This workflow uses actions that are not certified by GitHub. -# They are provided by a third-party and are governed by -# separate terms of service, privacy policy, and support -# documentation. - -name: Upload Python Package - -on: - release: - types: [published] - -permissions: - contents: read - -jobs: - deploy: - - runs-on: ubuntu-latest - - steps: - - uses: actions/checkout@v3 - - name: Set up Python - uses: actions/setup-python@v3 - with: - python-version: '3.x' - - name: Install dependencies - run: | - python -m pip install --upgrade pip - pip install -U build twine - - name: Build package - run: python -m build --wheel - - name: Publish package - uses: pypa/gh-action-pypi-publish@release/v1.5 - with: - user: __token__ - password: ${{ secrets.PYPI_API_TOKEN }} - verbose: true diff --git a/.github/workflows/test-python-package.yml b/.github/workflows/test-python-package.yml new file mode 100644 index 000000000..86fbd3d40 --- /dev/null +++ b/.github/workflows/test-python-package.yml @@ -0,0 +1,42 @@ +# This workflow will use Nox to run tests and lint for the supported Python versions, and upload the test coverage data. 
+ +name: Test + +on: + push: + branches: + - master + - release/* + pull_request: + branches: + - master + + # Allows you to run this workflow manually from the Actions tab + workflow_dispatch: + +jobs: + build: + name: Test on ${{ matrix.os }} with all supported Python versions + runs-on: ${{ format('{0}-latest', matrix.os) }} # "-latest" is added here so we can use OS in the format expected by CodeCov + + strategy: + matrix: + os: [ubuntu, macos] + + steps: + - uses: actions/checkout@v4 + - name: Setup Nox + uses: fjwillemsen/setup-nox2@v3.0.0 + - name: Setup Poetry + uses: Gr1N/setup-poetry@v8 + - name: Run tests with Nox + run: | + pip install nox-poetry + nox -- skip-gpu + # - name: Upload Coverage report to CodeCov + # uses: codecov/codecov-action@v3 + # with: + # token: ${{ secrets.CODECOV_TOKEN }} + # files: ./coverage + # os: ${{ matrix.os }} + # fail_ci_if_error: false # option to specify if the CI pipeline should fail when Codecov runs into errors during upload diff --git a/.github/workflows/update-fair-software-badge.yml b/.github/workflows/update-fair-software-badge.yml index b0b6e97d8..7dbb34a35 100644 --- a/.github/workflows/update-fair-software-badge.yml +++ b/.github/workflows/update-fair-software-badge.yml @@ -1,34 +1,31 @@ -name: fair-software +name: FAIR software badge creation on: - push: - branches: [ master ] - pull_request: - branches: [ master ] + push: + branches: [master] - # Allows you to run this workflow manually from the Actions tab - workflow_dispatch: + # Allows you to run this workflow manually from the Actions tab + workflow_dispatch: jobs: - verify: - name: "fair-software badge check" - runs-on: ubuntu-latest - steps: - - - name: Checkout repo - uses: actions/checkout@v2 - - - uses: benvanwerkhoven/howfairis-github-action@main - name: Measure compliance with fair-software.eu recommendations - env: - PYCHARM_HOSTED: "Trick colorama into displaying colored output" - with: - MY_REPO_URL: "https://github.com/${{ github.repository 
}}" - - - name: Commit changes - uses: EndBug/add-and-commit@v9 - with: - author_name: GitHub actions user - author_email: action@github.com - message: 'Update README' - add: 'README.*' + verify: + name: "fair-software badge check" + runs-on: ubuntu-latest + steps: + - name: Checkout repo + uses: actions/checkout@v4 + + - uses: benvanwerkhoven/howfairis-github-action@main + name: Measure compliance with fair-software.eu recommendations + env: + PYCHARM_HOSTED: "Trick colorama into displaying colored output" + with: + MY_REPO_URL: "https://github.com/${{ github.repository }}" + + - name: Commit changes + uses: EndBug/add-and-commit@v9 + with: + author_name: GitHub actions user + author_email: action@github.com + message: "Update README" + add: "README.*" diff --git a/.gitignore b/.gitignore index ffb292e58..37e18d801 100644 --- a/.gitignore +++ b/.gitignore @@ -1,3 +1,7 @@ +### Project ### +noxenv.txt + +### Python ### *.pyc __pycache__ doc/build/* @@ -17,6 +21,12 @@ examples/cuda/output deploy_key *.mod temp_*.* +.python-version +.nox + +### Visual Studio Code ### +!.vscode/settings.json +!.vscode/extensions.json ### macOS ### # General diff --git a/.vscode/extensions.json b/.vscode/extensions.json new file mode 100644 index 000000000..035235be3 --- /dev/null +++ b/.vscode/extensions.json @@ -0,0 +1,14 @@ +{ + // See https://go.microsoft.com/fwlink/?LinkId=827846 to learn about workspace recommendations. + // Extension identifier format: ${publisher}.${name}. Example: vscode.csharp + // List of extensions which should be recommended for users of this workspace. + "recommendations": [ + "ms-python.python", + "ms-python.black-formatter", + "charliermarsh.ruff", + "bungcip.better-toml", + "njpwerner.autodocstring", + ], + // List of extensions recommended by VS Code that should not be recommended for users of this workspace. 
+ "unwantedRecommendations": [] +} \ No newline at end of file diff --git a/.vscode/settings.json b/.vscode/settings.json new file mode 100755 index 000000000..3a4d473dd --- /dev/null +++ b/.vscode/settings.json @@ -0,0 +1,30 @@ +{ + "[json]": { + "editor.defaultFormatter": "vscode.json-language-features" + }, + "[jsonc]": { + "editor.defaultFormatter": "vscode.json-language-features" + }, + "[python]": { + "editor.defaultFormatter": "ms-python.black-formatter", + "editor.formatOnType": true, + "editor.formatOnSave": false, + "editor.codeActionsOnSave": { + "source.fixAll": true, + "source.organizeImports": true, + } + }, + "black-formatter.args": [ + "--config=pyproject.toml" + ], + "ruff.args": [ + "--config=pyproject.toml" + ], + "autoDocstring.docstringFormat": "google-notypes", + "esbonio.sphinx.confDir": "", + "python.testing.pytestArgs": [ + "test" + ], + "python.testing.unittestEnabled": false, + "python.testing.pytestEnabled": true, +} diff --git a/CHANGELOG.md b/CHANGELOG.md index 5d63717ad..a9cbc75dc 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,7 +2,18 @@ All notable changes to this project will be documented in this file. This project adheres to [Semantic Versioning](http://semver.org/). 
-## Unreleased +## [1.0.0] - Unreleased +- Major speedup due to new parser and using revamped python-constraint for searchspace building +- Implemented ability to use `PySMT` and `ATF` for searchspace building +- Added Poetry for dependency and build management +- Switched from `setup.py` and `setup.cfg` to `pyproject.toml` for centralized metadata, added relevant tests +- Updated GitHub Action workflows to use Poetry +- Updated dependencies, most notably NumPy is no longer version-locked as scikit-opt is no longer a dependency +- Documentation now uses `pyproject.toml` metadata, minor fixes and changes to be compatible with updated dependencies +- Set up Nox for testing on all supported Python versions in isolated environments +- Added linting information, VS Code settings and recommendations +- Discontinued use of `OrderedDict`, as all dictionaries in the Python versions used are already ordered +- Dropped Python 3.7 support ## [0.4.5] - 2023-06-01 ### Added diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst index 6e977d4e5..12ab0aa1b 100644 --- a/CONTRIBUTING.rst +++ b/CONTRIBUTING.rst @@ -2,6 +2,9 @@ Contribution guide ================== Thank you for considering to contribute to Kernel Tuner! +.. role:: bash(code) + :language: bash + Reporting Issues ---------------- Not all contributions are code, creating an issue also helps us to improve. When you create an issue about a problem, please ensure the following: @@ -9,66 +12,102 @@ Not all contributions are code, creating an issue also helps us to improve. When * Describe what you expected to happen. * If possible, include a minimal example to help us reproduce the issue. * Describe what actually happened, including the output of any errors printed. -* List the version of Python, CUDA or OpenCL, and C compiler, if applicable. +* List the version of Python, CUDA or OpenCL, and C compiler, if applicable. 
Contributing Code ----------------- For contributing code to Kernel Tuner please select an issue to work on or create a new issue to propose a change or addition. For significant changes, it is required to first create an issue and discuss the proposed changes. Then fork the repository, create a branch, one per change or addition, and create a pull request. -Kernel Tuner follows the Google Python style guide, with Sphinxdoc docstrings for module public functions. Please use `pylint` to check your Python changes. +Kernel Tuner follows the Google Python style guide, with Sphinxdoc docstrings for module public functions. Before creating a pull request please ensure the following: -* You have written unit tests to test your additions and all unit tests pass +* You are working in an up-to-date development environment +* You have written unit tests to test your additions and all unit tests pass (run :bash:`nox`). If you do not have the required hardware, you can run :bash:`nox -- skip-gpu`, or :bash:`skip-cuda`, :bash:`skip-hip`, :bash:`skip-opencl`. * The examples still work and produce the same (or better) results -* The code is compatible with Python 3.5 or newer -* You have run `pylint` to check your code -* An entry about the change or addition is created in CHANGELOG.md +* An entry about the change or addition is created in :bash:`CHANGELOG.md` * Any matching entries in the roadmap.md are updated/removed If you are in doubt on where to put your additions to the Kernel Tuner, please have a look at the `design documentation `__, or discuss it in the issue regarding your additions. -Development setup ------------------ -You can install the packages required to run the tests using: - -.. code-block:: bash - pip install -e .[dev] +Development environment +----------------------- +The following steps help you set up a development environment. + +Local setup +^^^^^^^^^^^ +Steps with :bash:`sudo` access (e.g. on a local device): + +#. 
Clone the git repository to the desired location: :bash:`git clone https://github.com/KernelTuner/kernel_tuner.git`, and :bash:`cd` to it. +#. Install `pyenv `__: :bash:`curl https://pyenv.run | bash` (remember to add the output to :bash:`.bash_profile` and :bash:`.bashrc` as specified). + * [Optional] set up a local virtual environment in the folder: :bash:`pyenv virtualenv kerneltuner` (or whatever environment name you prefer). +#. Install the required Python versions: :bash:`pyenv install 3.8 3.9 3.10 3.11`. +#. Set the Python versions so they can be found: :bash:`pyenv global 3.8 3.9 3.10 3.11` (replace :bash:`global` with :bash:`local` when using the virtualenv). +#. `Install Poetry `__: :bash:`curl -sSL https://install.python-poetry.org | python3 -`. +#. Make sure that non-Python dependencies are installed if applicable, such as CUDA, OpenCL or HIP. This is described in `Installation `__. +#. Install the project, dependencies and extras: :bash:`poetry install --with test,docs -E cuda -E opencl -E hip`, leaving out :bash:`-E cuda`, :bash:`-E opencl` or :bash:`-E hip` if this does not apply on your system. To go all-out, use :bash:`--all-extras`. + * Depending on the environment, it may be necessary or convenient to install extra packages such as :bash:`cupy-cuda11x` / :bash:`cupy-cuda12x`, and :bash:`cuda-python`. These are currently not defined as dependencies for kernel-tuner, but can be part of tests. + * Do not forget to make sure the paths are set correctly. If you're using CUDA, the desired CUDA version should be in :bash:`$PATH`, :bash:`$LD_LIBRARY_PATH` and :bash:`$CPATH`. +#. Check if the environment is set up correctly by running :bash:`pytest`. All tests should pass, except if one or more extras have been left out in the previous step, in which case these tests will skip gracefully. + + +Cluster setup +^^^^^^^^^^^^^ +Steps without :bash:`sudo` access (e.g. on a cluster): + +#. 
Clone the git repository to the desired location: :bash:`git clone https://github.com/KernelTuner/kernel_tuner.git`. +#. Install Conda with `Mamba `__ (for better performance) or `Miniconda `__ (for traditional minimal Conda). + * [Optional] both Mamba and Miniconda can be automatically activated via :bash:`~/.bashrc`. Do not forget to add these (usually mentioned at the end of the installation). + * Exit the shell and re-enter to make sure Conda is available, :bash:`cd` to the kernel tuner directory. + * [Optional] update Conda if available before continuing: :bash:`conda update -n base -c conda-forge conda`. +#. Set up a virtual environment: :bash:`conda create --name kerneltuner python=3.11` (or whatever Python version and environment name you prefer). +#. Activate the virtual environment: :bash:`conda activate kerneltuner`. + * [Optional] to use the correct environment by default, execute :bash:`conda config --set auto_activate_base false`, and add :bash:`conda activate kerneltuner` to your :bash:`.bash_profile` or :bash:`.bashrc`. + * Make sure that non-Python dependencies are loaded if applicable, such as CUDA, OpenCL or HIP. On most clusters it is possible to load (or unload) modules (e.g. CUDA, OpenCL / ROCM). For more information, see `Installation `__. + * Do not forget to make sure the paths are set correctly. If you're using CUDA, the desired CUDA version should be in :bash:`$PATH`, :bash:`$LD_LIBRARY_PATH` and :bash:`$CPATH`. + * [Optional] the loading of modules and setting of paths is likely convenient to put in your :bash:`.bash_profile` or :bash:`.bashrc`. +#. `Install Poetry `__: :bash:`curl -sSL https://install.python-poetry.org | python3 -`. +#. Install the project, dependencies and extras: :bash:`poetry install --with test,docs -E cuda -E opencl -E hip`, leaving out :bash:`-E cuda`, :bash:`-E opencl` or :bash:`-E hip` if this does not apply on your system. To go all-out, use :bash:`--all-extras`. 
+ * If you run into "keyring" or other seemingly unrelated issues, this is a known issue with Poetry on some systems. Run :bash:`pip install keyring`, then :bash:`python3 -m keyring --disable`. + * Depending on the environment, it may be necessary or convenient to install extra packages such as :bash:`cupy-cuda11x` / :bash:`cupy-cuda12x`, and :bash:`cuda-python`. These are currently not defined as dependencies for kernel-tuner, but can be part of tests. +#. Check if the environment is set up correctly by running :bash:`pytest`. All tests should pass, unless you're not on a GPU node or one or more extras have been left out in the previous step, in which case those tests will be skipped gracefully. +#. Set Nox to use the correct backend: + * If you used Mamba in step 2: :bash:`echo "mamba" > noxenv.txt`. + * If you used Miniconda or Anaconda in step 2: :bash:`echo "conda" > noxenv.txt`. + * If you alternatively set up with Venv: :bash:`echo "venv" > noxenv.txt`. + * If you set up with Virtualenv, do not create this file, as this is already the default. + * Be sure to adjust or remove this file when changing backends. -After this command you should be able to run the tests and build the documentation. -See below on how to do that. The ``-e`` flag installs the package in *development mode*. -This means files are not copied, but linked to, such that your installation tracks -changes in the source files. Running tests ------------- -To run the tests you can use ``pytest -v test/`` in the top-level directory. +To run the tests you can use :bash:`nox` (to run against all supported Python versions in isolated environments) and :bash:`pytest` (to run against the local Python version) in the top-level directory. +It's also possible to invoke pytest from the 'Testing' tab in Visual Studio Code. +The isolated environments can take up to 1 gigabyte in size, so users tight on disk space can run :bash:`nox` with the :bash:`small-disk` option. 
This removes the other environment caches before each session is run. Note that tests that require PyCuda and/or a CUDA capable GPU will be skipped if these are not installed/present. The same holds for tests that require PyOpenCL, Cupy, Nvidia CUDA. -Contributions you make to the Kernel Tuner should not break any of the tests -even if you cannot run them locally. +Contributions you make to the Kernel Tuner should not break any of the tests even if you cannot run them locally. -The examples can be seen as *integration tests* for the Kernel Tuner. Note that -these will also use the installed package. +The examples can be seen as *integration tests* for the Kernel Tuner. +Note that these will also use the installed package. Building documentation ---------------------- Documentation is located in the ``doc/`` directory. This is where you can type ``make html`` to generate the html pages in the ``doc/build/html`` directory. The source files used for building the documentation are located in -``doc/source``. +``doc/source``. To locally inspect the documentation before committing you can browse through the documentation pages generated locally in ``doc/build/html``. -To make sure you have all the dependencies required to build the documentation, -you can install the extras using ``pip install -e .[doc]``. Pandoc is also required, -you can install pandoc on ubuntu using ``sudo apt install pandoc``, for different -setups please see `pandoc's install documentation `__. +Make sure you have all the dependencies required to build the documentation, at least those in ``--with docs``. +Pandoc is also required; you can install it on Ubuntu using ``sudo apt install pandoc`` and on Mac using ``brew install pandoc``. +For other setups, please see `pandoc's install documentation `__. The documentation pages hosted online are built automatically using GitHub actions. The documentation pages corresponding to the master branch are hosted in /latest/. 
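The path setup mentioned in the contribution steps above can be sketched as a few lines for your :bash:`.bash_profile` or :bash:`.bashrc`. This is only a sketch: the :bash:`/usr/local/cuda` prefix is an assumption, so point :bash:`CUDA_HOME` at wherever your CUDA toolkit actually lives.

```shell
# Hypothetical CUDA install prefix; adjust CUDA_HOME to your actual toolkit location.
export CUDA_HOME=/usr/local/cuda
# Compiler (nvcc) lookup:
export PATH="$CUDA_HOME/bin:$PATH"
# Runtime library lookup:
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"
# Header lookup, used by builds such as PyCUDA:
export CPATH="$CUDA_HOME/include:${CPATH:-}"
```

On a cluster, the equivalent of these lines is usually handled by :bash:`module load`, as noted in the cluster setup steps.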
diff --git a/INSTALL.rst b/INSTALL.rst index 53fef4926..0ef3c951c 100644 --- a/INSTALL.rst +++ b/INSTALL.rst @@ -1,17 +1,17 @@ Installation ============ -The Kernel Tuner requires several packages to be installed. First of all, you need a -working Python version, several Python packages, and optionally CUDA and/or OpenCL +The Kernel Tuner requires several packages to be installed. First of all, you need a +working Python version, several Python packages, and optionally CUDA and/or OpenCL installations. All of this is explained in detail in this guide. +For comprehensive step-by-step instructions on setting up a development environment, see `Contributing `__. + Python ------ -You need a Python installation. I recommend using Python 3 and -installing it with `Miniconda `__. - +You need a Python installation. We recommend using Python 3 and installing it with `Miniconda `__. Linux users could type the following to download and install Python 3 using Miniconda: .. code-block:: bash @@ -19,17 +19,16 @@ Linux users could type the following to download and install Python 3 using Mini wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh bash Miniconda3-latest-Linux-x86_64.sh -You are of course also free to use your own Python installation, and the Kernel Tuner -is developed to be fully compatible with Python 3.6 and newer. +You are of course also free to use your own Python installation, and the Kernel Tuner is developed to be fully compatible with Python 3.8 and newer. Installing Python Packages -------------------------- -Note that when you are using a native Python installation, the `pip` command used -Kernel Tuner and its dependencies require `sudo` rights for system wide installation. +Note that when you are using a native Python installation, the `pip` command used to install +Kernel Tuner and its dependencies requires `sudo` rights for system-wide installation. Sudo rights are typically not required when using Miniconda or virtual environments. 
-You could also use e.g. the `--user` or `--prefix` option of `pip` to install into +You could also use e.g. the `--user` or `--prefix` option of `pip` to install into your home directory, this requires that your home directory is on your `$PYTHONPATH` environment variable (see for further details the pip documentation). @@ -45,15 +44,15 @@ There are also optional dependencies, explained below. CUDA and PyCUDA --------------- -Installing CUDA and PyCUDA is optional, because you may want to only use Kernel -Tuner for tuning OpenCL or C kernels. +Installing CUDA and PyCUDA is optional, because you may want to only use Kernel +Tuner for tuning OpenCL or C kernels. -If you want to use the Kernel Tuner to tune -CUDA kernels you will first need to install the CUDA toolkit -(https://developer.nvidia.com/cuda-toolkit). A recent version of the -CUDA toolkit (and the PyCUDA Python bindings for CUDA) are -recommended (older version may work, but may not support all features of -Kernel Tuner). +If you want to use the Kernel Tuner to tune +CUDA kernels you will first need to install the CUDA toolkit +(https://developer.nvidia.com/cuda-toolkit). A recent version of the +CUDA toolkit (and the PyCUDA Python bindings for CUDA) are +recommended (older versions may work, but may not support all features of +Kernel Tuner). It's very important that you install the CUDA toolkit before trying to install PyCuda. @@ -72,7 +71,7 @@ Or you could install Kernel Tuner and PyCUDA together if you haven't done so alr If you run into trouble with installing PyCuda, make sure you have CUDA installed first. Also make sure that the Python package Numpy is already installed, e.g. using `pip install numpy`. -If you retry the ``pip install pycuda`` command, you may need to use the +If you retry the ``pip install pycuda`` command, you may need to use the ``--no-cache-dir`` option to ensure the pycuda installation really starts over and not continues from an installation that is failing. 
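The advice above about installing the CUDA toolkit before PyCUDA can be scripted as a quick pre-flight check. This is a sketch; it only assumes that :bash:`nvcc`, the CUDA compiler, ends up on :bash:`$PATH` once the toolkit is installed.

```shell
# Check whether the CUDA compiler is reachable before attempting to build PyCUDA;
# the PyCUDA build will fail without a working CUDA toolkit.
if command -v nvcc >/dev/null 2>&1; then
    nvcc --version
else
    echo "CUDA toolkit not found on PATH; install it before 'pip install pycuda'"
fi
```

Running this check first avoids the half-finished PyCUDA installs that make the ``--no-cache-dir`` retry necessary.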
@@ -82,8 +81,8 @@ If this fails, I recommend to see the PyCuda installation guide (https://wiki.ti OpenCL and PyOpenCL ------------------- -Before we can install PyOpenCL you'll need an OpenCL compiler. There are several -OpenCL compilers available depending on the OpenCL platform you want to your +Before we can install PyOpenCL you'll need an OpenCL compiler. There are several +OpenCL compilers available depending on the OpenCL platform you want your code to run on. * `AMD APP SDK `__ @@ -94,7 +93,7 @@ code to run on. You can also look at this `OpenCL Installation Guide `__ for PyOpenCL. -As with the CUDA toolkit, recent versions of one or more of the above OpenCL SDK's and +As with the CUDA toolkit, recent versions of one or more of the above OpenCL SDKs and PyOpenCL are recommended to support all features of the Kernel Tuner. After you've installed your OpenCL compiler of choice you can install PyOpenCL using: @@ -114,7 +113,7 @@ If this fails, please see the PyOpenCL installation guide (https://wiki.tiker.ne HIP and PyHIP ------------- -Before we can install PyHIP, you'll need to have the HIP runtime and compiler installed on your system. +Before we can install PyHIP, you'll need to have the HIP runtime and compiler installed on your system. The HIP compiler is included as part of the ROCm software stack. Here is AMD's installation guide: * `ROCm Documentation: HIP Installation Guide `__ @@ -134,34 +133,41 @@ Alternatively, you can install PyHIP from the source code. First, clone the repo Then, navigate to the repository directory and run the following command to install: .. code-block:: bash - + python setup.py install Installing the git version -------------------------- -You can also install from the git repository. This way you also get the -examples. +You can also install from the git repository. This way you also get the examples. +Please note that this will install all required dependencies in the current environment. 
+For step-by-step instructions on setting up a development environment, see `Contributing `__. .. code-block:: bash git clone https://github.com/benvanwerkhoven/kernel_tuner.git cd kernel_tuner - pip install . + curl -sSL https://install.python-poetry.org | python3 - + poetry install -You can install Kernel Tuner with several optional dependencies, the full list is: +You can install Kernel Tuner with several optional dependencies. +Here we differentiate between development and runtime dependencies. +The development dependencies are ``test`` and ``docs``, and can be installed by appending e.g. ``--with test,docs``. +The runtime dependencies are: - `cuda`: install pycuda along with kernel_tuner - `opencl`: install pyopencl along with kernel_tuner - `hip`: install pyhip along with kernel_tuner -- `doc`: installs packages required to build the documentation - `tutorial`: install packages required to run the guides -- `dev`: install everything you need to start development on Kernel Tuner + +These can be installed by appending e.g. ``-E cuda -E opencl -E hip``. +If you want to go all-out, use ``--all-extras``. For example, use: -``` -pip install .[dev,cuda,opencl] -``` +.. code-block:: bash + + poetry install --with test,docs -E cuda -E opencl + To install Kernel Tuner along with all the packages required for development. diff --git a/README.rst b/README.rst index 2f8e8bcc0..9f1739f90 100644 --- a/README.rst +++ b/README.rst @@ -3,32 +3,33 @@ Kernel Tuner |Build Status| |CodeCov Badge| |PyPi Badge| |Zenodo Badge| |SonarCloud Badge| |OpenSSF Badge| |FairSoftware Badge| -Kernel Tuner simplifies the software development of optimized and auto-tuned GPU programs, by enabling Python-based unit testing of GPU code and making it easy to develop scripts for auto-tuning GPU kernels. This also means no extensive changes and no new dependencies are required in the kernel code. The kernels can still be compiled and used as normal from any host programming language. 
+Kernel Tuner simplifies the software development of optimized and auto-tuned GPU programs, by enabling Python-based unit testing of GPU code and making it easy to develop scripts for auto-tuning GPU kernels. +This also means no extensive changes and no new dependencies are required in the kernel code. +The kernels can still be compiled and used as normal from any host programming language. Kernel Tuner provides a comprehensive solution for auto-tuning GPU programs, supporting auto-tuning of user-defined parameters in both host and device code, supporting output verification of all benchmarked kernels during tuning, as well as many optimization strategies to speed up the tuning process. Documentation ------------- -The full documentation is available -`here `__. +The full documentation is available `here `__. Installation ------------ The easiest way to install the Kernel Tuner is using pip: -To tune CUDA kernels: +To tune CUDA kernels (`detailed instructions `__): - First, make sure you have the `CUDA Toolkit `_ installed - Then type: ``pip install kernel_tuner[cuda]`` -To tune OpenCL kernels: +To tune OpenCL kernels (`detailed instructions `__): - First, make sure you have an OpenCL compiler for your intended OpenCL platform - Then type: ``pip install kernel_tuner[opencl]`` -To tune HIP kernels: +To tune HIP kernels (`detailed instructions `__): - First, make sure you have an HIP runtime and compiler installed - Then type: ``pip install kernel_tuner[hip]`` @@ -38,7 +39,7 @@ Or all: - ``pip install kernel_tuner[cuda,opencl,hip]`` More information about how to install Kernel Tuner and its -dependencies can be found in the `installation guide +dependencies can be found in the `installation guide `__. Example usage @@ -83,12 +84,12 @@ The exact same Python code can be used to tune an OpenCL kernel: } """ -The Kernel Tuner will detect the kernel language and select the right compiler and -runtime. 
For every kernel in the parameter space, the Kernel Tuner will insert C -preprocessor defines for the tunable parameters, compile, and benchmark the kernel. The -timing results will be printed to the console, but are also returned by tune_kernel to -allow further analysis. Note that this is just the default behavior, what and how -tune_kernel does exactly is controlled through its many `optional arguments +The Kernel Tuner will detect the kernel language and select the right compiler and +runtime. For every kernel in the parameter space, the Kernel Tuner will insert C +preprocessor defines for the tunable parameters, compile, and benchmark the kernel. The +timing results will be printed to the console, but are also returned by tune_kernel to +allow further analysis. Note that this is just the default behavior; exactly what +tune_kernel does, and how, is controlled through its many `optional arguments `__. You can find many - more extensive - example codes, in the @@ -99,9 +100,9 @@ documentation pages `__. Tuning host and kernel code @@ -172,7 +173,7 @@ If you use Kernel Tuner in research or research software, please cite the most r year = {2021}, url = {https://arxiv.org/abs/2111.14991} } - + @article{schoonhoven2022benchmarking, title={Benchmarking optimization algorithms for auto-tuning GPU kernels}, author={Schoonhoven, Richard and van Werkhoven, Ben and Batenburg, K Joost}, @@ -196,7 +197,7 @@ If you use Kernel Tuner in research or research software, please cite the most r :target: https://github.com/KernelTuner/kernel_tuner/actions/workflows/python-app.yml .. |CodeCov Badge| image:: https://codecov.io/gh/KernelTuner/kernel_tuner/branch/master/graph/badge.svg :target: https://codecov.io/gh/KernelTuner/kernel_tuner -.. |PyPi Badge| image:: https://img.shields.io/pypi/v/kernel_tuner.svg?colorB=blue +.. |PyPi Badge| image:: https://img.shields.io/pypi/v/kernel_tuner.svg?colorB=blue :target: https://pypi.python.org/pypi/kernel_tuner/ .. 
|Zenodo Badge| image:: https://zenodo.org/badge/54894320.svg :target: https://zenodo.org/badge/latestdoi/54894320 diff --git a/doc/source/conf.py b/doc/source/conf.py index 65fe28d1f..8fd48f56f 100644 --- a/doc/source/conf.py +++ b/doc/source/conf.py @@ -12,69 +12,79 @@ # All configuration values have a default; values that are commented out # serve to show the default. -import sys import os +import sys +import time + +from sphinx_pyproject import SphinxConfig # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. -sys.path.insert(0, os.path.abspath('../..')) +sys.path.insert(0, os.path.abspath("../..")) + +# -- Project information ----------------------------------------------------- +# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information + +# import data from pyproject.toml using https://github.com/sphinx-toolbox/sphinx-pyproject +# additional data can be added with `[tool.sphinx-pyproject]` and retrieved with `config['']`. +config = SphinxConfig( + "../../pyproject.toml", style="poetry" ) # add `, globalns=globals()` to directly insert in namespace +year = time.strftime("%Y") +startyear = "2016" + +project = "Kernel Tuner" +# author = config.author # this is a list of all authors +author = "Ben van Werkhoven" +copyright = f"{startyear}-{year}, {author}" +version = config.version # short version (e.g. 2.6) +release = config.version # full version, including alpha/beta/rc tags. (e.g. 2.6rc1) -# -- General configuration ------------------------------------------------ +# The version info for the project you're documenting, acts as replacement for +# |version| and |release|, also used in various other places throughout the +# built documents. +version = config.version # short version (e.g. 
2.6) +release = config.version # full version, including alpha/beta/rc tags. (e.g. 2.6rc1) +# -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. -#needs_sphinx = '1.0' +needs_sphinx = "7.1" # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ - 'nbsphinx', - 'sphinx.ext.autodoc', - 'sphinx.ext.mathjax' + "nbsphinx", + "sphinx.ext.autodoc", + "sphinx.ext.mathjax", + "sphinx.ext.napoleon", ] # Add any paths that contain templates here, relative to this directory. -templates_path = ['_templates'] +templates_path = ["_templates"] # The suffix(es) of source filenames. # You can specify multiple suffix as a list of string: # source_suffix = ['.rst', '.md'] -source_suffix = '.rst' +source_suffix = ".rst" # The encoding of source files. -#source_encoding = 'utf-8-sig' +# source_encoding = 'utf-8-sig' # The master toctree document. -master_doc = 'contents' - -# General information about the project. -project = u'Kernel Tuner' -copyright = u'2016, Ben van Werkhoven' -author = u'Ben van Werkhoven' - -# The version info for the project you're documenting, acts as replacement for -# |version| and |release|, also used in various other places throughout the -# built documents. -# -# The short X.Y version. -version = u'0.4.5' -# The full version, including alpha/beta/rc tags. -release = u'0.4.5' +master_doc = "contents" # The language for content autogenerated by Sphinx. Refer to documentation -# for a list of supported languages. -# -# This is also used if you do content translation via gettext catalogs. -# Usually you set "language" from the command line for these cases. -language = None +# for a list of supported languages (https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-language). 
+language = "en" # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: -#today = '' +# today = '' # Else, today_fmt is used as the format for a strftime call. -#today_fmt = '%B %d, %Y' +# today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. @@ -83,27 +93,27 @@ # The reST default role (used for this markup: `text`) to use for all # documents. -#default_role = None +# default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. -#add_function_parentheses = True +# add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). -#add_module_names = True +# add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. -#show_authors = False +# show_authors = False # The name of the Pygments (syntax highlighting) style to use. -pygments_style = 'sphinx' +pygments_style = "sphinx" # A list of ignored prefixes for module index sorting. -#modindex_common_prefix = [] +# modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. -#keep_warnings = False +# keep_warnings = False # If true, `todo` and `todoList` produce output, else they produce nothing. todo_include_todos = False @@ -120,162 +130,155 @@ # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. -#html_theme_options = {} +# html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. 
-#html_theme_path = [] +# html_theme_path = [] html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] html_context = { - "display_github": True, # Integrate GitHub - "github_user": "KernelTuner", # Username - "github_repo": "kernel_tuner", # Repo name - "github_version": "master", # Version - "conf_py_path": "/doc/source/", # Path in the checkout to the docs root + "display_github": True, # Integrate GitHub + "github_user": "KernelTuner", # Username + "github_repo": "kernel_tuner", # Repo name + "github_version": "master", # Version + "conf_py_path": "/doc/source/", # Path in the checkout to the docs root } # The name for this set of Sphinx documents. # " v documentation" by default. -#html_title = u'kernel_tuner v0.0.1' +# html_title = u'kernel_tuner v0.0.1' # A shorter title for the navigation bar. Default is the same as html_title. -#html_short_title = None +# html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. -#html_logo = None +# html_logo = None # The name of an image file (relative to this directory) to use as a favicon of # the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. -#html_favicon = None +# html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". -html_static_path = ['_static'] +# html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. -#html_extra_path = [] +# html_extra_path = [] # If not None, a 'Last updated on:' timestamp is inserted at every page # bottom, using the given strftime format. # The empty string is equivalent to '%b %d, %Y'. 
-#html_last_updated_fmt = None +# html_last_updated_fmt = None # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. -#html_use_smartypants = True +# html_use_smartypants = True # Custom sidebar templates, maps document names to template names. -#html_sidebars = {} +# html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. -#html_additional_pages = {} +# html_additional_pages = {} # If false, no module index is generated. -#html_domain_indices = True +# html_domain_indices = True # If false, no index is generated. -#html_use_index = True +# html_use_index = True # If true, the index is split into individual pages for each letter. -#html_split_index = False +# html_split_index = False # If true, links to the reST sources are added to the pages. -#html_show_sourcelink = True +# html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. -#html_show_sphinx = True +# html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. -#html_show_copyright = True +# html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. -#html_use_opensearch = '' +# html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). -#html_file_suffix = None +# html_file_suffix = None # Language to be used for generating the HTML full-text search index. # Sphinx supports the following languages: # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja' # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh' -#html_search_language = 'en' +# html_search_language = 'en' # A dictionary with options for the search language support, empty by default. # 'ja' uses this config value. 
# 'zh' user can custom change `jieba` dictionary path. -#html_search_options = {'type': 'default'} +# html_search_options = {'type': 'default'} # The name of a javascript file (relative to the configuration directory) that # implements a search results scorer. If empty, the default will be used. -#html_search_scorer = 'scorer.js' +# html_search_scorer = 'scorer.js' # Output file base name for HTML help builder. -htmlhelp_basename = 'kernel_tunerdoc' +htmlhelp_basename = "kernel_tunerdoc" # -- Options for LaTeX output --------------------------------------------- latex_elements = { -# The paper size ('letterpaper' or 'a4paper'). -#'papersize': 'letterpaper', - -# The font size ('10pt', '11pt' or '12pt'). -#'pointsize': '10pt', - -# Additional stuff for the LaTeX preamble. -#'preamble': '', - -# Latex figure (float) alignment -#'figure_align': 'htbp', + # The paper size ('letterpaper' or 'a4paper'). + #'papersize': 'letterpaper', + # The font size ('10pt', '11pt' or '12pt'). + #'pointsize': '10pt', + # Additional stuff for the LaTeX preamble. + #'preamble': '', + # Latex figure (float) alignment + #'figure_align': 'htbp', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ - (master_doc, 'kernel_tuner.tex', u'Kernel Tuner Documentation', - u'Ben van Werkhoven', 'manual'), + (master_doc, "kernel_tuner.tex", "Kernel Tuner Documentation", "Ben van Werkhoven", "manual"), ] # The name of an image file (relative to this directory) to place at the top of # the title page. -#latex_logo = None +# latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. -#latex_use_parts = False +# latex_use_parts = False # If true, show page references after internal links. -#latex_show_pagerefs = False +# latex_show_pagerefs = False # If true, show URL addresses after external links. 
-#latex_show_urls = False +# latex_show_urls = False # Documents to append as an appendix to all manuals. -#latex_appendices = [] +# latex_appendices = [] # If false, no module index is generated. -#latex_domain_indices = True +# latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). -man_pages = [ - (master_doc, 'kernel_tuner', u'Kernel Tuner Documentation', - [author], 1) -] +man_pages = [(master_doc, "kernel_tuner", "Kernel Tuner Documentation", [author], 1)] # If true, show URL addresses after external links. -#man_show_urls = False +# man_show_urls = False # -- Options for Texinfo output ------------------------------------------- @@ -284,23 +287,28 @@ # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ - (master_doc, 'kernel_tuner', u'Kernel Tuner Documentation', - author, 'kernel_tuner', 'A simple CUDA/OpenCL Auto-Tuner in Python', - 'Miscellaneous'), + ( + master_doc, + "kernel_tuner", + "Kernel Tuner Documentation", + author, + "kernel_tuner", + "A simple CUDA/OpenCL Auto-Tuner in Python", + "Miscellaneous", + ), ] # Documents to append as an appendix to all manuals. -#texinfo_appendices = [] +# texinfo_appendices = [] # If false, no module index is generated. -#texinfo_domain_indices = True +# texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. -#texinfo_show_urls = 'footnote' +# texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. 
-#texinfo_no_detailmenu = False - +# texinfo_no_detailmenu = False -nbsphinx_execute = 'never' +nbsphinx_execute = "never" diff --git a/doc/source/design.rst b/doc/source/design.rst index 4ca515e26..7b84061ea 100644 --- a/doc/source/design.rst +++ b/doc/source/design.rst @@ -5,31 +5,31 @@ Design documentation ==================== -This section provides detailed information about the design and internals +This section provides detailed information about the design and internals of the Kernel Tuner. **This information is mostly relevant for developers.** -The Kernel Tuner is designed to be extensible and support -different search and execution strategies. The current architecture of +The Kernel Tuner is designed to be extensible and support +different search and execution strategies. The current architecture of the Kernel Tuner can be seen as: .. image:: architecture.png :width: 500pt -At the top we have the kernel code and the Python script that tunes it, +At the top we have the kernel code and the Python script that tunes it, which uses any of the main functions exposed in the user interface. -The strategies are responsible for iterating over and searching through -the search space. The default strategy is ``brute_force``, which -iterates over all valid kernel configurations in the search space. -``random_sample`` simply takes a random sample of the search space. More -advanced strategies are continuously being implemented and improved in +The strategies are responsible for iterating over and searching through +the search space. The default strategy is ``brute_force``, which +iterates over all valid kernel configurations in the search space. +``random_sample`` simply takes a random sample of the search space. More +advanced strategies are continuously being implemented and improved in Kernel Tuner. The full list of supported strategies and how to use these is explained in the :doc:`user-api`, see the options ``strategy`` and ``strategy_options``. 
-The runners are responsible for compiling and benchmarking the kernel +The runners are responsible for compiling and benchmarking the kernel configurations selected by the strategy. The sequential runner is currently -the only supported runner, which does exactly what its name says. It compiles +the only supported runner, which does exactly what its name says. It compiles and benchmarks configurations using a single sequential Python process. Other runners are foreseen in future releases. @@ -37,26 +37,26 @@ The runners are implemented on top of the core, which implements a high-level *Device Interface*, which wraps all the functionality for compiling and benchmarking kernel configurations based on the low-level *Device Function Interface*. -Currently, we have -five different implementations of the device function interface, which -basically abstracts the different backends into a set of simple -functions such as ``ready_argument_list`` which allocates GPU memory and -moves data to the GPU, and functions like ``compile``, ``benchmark``, or -``run_kernel``. The functions in the core are basically the main +Currently, we have +five different implementations of the device function interface, which +basically abstracts the different backends into a set of simple +functions such as ``ready_argument_list`` which allocates GPU memory and +moves data to the GPU, and functions like ``compile``, ``benchmark``, or +``run_kernel``. The functions in the core are basically the main building blocks for implementing runners. The observers are explained in :ref:`observers`. -At the bottom, the backends are shown. +At the bottom, the backends are shown. PyCUDA, CuPy, cuda-python, PyOpenCL and PyHIP are for tuning either CUDA, OpenCL, or HIP kernels. -The C -Functions implementation can actually call any compiler, typically NVCC -or GCC is used. There is limited support for tuning Fortran kernels. 
-This backend was created not just to be able to tune C +The C +Functions implementation can actually call any compiler, typically NVCC +or GCC is used. There is limited support for tuning Fortran kernels. +This backend was created not just to be able to tune C functions, but in particular to tune C functions that in turn launch GPU kernels. -The rest of this section contains the API documentation of the modules -discussed above. For the documentation of the user API see the +The rest of this section contains the API documentation of the modules +discussed above. For the documentation of the user API see the :doc:`user-api`. @@ -99,37 +99,37 @@ kernel_tuner.core.DeviceInterface :members: kernel_tuner.backends.pycuda.PyCudaFunctions -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: kernel_tuner.backends.pycuda.PyCudaFunctions :special-members: __init__ :members: kernel_tuner.backends.cupy.CupyFunctions -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: kernel_tuner.backends.cupy.CupyFunctions :special-members: __init__ :members: kernel_tuner.backends.nvcuda.CudaFunctions -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: kernel_tuner.backends.nvcuda.CudaFunctions :special-members: __init__ :members: kernel_tuner.backends.opencl.OpenCLFunctions -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: kernel_tuner.backends.opencl.OpenCLFunctions :special-members: __init__ :members: kernel_tuner.backends.c.CFunctions -~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: kernel_tuner.backends.c.CFunctions :special-members: __init__ :members: kernel_tuner.backends.hip.HipFunctions -~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. 
autoclass:: kernel_tuner.backends.hip.HipFunctions :special-members: __init__ :members: diff --git a/doc/source/docutils.conf b/doc/source/docutils.conf new file mode 100644 index 000000000..1bf4d8323 --- /dev/null +++ b/doc/source/docutils.conf @@ -0,0 +1,2 @@ +[restructuredtext parser] +syntax_highlight = short diff --git a/doc/source/matrix_multiplication.ipynb b/doc/source/matrix_multiplication.ipynb index f1e4897eb..93de527a6 100644 --- a/doc/source/matrix_multiplication.ipynb +++ b/doc/source/matrix_multiplication.ipynb @@ -161,7 +161,7 @@ "As we can see the execution times printed by `tune_kernel` already vary quite dramatically between the different values for `block_size_x` and `block_size_y`. However, even with the best thread block dimensions our kernel is still not very efficient.\n", "\n", "Therefore, we'll have a look at the Nvidia Visual Profiler to find that the utilization of our kernel is actually pretty low:\n", - "![](https://raw.githubusercontent.com/kerneltuner/kernel_tuner/master/doc/source/matmul/matmul_naive.png)\n", + "![matmul_naive](https://raw.githubusercontent.com/kerneltuner/kernel_tuner/master/doc/source/matmul/matmul_naive.png)\n", "There is however, a lot of opportunity for data reuse, which is realized by making the threads in a thread block collaborate." ] }, @@ -270,7 +270,7 @@ "source": [ "This kernel drastically reduces memory bandwidth consumption. 
Compared to our naive kernel, it is about three times faster now, which comes from the highly increased memory utilization:\n", "\n", - "![](https://raw.githubusercontent.com/kerneltuner/kernel_tuner/master/doc/source/matmul/matmul_shared.png)\n", + "![matmul_shared](https://raw.githubusercontent.com/kerneltuner/kernel_tuner/master/doc/source/matmul/matmul_shared.png)\n", "\n", "The compute utilization has actually decreased slightly, which is due to the synchronization overhead, because ``__syncthread()`` is called frequently.\n", "\n", @@ -427,7 +427,7 @@ "source": [ "As we can see the number of kernel configurations evaluated by the tuner has increased again. Also the performance has increased quite dramatically with roughly another factor 3. If we look at the Nvidia Visual Profiler output of our kernel we see the following:\n", "\n", - "![](https://raw.githubusercontent.com/kerneltuner/kernel_tuner/master/doc/source/matmul/matmul.png)\n", + "![matmul](https://raw.githubusercontent.com/kerneltuner/kernel_tuner/master/doc/source/matmul/matmul.png)\n", "\n", "As expected, the compute utilization of our kernel has improved. There may even be some more room for improvement, but our tutorial on how to use Kernel Tuner ends here. In this tutorial, we have seen how you can use Kernel Tuner to tune kernels with a small number of tunable parameters, how to impose restrictions on the parameter space, and how to use grid divisor lists to specify how grid dimensions are computed." 
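The grid divisor lists mentioned in the tutorial's closing paragraph determine each grid dimension by dividing the problem size by the product of the named tunable parameters, rounding up. A simplified sketch of that computation (the parameter names are illustrative; Kernel Tuner's internal implementation may differ in detail):

```python
def compute_grid(problem_size, params, grid_div):
    """Compute grid dimensions from a problem size and per-dimension divisor lists."""
    grid = []
    for size, divisors in zip(problem_size, grid_div):
        div = 1
        for name in divisors:
            div *= params[name]
        grid.append((size + div - 1) // div)  # ceiling division
    return tuple(grid)

# e.g. each thread block covers block_size_x * tile_size_x elements in x
params = {"block_size_x": 32, "tile_size_x": 4}
grid = compute_grid((4096, 4096), params,
                    [["block_size_x", "tile_size_x"], ["block_size_x"]])
print(grid)  # → (32, 128)
```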
] diff --git a/kernel_tuner/__init__.py b/kernel_tuner/__init__.py index b68ef4c09..b64d69813 100644 --- a/kernel_tuner/__init__.py +++ b/kernel_tuner/__init__.py @@ -1,4 +1,6 @@ from kernel_tuner.integration import store_results, create_device_targets from kernel_tuner.interface import tune_kernel, run_kernel -__version__ = "0.4.5" +from importlib.metadata import version + +__version__ = version(__package__) diff --git a/kernel_tuner/backends/cupy.py b/kernel_tuner/backends/cupy.py index 451bd963d..a1e13ff03 100644 --- a/kernel_tuner/backends/cupy.py +++ b/kernel_tuner/backends/cupy.py @@ -1,15 +1,11 @@ -"""This module contains all Cupy specific kernel_tuner functions""" +"""This module contains all Cupy specific kernel_tuner functions.""" from __future__ import print_function - -import logging -import time import numpy as np from kernel_tuner.backends.backend import GPUBackend from kernel_tuner.observers.cupy import CupyRuntimeObserver - # embedded in try block to be able to generate documentation # and run tests without cupy installed try: @@ -19,10 +15,10 @@ class CupyFunctions(GPUBackend): - """Class that groups the Cupy functions on maintains state about the device""" + """Class that groups the Cupy functions and maintains state about the device.""" def __init__(self, device=0, iterations=7, compiler_options=None, observers=None): - """instantiate CupyFunctions object used for interacting with the CUDA device + """Instantiate CupyFunctions object used for interacting with the CUDA device. Instantiating this object will inspect and store certain device properties at runtime, which are used during compilation and/or execution of kernels by the @@ -39,8 +35,7 @@ def __init__(self, device=0, iterations=7, compiler_options=None, observers=None self.texrefs = [] if not cp: raise ImportError( - "Error: cupy not installed, please install e.g. " - + "using 'pip install cupy', please check https://github.com/cupy/cupy."
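The change to ``kernel_tuner/__init__.py`` above replaces the hard-coded version string with a lookup of the installed distribution's metadata, so the version is maintained in one place (``pyproject.toml``). A sketch of that pattern, with a fallback for source checkouts that were never pip-installed (the fallback is our addition for illustration; the patched file calls ``version(__package__)`` directly):

```python
from importlib.metadata import PackageNotFoundError, version

def get_version(package_name, fallback="0.0.0+unknown"):
    """Return the version recorded in the installed distribution's metadata,
    or a fallback when the package is not installed (e.g. a bare checkout)."""
    try:
        return version(package_name)
    except PackageNotFoundError:
        return fallback

print(get_version("surely-not-an-installed-package-xyz"))  # → 0.0.0+unknown
```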
+ "cupy not installed, install using 'pip install cupy', or check https://kerneltuner.github.io/kernel_tuner/stable/install.html#cuda-and-pycuda." ) # select device @@ -88,7 +83,7 @@ def __init__(self, device=0, iterations=7, compiler_options=None, observers=None self.name = env["device_name"] def ready_argument_list(self, arguments): - """ready argument list to be passed to the kernel, allocates gpu mem + """Ready argument list to be passed to the kernel, allocates gpu mem. :param arguments: List of arguments to be passed to the kernel. The order should match the argument list on the CUDA kernel. @@ -111,7 +106,7 @@ def ready_argument_list(self, arguments): return gpu_args def compile(self, kernel_instance): - """call the CUDA compiler to compile the kernel, return the device function + """Call the CUDA compiler to compile the kernel, return the device function. :param kernel_name: The name of the kernel to be compiled, used to lookup the function after compilation. @@ -140,23 +135,23 @@ def compile(self, kernel_instance): return self.func def start_event(self): - """Records the event that marks the start of a measurement""" + """Records the event that marks the start of a measurement.""" self.start.record(stream=self.stream) def stop_event(self): - """Records the event that marks the end of a measurement""" + """Records the event that marks the end of a measurement.""" self.end.record(stream=self.stream) def kernel_finished(self): - """Returns True if the kernel has finished, False otherwise""" + """Returns True if the kernel has finished, False otherwise.""" return self.end.done def synchronize(self): - """Halts execution until device has finished its tasks""" + """Halts execution until device has finished its tasks.""" self.dev.synchronize() def copy_constant_memory_args(self, cmem_args): - """adds constant memory arguments to the most recently compiled module + """Adds constant memory arguments to the most recently compiled module. 
:param cmem_args: A dictionary containing the data to be passed to the device constant memory. The format to be used is as follows: A @@ -171,11 +166,11 @@ def copy_constant_memory_args(self, cmem_args): constant_mem[:] = cp.asarray(v) def copy_shared_memory_args(self, smem_args): - """add shared memory arguments to the kernel""" + """Add shared memory arguments to the kernel.""" self.smem_size = smem_args["size"] def copy_texture_memory_args(self, texmem_args): - """adds texture memory arguments to the most recently compiled module + """Adds texture memory arguments to the most recently compiled module. :param texmem_args: A dictionary containing the data to be passed to the device texture memory. See tune_kernel(). @@ -184,7 +179,7 @@ def copy_texture_memory_args(self, texmem_args): raise NotImplementedError("CuPy backend does not support texture memory") def run_kernel(self, func, gpu_args, threads, grid, stream=None): - """runs the CUDA kernel passed as 'func' + """Runs the CUDA kernel passed as 'func'. :param func: A cupy kernel compiled for this specific kernel configuration :type func: cupy.RawKernel @@ -205,7 +200,7 @@ def run_kernel(self, func, gpu_args, threads, grid, stream=None): func(grid, threads, gpu_args, stream=stream, shared_mem=self.smem_size) def memset(self, allocation, value, size): - """set the memory in allocation to the value in value + """Set the memory in allocation to the value in value. :param allocation: A GPU memory allocation unit :type allocation: cupy.ndarray @@ -220,7 +215,7 @@ def memset(self, allocation, value, size): allocation[:] = value def memcpy_dtoh(self, dest, src): - """perform a device to host memory copy + """Perform a device to host memory copy. 
:param dest: A numpy array in host memory to store the data :type dest: numpy.ndarray @@ -237,7 +232,7 @@ raise ValueError("dest type not supported") def memcpy_htod(self, dest, src): - """perform a host to device memory copy + """Perform a host to device memory copy. :param dest: A GPU memory allocation unit :type dest: cupy.ndarray diff --git a/kernel_tuner/backends/hip.py b/kernel_tuner/backends/hip.py index 4cd0f6b69..470841621 100644 --- a/kernel_tuner/backends/hip.py +++ b/kernel_tuner/backends/hip.py @@ -1,20 +1,17 @@ -"""This module contains all HIP specific kernel_tuner functions""" +"""This module contains all HIP specific kernel_tuner functions.""" -import numpy as np import ctypes import ctypes.util -import sys import logging +import numpy as np + from kernel_tuner.backends.backend import GPUBackend from kernel_tuner.observers.hip import HipRuntimeObserver -# embedded in try block to be able to generate documentation -# and run tests without pyhip installed try: from pyhip import hip, hiprtc except ImportError: - print("Not able to import pyhip, check if PYTHONPATH includes PyHIP") hip = None hiprtc = None @@ -35,10 +32,10 @@ hipSuccess = 0 class HipFunctions(GPUBackend): - """Class that groups the HIP functions on maintains state about the device""" + """Class that groups the HIP functions and maintains state about the device.""" def __init__(self, device=0, iterations=7, compiler_options=None, observers=None): - """instantiate HipFunctions object used for interacting with the HIP device + """Instantiate HipFunctions object used for interacting with the HIP device. Instantiating this object will inspect and store certain device properties at runtime, which are used during compilation and/or execution of kernels by the @@ -51,8 +48,13 @@ def __init__(self, device=0, iterations=7, compiler_options=None, observers=None :param iterations: Number of iterations used while benchmarking a kernel, 7 by default.
:type iterations: int """ + if not hip or not hiprtc: + raise ImportError("Unable to import PyHIP, make sure PYTHONPATH includes PyHIP, or check https://kerneltuner.github.io/kernel_tuner/stable/install.html#hip-and-pyhip.") + + # embedded in try block to be able to generate documentation + # and run tests without pyhip installed logging.debug("HipFunction instantiated") - + self.hipProps = hip.hipGetDeviceProperties(device) self.name = self.hipProps._name.decode('utf-8') @@ -85,13 +87,13 @@ def __init__(self, device=0, iterations=7, compiler_options=None, observers=None def ready_argument_list(self, arguments): - """ready argument list to be passed to the HIP function + """Ready argument list to be passed to the HIP function. :param arguments: List of arguments to be passed to the HIP function. The order should match the argument list on the HIP function. Allowed values are np.ndarray, and/or np.int32, np.float32, and so on. :type arguments: list(numpy objects) - + :returns: Ctypes structure of arguments to be passed to the HIP function. :rtype: ctypes structure """ @@ -109,22 +111,22 @@ def ready_argument_list(self, arguments): hip.hipMemcpy_htod(device_ptr, data_ctypes, arg.nbytes) ctype_args.append(device_ptr) else: - raise TypeError("unknown dtype for ndarray") - # Convert valid non-array arguments to ctypes + raise TypeError("unknown dtype for ndarray") + # Convert valid non-array arguments to ctypes elif isinstance(arg, np.generic): data_ctypes = dtype_map[dtype_str](arg) - ctype_args.append(data_ctypes) + ctype_args.append(data_ctypes) return ctype_args - - + + def compile(self, kernel_instance): - """call the HIP compiler to compile the kernel, return the function - + """Call the HIP compiler to compile the kernel, return the function. + :param kernel_instance: An object representing the specific instance of the tunable kernel in the parameter space. 
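The HIP backend change above moves the import failure from a ``print`` at module load time to an ``ImportError`` raised when the backend is actually instantiated, so documentation builds and unrelated tests can still import the module. The general optional-dependency pattern, sketched generically (the class and message below are illustrative, not the patched code):

```python
import importlib

def optional_import(module_name):
    """Try to import an optional dependency; return None instead of failing
    so the importing module can still be loaded without it."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None

pyhip = optional_import("pyhip")

class HipBackend:
    def __init__(self):
        # Defer the hard failure until the backend is actually used.
        if pyhip is None:
            raise ImportError("PyHIP not installed; see the Kernel Tuner install docs")
```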
:type kernel_instance: kernel_tuner.core.KernelInstance - + :returns: An ctypes function that can be called directly. :rtype: ctypes._FuncPtr """ @@ -136,7 +138,7 @@ def compile(self, kernel_instance): if 'extern "C"' not in kernel_string: kernel_string = 'extern "C" {\n' + kernel_string + "\n}" kernel_ptr = hiprtc.hiprtcCreateProgram(kernel_string, kernel_name, [], []) - + try: #Compile based on device (Not yet tested for non-AMD devices) plat = hip.hipGetPlatformName() @@ -148,7 +150,7 @@ def compile(self, kernel_instance): options_list = [] options_list.extend(self.compiler_options) hiprtc.hiprtcCompileProgram(kernel_ptr, options_list) - + #Get module and kernel from compiled kernel string code = hiprtc.hiprtcGetCode(kernel_ptr) module = hip.hipModuleLoadData(code) @@ -159,36 +161,36 @@ def compile(self, kernel_instance): log = hiprtc.hiprtcGetProgramLog(kernel_ptr) print(log) raise e - + return kernel - + def start_event(self): - """Records the event that marks the start of a measurement""" + """Records the event that marks the start of a measurement.""" logging.debug("HipFunction start_event called") hip.hipEventRecord(self.start, self.stream) def stop_event(self): - """Records the event that marks the end of a measurement""" + """Records the event that marks the end of a measurement.""" logging.debug("HipFunction stop_event called") hip.hipEventRecord(self.end, self.stream) def kernel_finished(self): - """Returns True if the kernel has finished, False otherwise""" + """Returns True if the kernel has finished, False otherwise.""" logging.debug("HipFunction kernel_finished called") - + # Query the status of the event return hip.hipEventQuery(self.end) def synchronize(self): - """Halts execution until device has finished its tasks""" + """Halts execution until device has finished its tasks.""" logging.debug("HipFunction synchronize called") hip.hipDeviceSynchronize() def run_kernel(self, func, gpu_args, threads, grid, stream=None): - """runs the HIP kernel 
passed as 'func' + """Runs the HIP kernel passed as 'func'. :param func: A HIP kernel compiled for this specific kernel configuration :type func: ctypes pionter @@ -222,15 +224,15 @@ def __getitem__(self, key): ctype_args = ArgListStructure(*gpu_args) - hip.hipModuleLaunchKernel(func, - grid[0], grid[1], grid[2], + hip.hipModuleLaunchKernel(func, + grid[0], grid[1], grid[2], threads[0], threads[1], threads[2], self.smem_size, stream, ctype_args) def memset(self, allocation, value, size): - """set the memory in allocation to the value in value + """Set the memory in allocation to the value in value. :param allocation: A GPU memory allocation unit :type allocation: ctypes ptr @@ -243,11 +245,11 @@ def memset(self, allocation, value, size): """ logging.debug("HipFunction memset called") - + hip.hipMemset(allocation, value, size) def memcpy_dtoh(self, dest, src): - """perform a device to host memory copy + """Perform a device to host memory copy. :param dest: A numpy array in host memory to store the data :type dest: numpy.ndarray @@ -263,7 +265,7 @@ def memcpy_dtoh(self, dest, src): hip.hipMemcpy_dtoh(dest_c, src, dest.nbytes) def memcpy_htod(self, dest, src): - """perform a host to device memory copy + """Perform a host to device memory copy. :param dest: A GPU memory allocation unit :type dest: ctypes ptr @@ -279,7 +281,7 @@ def memcpy_htod(self, dest, src): hip.hipMemcpy_htod(dest, src_c, src.nbytes) def copy_constant_memory_args(self, cmem_args): - """adds constant memory arguments to the most recently compiled module + """Adds constant memory arguments to the most recently compiled module. :param cmem_args: A dictionary containing the data to be passed to the device constant memory. 
The format to be used is as follows: A @@ -301,12 +303,13 @@ hip.hipMemcpy_htod(symbol_ptr, v_c, v.nbytes) def copy_shared_memory_args(self, smem_args): - """add shared memory arguments to the kernel""" + """Add shared memory arguments to the kernel.""" logging.debug("HipFunction copy_shared_memory_args called") self.smem_size = smem_args["size"] def copy_texture_memory_args(self, texmem_args): + """Copy texture memory arguments. Not yet implemented.""" logging.debug("HipFunction copy_texture_memory_args called") raise NotImplementedError("HIP backend does not support texture memory") diff --git a/kernel_tuner/backends/nvcuda.py b/kernel_tuner/backends/nvcuda.py index 32aa8efe5..c6fb73d5e 100644 --- a/kernel_tuner/backends/nvcuda.py +++ b/kernel_tuner/backends/nvcuda.py @@ -1,4 +1,4 @@ -"""This module contains all NVIDIA cuda-python specific kernel_tuner functions""" +"""This module contains all NVIDIA cuda-python specific kernel_tuner functions.""" import numpy as np from kernel_tuner.backends.backend import GPUBackend @@ -14,10 +14,10 @@ class CudaFunctions(GPUBackend): - """Class that groups the Cuda functions on maintains state about the device""" + """Class that groups the Cuda functions and maintains state about the device.""" def __init__(self, device=0, iterations=7, compiler_options=None, observers=None): - """instantiate CudaFunctions object used for interacting with the CUDA device + """Instantiate CudaFunctions object used for interacting with the CUDA device. Instantiating this object will inspect and store certain device properties at runtime, which are used during compilation and/or execution of kernels by the @@ -38,8 +38,7 @@ def __init__(self, device=0, iterations=7, compiler_options=None, observers=None self.texrefs = [] if not cuda: raise ImportError( - "Error: cuda-python not installed, please install e.g. 
" - + "using 'pip install cuda-python', please check https://github.com/NVIDIA/cuda-python." + "cuda-python not installed, install using 'pip install cuda-python', or check https://kerneltuner.github.io/kernel_tuner/stable/install.html#cuda-and-pycuda." ) # initialize and select device @@ -113,7 +112,7 @@ def __del__(self): cuda_error_check(err) def ready_argument_list(self, arguments): - """ready argument list to be passed to the kernel, allocates gpu mem + """Ready argument list to be passed to the kernel, allocates gpu mem. :param arguments: List of arguments to be passed to the kernel. The order should match the argument list on the CUDA kernel. @@ -138,7 +137,7 @@ def ready_argument_list(self, arguments): return gpu_args def compile(self, kernel_instance): - """call the CUDA compiler to compile the kernel, return the device function + """Call the CUDA compiler to compile the kernel, return the device function. :param kernel_name: The name of the kernel to be compiled, used to lookup the function after compilation. 
@@ -203,17 +202,17 @@ def compile(self, kernel_instance): return self.func def start_event(self): - """Records the event that marks the start of a measurement""" + """Records the event that marks the start of a measurement.""" err = cudart.cudaEventRecord(self.start, self.stream) cuda_error_check(err) def stop_event(self): - """Records the event that marks the end of a measurement""" + """Records the event that marks the end of a measurement.""" err = cudart.cudaEventRecord(self.end, self.stream) cuda_error_check(err) def kernel_finished(self): - """Returns True if the kernel has finished, False otherwise""" + """Returns True if the kernel has finished, False otherwise.""" err = cudart.cudaEventQuery(self.end) if err[0] == cudart.cudaError_t.cudaSuccess: return True @@ -222,12 +221,12 @@ def kernel_finished(self): @staticmethod def synchronize(): - """Halts execution until device has finished its tasks""" + """Halts execution until device has finished its tasks.""" err = cudart.cudaDeviceSynchronize() cuda_error_check(err) def copy_constant_memory_args(self, cmem_args): - """adds constant memory arguments to the most recently compiled module + """Adds constant memory arguments to the most recently compiled module. :param cmem_args: A dictionary containing the data to be passed to the device constant memory. The format to be used is as follows: A @@ -243,11 +242,11 @@ def copy_constant_memory_args(self, cmem_args): cuda_error_check(err) def copy_shared_memory_args(self, smem_args): - """add shared memory arguments to the kernel""" + """Add shared memory arguments to the kernel.""" self.smem_size = smem_args["size"] def copy_texture_memory_args(self, texmem_args): - """adds texture memory arguments to the most recently compiled module + """Adds texture memory arguments to the most recently compiled module. :param texmem_args: A dictionary containing the data to be passed to the device texture memory. See tune_kernel(). 
@@ -256,7 +255,7 @@ def copy_texture_memory_args(self, texmem_args): raise NotImplementedError("NVIDIA CUDA backend does not support texture memory") def run_kernel(self, func, gpu_args, threads, grid, stream=None): - """runs the CUDA kernel passed as 'func' + """Runs the CUDA kernel passed as 'func'. :param func: A CUDA kernel compiled for this specific kernel configuration :type func: cuda.CUfunction @@ -298,7 +297,7 @@ def run_kernel(self, func, gpu_args, threads, grid, stream=None): @staticmethod def memset(allocation, value, size): - """set the memory in allocation to the value in value + """Set the memory in allocation to the value in value. :param allocation: A GPU memory allocation unit :type allocation: cupy.ndarray @@ -315,7 +314,7 @@ def memset(allocation, value, size): @staticmethod def memcpy_dtoh(dest, src): - """perform a device to host memory copy + """Perform a device to host memory copy. :param dest: A numpy array in host memory to store the data :type dest: numpy.ndarray @@ -328,7 +327,7 @@ def memcpy_dtoh(dest, src): @staticmethod def memcpy_htod(dest, src): - """perform a host to device memory copy + """Perform a host to device memory copy. 
:param dest: A GPU memory allocation unit :type dest: cuda.CUdeviceptr diff --git a/kernel_tuner/backends/opencl.py b/kernel_tuner/backends/opencl.py index eaf37a469..af3be1c00 100644 --- a/kernel_tuner/backends/opencl.py +++ b/kernel_tuner/backends/opencl.py @@ -1,6 +1,6 @@ -"""This module contains all OpenCL specific kernel_tuner functions""" +"""This module contains all OpenCL specific kernel_tuner functions.""" from __future__ import print_function -import time + import numpy as np from kernel_tuner.backends.backend import GPUBackend @@ -14,12 +14,12 @@ class OpenCLFunctions(GPUBackend): - """Class that groups the OpenCL functions on maintains some state about the device""" + """Class that groups the OpenCL functions and maintains some state about the device.""" def __init__( self, device=0, platform=0, iterations=7, compiler_options=None, observers=None ): - """Creates OpenCL device context and reads device properties + """Creates OpenCL device context and reads device properties. :param device: The ID of the OpenCL device to use for benchmarking :type device: int @@ -29,7 +29,7 @@ def __init__( """ if not cl: raise ImportError( - "Error: pyopencl not installed, please install e.g. using 'pip install pyopencl'." + "pyopencl not installed, install using 'pip install pyopencl', or check https://kerneltuner.github.io/kernel_tuner/stable/install.html#opencl-and-pyopencl." ) self.iterations = iterations @@ -69,7 +69,7 @@ def __init__( self.name = dev.name def ready_argument_list(self, arguments): - """ready argument list to be passed to the kernel, allocates gpu mem + """Ready argument list to be passed to the kernel, allocates gpu mem. :param arguments: List of arguments to be passed to the kernel. The order should match the argument list on the OpenCL kernel. 
@@ -96,7 +96,7 @@ def ready_argument_list(self, arguments): return gpu_args def compile(self, kernel_instance): - """call the OpenCL compiler to compile the kernel, return the device function + """Call the OpenCL compiler to compile the kernel, return the device function. :param kernel_name: The name of the kernel to be compiled, used to lookup the function after compilation. @@ -115,27 +115,29 @@ def compile(self, kernel_instance): return func def start_event(self): - """Records the event that marks the start of a measurement + """Records the event that marks the start of a measurement. - In OpenCL the event is created when the kernel is launched""" + In OpenCL the event is created when the kernel is launched + """ pass def stop_event(self): - """Records the event that marks the end of a measurement + """Records the event that marks the end of a measurement. - In OpenCL the event is created when the kernel is launched""" + In OpenCL the event is created when the kernel is launched + """ pass def kernel_finished(self): - """Returns True if the kernel has finished, False otherwise""" + """Returns True if the kernel has finished, False otherwise.""" return self.event.get_info(cl.event_info.COMMAND_EXECUTION_STATUS) == 0 def synchronize(self): - """Halts execution until device has finished its tasks""" + """Halts execution until device has finished its tasks.""" self.queue.finish() def run_kernel(self, func, gpu_args, threads, grid): - """runs the OpenCL kernel passed as 'func' + """Runs the OpenCL kernel passed as 'func'. :param func: An OpenCL Kernel :type func: pyopencl.Kernel @@ -158,7 +160,7 @@ def run_kernel(self, func, gpu_args, threads, grid): self.event = func(self.queue, global_size, local_size, *gpu_args) def memset(self, buffer, value, size): - """set the memory in allocation to the value in value + """Set the memory in allocation to the value in value. 
:param allocation: An OpenCL Buffer to fill :type allocation: pyopencl.Buffer @@ -178,7 +180,7 @@ def memset(self, buffer, value, size): cl.enqueue_copy(self.queue, buffer, src) def memcpy_dtoh(self, dest, src): - """perform a device to host memory copy + """Perform a device to host memory copy. :param dest: A numpy array in host memory to store the data :type dest: numpy.ndarray @@ -190,7 +192,7 @@ def memcpy_dtoh(self, dest, src): cl.enqueue_copy(self.queue, dest, src) def memcpy_htod(self, dest, src): - """perform a host to device memory copy + """Perform a host to device memory copy. :param dest: An OpenCL Buffer to copy data from :type dest: pyopencl.Buffer diff --git a/kernel_tuner/backends/pycuda.py b/kernel_tuner/backends/pycuda.py index 694a63885..3c168f824 100644 --- a/kernel_tuner/backends/pycuda.py +++ b/kernel_tuner/backends/pycuda.py @@ -1,14 +1,14 @@ -"""This module contains all CUDA specific kernel_tuner functions""" +"""This module contains all CUDA specific kernel_tuner functions.""" from __future__ import print_function import logging -import time + import numpy as np from kernel_tuner.backends.backend import GPUBackend +from kernel_tuner.observers.nvml import nvml # noqa F401 from kernel_tuner.observers.pycuda import PyCudaRuntimeObserver -from kernel_tuner.observers.nvml import nvml -from kernel_tuner.util import TorchPlaceHolder, SkippableFailure +from kernel_tuner.util import SkippableFailure, TorchPlaceHolder # embedded in try block to be able to generate documentation # and run tests without pycuda installed @@ -41,7 +41,7 @@ def __init__(self): class Holder(drv.PointerHolderBase): - """class to interoperate torch device memory allocations with PyCUDA""" + """class to interoperate torch device memory allocations with PyCUDA.""" def __init__(self, tensor): super(Holder, self).__init__() @@ -53,10 +53,10 @@ def get_pointer(self): class PyCudaFunctions(GPUBackend): - """Class that groups the CUDA functions on maintains state about the 
device""" + """Class that groups the CUDA functions and maintains state about the device.""" def __init__(self, device=0, iterations=7, compiler_options=None, observers=None): - """instantiate PyCudaFunctions object used for interacting with the CUDA device + """Instantiate PyCudaFunctions object used for interacting with the CUDA device. Instantiating this object will inspect and store certain device properties at runtime, which are used during compilation and/or execution of kernels by the @@ -74,7 +74,7 @@ def __init__(self, device=0, iterations=7, compiler_options=None, observers=None # if not PyCuda available, check if mocking before raising exception if not pycuda_available and isinstance(drv, PyCudaPlaceHolder): raise ImportError( - "Error: pycuda not installed, please install e.g. using 'pip install pycuda'." + "pycuda not installed, install using 'pip install pycuda', or check https://kerneltuner.github.io/kernel_tuner/stable/install.html#cuda-and-pycuda." ) drv.init() @@ -154,7 +154,7 @@ def __del__(self): gpu_mem.free() def ready_argument_list(self, arguments): - """ready argument list to be passed to the kernel, allocates gpu mem + """Ready argument list to be passed to the kernel, allocates gpu mem. :param arguments: List of arguments to be passed to the kernel. The order should match the argument list on the CUDA kernel. @@ -186,7 +186,7 @@ def ready_argument_list(self, arguments): return gpu_args def compile(self, kernel_instance): - """call the CUDA compiler to compile the kernel, return the device function + """Call the CUDA compiler to compile the kernel, return the device function. :param kernel_name: The name of the kernel to be compiled, used to lookup the function after compilation. 
@@ -226,23 +226,23 @@ def compile(self, kernel_instance): raise e def start_event(self): - """Records the event that marks the start of a measurement""" + """Records the event that marks the start of a measurement.""" self.start.record(stream=self.stream) def stop_event(self): - """Records the event that marks the end of a measurement""" + """Records the event that marks the end of a measurement.""" self.end.record(stream=self.stream) def kernel_finished(self): - """Returns True if the kernel has finished, False otherwise""" + """Returns True if the kernel has finished, False otherwise.""" return self.end.query() def synchronize(self): - """Halts execution until device has finished its tasks""" + """Halts execution until device has finished its tasks.""" self.context.synchronize() def copy_constant_memory_args(self, cmem_args): - """adds constant memory arguments to the most recently compiled module + """Adds constant memory arguments to the most recently compiled module. :param cmem_args: A dictionary containing the data to be passed to the device constant memory. The format to be used is as follows: A @@ -263,17 +263,16 @@ def copy_constant_memory_args(self, cmem_args): drv.memcpy_htod(symbol, v) def copy_shared_memory_args(self, smem_args): - """add shared memory arguments to the kernel""" + """Add shared memory arguments to the kernel.""" self.smem_size = smem_args["size"] def copy_texture_memory_args(self, texmem_args): - """adds texture memory arguments to the most recently compiled module + """Adds texture memory arguments to the most recently compiled module. :param texmem_args: A dictionary containing the data to be passed to the device texture memory. See tune_kernel(). 
:type texmem_args: dict """ - filter_mode_map = { "point": drv.filter_mode.POINT, "linear": drv.filter_mode.LINEAR, @@ -326,7 +325,7 @@ def copy_texture_memory_args(self, texmem_args): tex.set_flags(tex.get_flags() | drv.TRSF_NORMALIZED_COORDINATES) def run_kernel(self, func, gpu_args, threads, grid, stream=None): - """runs the CUDA kernel passed as 'func' + """Runs the CUDA kernel passed as 'func'. :param func: A PyCuda kernel compiled for this specific kernel configuration :type func: pycuda.driver.Function @@ -356,7 +355,7 @@ def run_kernel(self, func, gpu_args, threads, grid, stream=None): ) def memset(self, allocation, value, size): - """set the memory in allocation to the value in value + """Set the memory in allocation to the value in value. :param allocation: A GPU memory allocation unit :type allocation: pycuda.driver.DeviceAllocation @@ -371,7 +370,7 @@ def memset(self, allocation, value, size): drv.memset_d8(allocation, value, size) def memcpy_dtoh(self, dest, src): - """perform a device to host memory copy + """Perform a device to host memory copy. :param dest: A numpy array in host memory to store the data :type dest: numpy.ndarray @@ -385,7 +384,7 @@ def memcpy_dtoh(self, dest, src): dest[:] = src def memcpy_htod(self, dest, src): - """perform a host to device memory copy + """Perform a host to device memory copy. :param dest: A GPU memory allocation unit :type dest: pycuda.driver.DeviceAllocation diff --git a/kernel_tuner/energy/energy.py b/kernel_tuner/energy/energy.py index 55306a09c..ab0582c52 100644 --- a/kernel_tuner/energy/energy.py +++ b/kernel_tuner/energy/energy.py @@ -1,13 +1,9 @@ -""" -This module contains a set of helper functions specifically for auto-tuning codes -for energy efficiency. 
-""" -from collections import OrderedDict - +"""This module contains a set of helper functions specifically for auto-tuning codes for energy efficiency.""" import numpy as np +from scipy import optimize + from kernel_tuner import tune_kernel, util from kernel_tuner.observers.nvml import NVMLObserver, get_nvml_gr_clocks -from scipy import optimize try: import pycuda.driver as drv @@ -42,8 +38,7 @@ """ def get_frequency_power_relation_fp32(device, n_samples=10, nvidia_smi_fallback=None, use_locked_clocks=False, cache=None, simulation_mode=None): - """ Use NVML and PyCUDA with a synthetic kernel to obtain samples of frequency-power pairs """ - + """Use NVML and PyCUDA with a synthetic kernel to obtain samples of frequency-power pairs.""" # get some numbers about the device if not cache: if drv is None: @@ -70,14 +65,14 @@ def get_frequency_power_relation_fp32(device, n_samples=10, nvidia_smi_fallback= arguments = [data] # setup tunable parameters - tune_params = OrderedDict() + tune_params = dict() tune_params["block_size_x"] = [max_block_dim_x] tune_params["nr_outer"] = [64] tune_params["nr_inner"] = [1024] tune_params.update(nvml_gr_clocks) # metrics - metrics = OrderedDict() + metrics = dict() metrics["f"] = lambda p: p["core_freq"] nvmlobserver = NVMLObserver( @@ -95,12 +90,12 @@ def get_frequency_power_relation_fp32(device, n_samples=10, nvidia_smi_fallback= def estimated_voltage(clocks, clock_threshold, voltage_scale): - """ estimate voltage based on clock_threshold and voltage_scale """ + """Estimate voltage based on clock_threshold and voltage_scale.""" return [1 + ((clock > clock_threshold) * (1e-3 * voltage_scale * (clock-clock_threshold))) for clock in clocks] def estimated_power(clocks, clock_threshold, voltage_scale, clock_scale, power_max): - """ estimate power consumption based on clock threshold, clock_scale and max power """ + """Estimate power consumption based on clock threshold, clock_scale and max power.""" n = len(clocks) powers = np.zeros(n) @@ 
-116,7 +111,7 @@ def estimated_power(clocks, clock_threshold, voltage_scale, clock_scale, power_m def fit_power_frequency_model(freqs, nvml_power): - """ Fit the power-frequency model based on frequency and power measurements """ + """Fit the power-frequency model based on frequency and power measurements.""" nvml_gr_clocks = np.array(freqs) nvml_power = np.array(nvml_power) @@ -148,7 +143,7 @@ def fit_power_frequency_model(freqs, nvml_power): def create_power_frequency_model(device=0, n_samples=10, verbose=False, nvidia_smi_fallback=None, use_locked_clocks=False, cache=None, simulation_mode=None): - """ Calculate the most energy-efficient clock frequency of device + """Calculate the most energy-efficient clock frequency of device. This function uses a performance model to fit the power-frequency curve using a synthethic benchmarking kernel. The method has been described in: @@ -202,8 +197,7 @@ def create_power_frequency_model(device=0, n_samples=10, verbose=False, nvidia_s def get_frequency_range_around_ridge(ridge_frequency, all_frequencies, freq_range, number_of_freqs, verbose=False): - """ Return number_of_freqs frequencies in a freq_range percentage around the ridge_frequency from among all_frequencies """ - + """Return number_of_freqs frequencies in a freq_range percentage around the ridge_frequency from among all_frequencies.""" min_freq = 1e-2 * (100 - int(freq_range)) * ridge_frequency max_freq = 1e-2 * (100 + int(freq_range)) * ridge_frequency frequency_selection = np.unique([all_frequencies[np.argmin(abs( diff --git a/kernel_tuner/file_utils.py b/kernel_tuner/file_utils.py index 0d5024187..e5d3dcb90 100644 --- a/kernel_tuner/file_utils.py +++ b/kernel_tuner/file_utils.py @@ -1,13 +1,13 @@ -""" This module contains utility functions for operations on files, mostly JSON cache files """ +"""This module contains utility functions for operations on files, mostly JSON cache files.""" -import os import json +import os import subprocess -import xmltodict -from 
sys import platform +from importlib.metadata import PackageNotFoundError, requires, version from pathlib import Path +from sys import platform -from importlib.metadata import requires, version, PackageNotFoundError +import xmltodict from packaging.requirements import Requirement from kernel_tuner import util @@ -16,7 +16,7 @@ def output_file_schema(target): - """Get the requested JSON schema and the version number + """Get the requested JSON schema and the version number. :param target: Name of the T4 schema to return, should be any of ['output', 'metadata'] :type target: string @@ -33,7 +33,7 @@ def output_file_schema(target): def get_configuration_validity(objective) -> str: - """Convert internal Kernel Tuner error to string""" + """Convert internal Kernel Tuner error to string.""" errorstring: str if not isinstance(objective, util.ErrorConfig): errorstring = "correct" @@ -50,21 +50,21 @@ def get_configuration_validity(objective) -> str: def filename_ensure_json_extension(filename: str) -> str: - """Check if the filename has a .json extension, if not, add it""" + """Check if the filename has a .json extension, if not, add it.""" if filename[-5:] != ".json": filename += ".json" return filename def make_filenamepath(filenamepath: Path): - """Create the given path to a filename if the path does not yet exist""" + """Create the given path to a filename if the path does not yet exist.""" filepath = filenamepath.parents[0] if not filepath.exists(): filepath.mkdir() def store_output_file(output_filename: str, results, tune_params, objective="time"): - """Store the obtained auto-tuning results in a JSON output file + """Store the obtained auto-tuning results in a JSON output file. This function produces a JSON file that adheres to the T4 auto-tuning output JSON schema. 
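The `filename_ensure_json_extension` helper whose docstring is touched above is small enough to restate in full; this variant uses `str.endswith` rather than the `filename[-5:]` slice, which states the intent directly and also handles names shorter than five characters:

```python
def filename_ensure_json_extension(filename: str) -> str:
    """Append ".json" when the filename does not already end in it."""
    if not filename.endswith(".json"):
        filename += ".json"
    return filename
```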
@@ -75,7 +75,7 @@ def store_output_file(output_filename: str, results, tune_params, objective="tim :type results: list of dicts :param tune_params: Tunable parameters as passed to tune_kernel - :type tune_params: OrderedDict + :type tune_params: dict :param objective: The objective used during auto-tuning, default is 'time'. :type objective: string @@ -140,7 +140,7 @@ def store_output_file(output_filename: str, results, tune_params, objective="tim def get_dependencies(package="kernel_tuner"): - """Get the Python dependencies of Kernel Tuner currently installed and their version numbers""" + """Get the Python dependencies of Kernel Tuner currently installed and their version numbers.""" requirements = requires(package) deps = [Requirement(req).name for req in requirements] depends = [] @@ -155,7 +155,7 @@ def get_dependencies(package="kernel_tuner"): def get_device_query(target): - """Get the information about GPUs in the current system, target is any of ['nvidia', 'amd']""" + """Get the information about GPUs in the current system, target is any of ['nvidia', 'amd'].""" if target == "nvidia": nvidia_smi_out = subprocess.run(["nvidia-smi", "--query", "-x"], capture_output=True) nvidia_smi = xmltodict.parse(nvidia_smi_out.stdout) @@ -176,7 +176,7 @@ def get_device_query(target): def store_metadata_file(metadata_filename: str): - """Store the metadata about the current hardware and software environment in a JSON output file + """Store the metadata about the current hardware and software environment in a JSON output file. This function produces a JSON file that adheres to the T4 auto-tuning metadata JSON schema. diff --git a/kernel_tuner/interface.py b/kernel_tuner/interface.py index b72753377..087a26f59 100644 --- a/kernel_tuner/interface.py +++ b/kernel_tuner/interface.py @@ -1,4 +1,4 @@ -"""Kernel Tuner interface module +"""Kernel Tuner interface module. This module contains the main functions that Kernel Tuner offers to its users. 
@@ -23,18 +23,15 @@ See the License for the specific language governing permissions and limitations under the License. """ -import sys -from collections import OrderedDict -from datetime import datetime import logging -import numpy +from datetime import datetime from time import perf_counter -from kernel_tuner.integration import get_objective_defaults +import numpy -import kernel_tuner.util as util import kernel_tuner.core as core - +import kernel_tuner.util as util +from kernel_tuner.integration import get_objective_defaults from kernel_tuner.runners.sequential import SequentialRunner from kernel_tuner.runners.simulation import SimulationRunner from kernel_tuner.searchspace import Searchspace @@ -45,21 +42,21 @@ torch = util.TorchPlaceHolder() from kernel_tuner.strategies import ( + basinhopping, + bayes_opt, brute_force, - random_sample, diff_evo, - minimize, - basinhopping, + dual_annealing, + firefly_algorithm, genetic_algorithm, + greedy_ils, + greedy_mls, + minimize, mls, + ordered_greedy_mls, pso, + random_sample, simulated_annealing, - firefly_algorithm, - bayes_opt, - greedy_mls, - greedy_ils, - ordered_greedy_mls, - dual_annealing, ) strategy_map = { @@ -81,8 +78,8 @@ } -class Options(OrderedDict): - """read-only class for passing options around""" +class Options(dict): + """Read-only class for passing options around.""" def __getattr__(self, name): if not name.startswith("_"): @@ -93,12 +90,13 @@ def __deepcopy__(self, _): return self -_kernel_options = Options([ - ("kernel_name", ("""The name of the kernel in the code.""", "string")), - ( - "kernel_source", +_kernel_options = Options( + [ + ("kernel_name", ("""The name of the kernel in the code.""", "string")), ( - """The CUDA, OpenCL, HIP, or C kernel code. + "kernel_source", + ( + """The CUDA, OpenCL, HIP, or C kernel code. It is allowed for the code to be passed as a string, a filename, a function that returns a string of code, or a list when the code needs auxilliary files.
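After this change, `Options` is a plain `dict` with attribute access instead of an `OrderedDict` subclass (plain dicts preserve insertion order on Python >= 3.7). A minimal reconstruction of the class, where the branch elided by the hunk is an assumption:

```python
import copy

class Options(dict):
    """Read-only options container whose keys are also reachable as attributes."""

    def __getattr__(self, name):
        if not name.startswith("_"):
            return self[name]
        # assumed fallback so dunder/private lookups behave normally
        return super().__getattribute__(name)

    def __deepcopy__(self, _):
        return self  # treated as immutable: deepcopy returns the same object
```

Usage: `Options([("device", 0)]).device` returns `0`, and deep-copying a tuning-options object is a no-op by design.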
@@ -115,23 +113,23 @@ def __deepcopy__(self, _): which will be used to pass a dict containing the parameters. The function should return a string with the source code for the kernel.""", - "string or list and/or callable", + "string or list and/or callable", + ), ), - ), - ( - "lang", ( - """Specifies the language used for GPU kernels. The kernel_tuner + "lang", + ( + """Specifies the language used for GPU kernels. The kernel_tuner automatically detects the language, but if it fails, you may specify the language using this argument, currently supported: "CUDA", "Cupy", "OpenCL", "HIP", or "C".""", - "string", + "string", + ), ), - ), - ( - "problem_size", ( - """The size of the domain from which the grid dimensions + "problem_size", + ( + """The size of the domain from which the grid dimensions of the kernel are computed. This can be specified using an int, string, function, or @@ -164,21 +162,21 @@ def __deepcopy__(self, _): different dimensions. See the reduction CUDA example for an example use of this feature.""", - "callable, string, int, or tuple(int or string, ..)", + "callable, string, int, or tuple(int or string, ..)", + ), ), - ), - ( - "arguments", ( - """A list of kernel arguments, use numpy arrays for + "arguments", + ( + """A list of kernel arguments, use numpy arrays for arrays, use numpy.int32 or numpy.float32 for scalars.""", - "list", + "list", + ), ), - ), - ( - "grid_div_x", ( - """A list of names of the parameters whose values divide + "grid_div_x", + ( + """A list of names of the parameters whose values divide the grid dimensions in the x-direction. The product of all grid divisor expressions is computed before dividing the problem_size in that dimension. 
Also note that the divison is treated @@ -196,56 +194,56 @@ def __deepcopy__(self, _): If not supplied, ["block_size_x"] will be used by default, if you do not want any grid x-dimension divisors pass an empty list.""", - "callable or list", + "callable or list", + ), ), - ), - ( - "grid_div_y", ( - """A list of names of the parameters whose values divide + "grid_div_y", + ( + """A list of names of the parameters whose values divide the grid dimensions in the y-direction, ["block_size_y"] by default. If you do not want to divide the problem_size, you should pass an empty list. See grid_div_x for more details.""", - "list", + "list", + ), ), - ), - ( - "grid_div_z", ( - """A list of names of the parameters whose values divide + "grid_div_z", + ( + """A list of names of the parameters whose values divide the grid dimensions in the z-direction, ["block_size_z"] by default. If you do not want to divide the problem_size, you should pass an empty list. See grid_div_x for more details.""", - "list", + "list", + ), ), - ), - ( - "smem_args", ( - """CUDA-specific feature for specifying shared memory options + "smem_args", + ( + """CUDA-specific feature for specifying shared memory options to the kernel. At the moment only 'size' is supported, but setting the shared memory configuration on Kepler GPUs for example could be added in the future. Size should denote the number of bytes for to use when dynamically allocating shared memory.""", - "dict(string: numpy object)", + "dict(string: numpy object)", + ), ), - ), - ( - "cmem_args", ( - """CUDA-specific feature for specifying constant memory + "cmem_args", + ( + """CUDA-specific feature for specifying constant memory arguments to the kernel. In OpenCL these are handled as normal kernel arguments, but in CUDA you can copy to a symbol. 
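The `grid_div_*` docstrings above describe how a grid dimension is derived: the product of all divisor values is taken first, and only then is the problem size divided (with ceiling). A hypothetical helper mirroring that description, not Kernel Tuner's internal code:

```python
from math import ceil

def grid_dim(problem_size, grid_div, params):
    """Compute one grid dimension from its divisors, as documented above."""
    divisor = 1
    for d in grid_div:
        # each entry is a tunable-parameter name, or a callable of the params
        divisor *= params[d] if isinstance(d, str) else d(params)
    return ceil(problem_size / divisor)
```

For example, with `block_size_x=32` and `tile_size_x=4`, a problem size of 1000 yields `ceil(1000 / 128) = 8` blocks in x.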
The way you specify constant memory arguments is by passing a dictionary with strings containing the constant memory symbol name together with numpy objects in the same way as normal kernel arguments.""", - "dict(string: numpy object)", + "dict(string: numpy object)", + ), ), - ), - ( - "texmem_args", ( - """CUDA-specific feature for specifying texture memory + "texmem_args", + ( + """CUDA-specific feature for specifying texture memory arguments to the kernel. You specify texture memory arguments by passing a dictionary with strings containing the texture reference name together with the texture contents. These contents can be either simply a numpy object, @@ -253,35 +251,37 @@ def __deepcopy__(self, _): configuration options 'filter_mode' ('point' or 'linear), 'address_mode' (a list of 'border', 'clamp', 'mirror', 'wrap' per axis), 'normalized_coordinates' (True/False).""", - "dict(string: numpy object or dict)", + "dict(string: numpy object or dict)", + ), ), - ), - ( - "block_size_names", ( - """A list of strings that replace the defaults for the names + "block_size_names", + ( + """A list of strings that replace the defaults for the names that denote the thread block dimensions. If not passed, the behavior defaults to ``["block_size_x", "block_size_y", "block_size_z"]``""", - "list(string)", + "list(string)", + ), ), - ), - ( - "defines", ( - """A dictionary containing the preprocessor definitions inserted into + "defines", + ( + """A dictionary containing the preprocessor definitions inserted into the source code. The keys should the definition names and each value should be either a string or a function that returns a string. If an emtpy dictionary is passed, no definitions are inserted. 
If None is passed, each tunable parameter is inserted as a preprocessor definition.""", - "dict", + "dict", + ), ), - ), -]) + ] +) -_tuning_options = Options([ - ( - "tune_params", +_tuning_options = Options( + [ ( - """A dictionary containing the parameter names as keys, + "tune_params", + ( + """A dictionary containing the parameter names as keys, and lists of possible parameter settings as values. Kernel Tuner will try to compile and benchmark all possible combinations of all possible values for all tuning parameters. @@ -301,13 +301,13 @@ def __deepcopy__(self, _): don't want the thread block dimensions to be compiled in, you may use the built-in variables blockDim.xyz in CUDA or the built-in function get_local_size() in OpenCL instead.""", - "dict( string : [...]", + "dict( string : [...]", + ), ), - ), - ( - "restrictions", ( - """An option to limit the search space with restrictions. + "restrictions", + ( + """An option to limit the search space with restrictions. The restrictions can be specified using a function or a list of strings. The function should take one argument, namely a dictionary with the tunable parameters of the kernel configuration, if the function returns @@ -321,34 +321,34 @@ def __deepcopy__(self, _): search to configurations where the block_size_x equals the product of block_size_y and tile_size_y. The default is None.""", - "callable or list(strings)", + "callable or list(strings)", + ), ), - ), - ( - "answer", ( - """A list of arguments, similar to what you pass to arguments, + "answer", + ( + """A list of arguments, similar to what you pass to arguments, that contains the expected output of the kernel after it has executed and contains None for each argument that is input-only. 
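The `restrictions` option documented above accepts either a list of strings or a callable over the configuration dict; the two forms below are equivalent (parameter names taken from the docstring's own example):

```python
# string form: each expression is evaluated per configuration
restrictions = ["block_size_x == block_size_y * tile_size_y"]

# callable form: return True when the configuration is allowed
def restrict(p):
    return p["block_size_x"] == p["block_size_y"] * p["tile_size_y"]

config = {"block_size_x": 64, "block_size_y": 16, "tile_size_y": 4}
```

As the PR notes describe, string restrictions are parsed and, where possible, mapped onto python-constraint built-in constraints for faster search-space construction.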
The expected output of the kernel will then be used to verify the correctness of each kernel in the parameter space before it will be benchmarked.""", - "list", + "list", + ), ), - ), - ( - "atol", ( - """The maximum allowed absolute difference between two elements + "atol", + ( + """The maximum allowed absolute difference between two elements in the output and the reference answer, as passed to numpy.allclose(). Ignored if you have not passed a reference answer. Default value is 1e-6, that is 0.000001.""", - "float", + "float", + ), ), - ), - ( - "verify", ( - """Python function used for output verification. By default, + "verify", + ( + """Python function used for output verification. By default, numpy.allclose is used for output verification, if this does not suit your application, you can pass a different function here. @@ -360,13 +360,13 @@ def __deepcopy__(self, _): passed that was specified using the atol option to tune_kernel. The function should return True when the output passes the test, and False when the output fails the test.""", - "func(ref, ans, atol=None)", + "func(ref, ans, atol=None)", + ), ), - ), - ( - "strategy", ( - """Specify the strategy to use for searching through the + "strategy", + ( + """Specify the strategy to use for searching through the parameter space, choose from: * "basinhopping" Basin Hopping @@ -388,13 +388,13 @@ def __deepcopy__(self, _): Strategy-specific parameters and options are explained under strategy_options. """, - "", + "", + ), ), - ), - ( - "strategy_options", ( - """A dict with options specific to the selected tuning strategy. + "strategy_options", + ( + """A dict with options specific to the selected tuning strategy. All strategies support the following two options: @@ -408,38 +408,38 @@ def __deepcopy__(self, _): Strategy specific options are explained in :ref:`optimizations`. 
""", - "dict", + "dict", + ), ), - ), - ( - "iterations", ( - """The number of times a kernel should be executed and + "iterations", + ( + """The number of times a kernel should be executed and its execution time measured when benchmarking a kernel, 7 by default.""", - "int", + "int", + ), ), - ), - ( - "objective", ( - """Optimization objective to sort results on, consisting of a string + "objective", + ( + """Optimization objective to sort results on, consisting of a string that also occurs in results as a metric or observed quantity, default 'time'. Please see :ref:`objectives`.""", - "string", + "string", + ), ), - ), - ( - "objective_higher_is_better", ( - """boolean that specifies whether the objective should + "objective_higher_is_better", + ( + """boolean that specifies whether the objective should be maximized (True) or minimized (False), default False.""", - "bool", + "bool", + ), ), - ), - ( - "verbose", ( - """Sets whether or not to report about configurations that + "verbose", + ( + """Sets whether or not to report about configurations that were skipped during the search. This could be due to several reasons: * kernel configuration fails one or more restrictions @@ -448,69 +448,72 @@ def __deepcopy__(self, _): * too many resources requested for launch verbose is False by default.""", - "bool", + "bool", + ), ), - ), - ( - "cache", ( - """Filename for the cache to persistently store benchmarked configurations. + "cache", + ( + """Filename for the cache to persistently store benchmarked configurations. Filename uses suffix ".json", which is appended if missing. If the file exists, it is read and tuning continues from this file. Please see :ref:`cache`. 
""", - "string", + "string", + ), ), - ), - ("metrics", ("specifies user-defined metrics, please see :ref:`metrics`.", "OrderedDict")), - ("simulation_mode", ("Simulate an auto-tuning search from an existing cachefile", "bool")), - ("observers", ("""A list of Observers to use during tuning, please see :ref:`observers`.""", "list")), -]) - -_device_options = Options([ - ( - "device", + ("metrics", ("specifies user-defined metrics, please see :ref:`metrics`.", "dict")), + ("simulation_mode", ("Simulate an auto-tuning search from an existing cachefile", "bool")), + ("observers", ("""A list of Observers to use during tuning, please see :ref:`observers`.""", "list")), + ] +) + +_device_options = Options( + [ ( - """CUDA/OpenCL device to use, in case you have multiple + "device", + ( + """CUDA/OpenCL device to use, in case you have multiple CUDA-capable GPUs or OpenCL devices you may use this to select one, 0 by default. Ignored if you are tuning host code by passing lang="C".""", - "int", + "int", + ), ), - ), - ( - "platform", ( - """OpenCL platform to use, in case you have multiple + "platform", + ( + """OpenCL platform to use, in case you have multiple OpenCL platforms you may use this to select one, 0 by default. Ignored if not using OpenCL. """, - "int", + "int", + ), ), - ), - ( - "quiet", ( - """Control whether or not to print to the console which + "quiet", + ( + """Control whether or not to print to the console which device is being used, False by default""", - "boolean", + "boolean", + ), ), - ), - ( - "compiler", ( - """A string containing your preferred compiler, + "compiler", + ( + """A string containing your preferred compiler, only effective with lang="C". 
""", - "string", + "string", + ), ), - ), - ( - "compiler_options", ( - """A list of strings that specify compiler + "compiler_options", + ( + """A list of strings that specify compiler options.""", - "list(string)", + "list(string)", + ), ), - ), -]) + ] +) def _get_docstring(opts): @@ -521,7 +524,8 @@ def _get_docstring(opts): return docstr -_tune_kernel_docstring = (""" Tune a CUDA kernel given a set of tunable parameters +_tune_kernel_docstring = ( + """ Tune a CUDA kernel given a set of tunable parameters %s @@ -531,7 +535,11 @@ def _get_docstring(opts): version info, and so on. :rtype: list(dict()), dict() -""" % _get_docstring(_kernel_options) + _get_docstring(_tuning_options) + _get_docstring(_device_options)) +""" + % _get_docstring(_kernel_options) + + _get_docstring(_tuning_options) + + _get_docstring(_device_options) +) def tune_kernel( @@ -582,7 +590,7 @@ def tune_kernel( objective, objective_higher_is_better = get_objective_defaults(objective, objective_higher_is_better) # check for forbidden names in tune parameters - util.check_tune_params_list(tune_params, observers) + util.check_tune_params_list(tune_params, observers, simulation_mode=simulation_mode) # check whether block_size_names are used as expected util.check_block_size_params_names_list(block_size_names, tune_params) @@ -590,10 +598,6 @@ def tune_kernel( # ensure there is always at least three names util.append_default_block_size_names(block_size_names) - # if there are string in the restrictions, parse them to functions (increases restrictions check performance significantly) - if isinstance(restrictions, list) and len(restrictions) > 0 and any(isinstance(restriction, str) for restriction in restrictions): - restrictions = util.compile_restrictions(restrictions, tune_params) - if iterations < 1: raise ValueError("Iterations should be at least one!") @@ -626,13 +630,15 @@ def tune_kernel( # select strategy based on user options if "fraction" in tuning_options.strategy_options and not 
tuning_options.strategy == "random_sample": - raise ValueError('It is not possible to use fraction in combination with strategies other than "random_sample". ' - 'Please set strategy="random_sample", when using "fraction" in strategy_options') + raise ValueError( + 'It is not possible to use fraction in combination with strategies other than "random_sample". ' + 'Please set strategy="random_sample", when using "fraction" in strategy_options' + ) # check if method is supported by the selected strategy if "method" in tuning_options.strategy_options: method = tuning_options.strategy_options.method - if not method in strategy.supported_methods: + if method not in strategy.supported_methods: raise ValueError("Method %s is not supported for strategy %s" % (method, tuning_options.strategy)) # if no strategy_options dict has been passed, create empty dictionary @@ -672,7 +678,7 @@ def tune_kernel( # finished iterating over search space if not device_options.quiet: - if results: # checks if results is not empty + if results: # checks if results is not empty best_config = util.get_best_config(results, objective, objective_higher_is_better) units = getattr(runner, "units", None) print("best performing configuration:") @@ -717,7 +723,11 @@ def tune_kernel( :returns: A list of numpy arrays, similar to the arguments passed to this function, containing the output after kernel execution. 
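The reformatted guard above enforces that the `fraction` option is only meaningful when random sampling is the strategy. Its logic, isolated as a sketch (not the actual `tune_kernel` code path):

```python
def check_strategy_options(strategy, strategy_options):
    """Raise when 'fraction' is combined with a non-sampling strategy."""
    if "fraction" in strategy_options and strategy != "random_sample":
        raise ValueError(
            'It is not possible to use fraction in combination with strategies '
            'other than "random_sample". Please set strategy="random_sample", '
            'when using "fraction" in strategy_options'
        )
```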
:rtype: list -""" % _get_docstring(_kernel_options) + _get_docstring(_device_options) +""" % _get_docstring( + _kernel_options +) + _get_docstring( + _device_options +) def run_kernel( @@ -742,7 +752,6 @@ def run_kernel( quiet=False, log=None, ): - if log: logging.basicConfig(filename=kernel_name + datetime.now().strftime("%Y%m%d-%H:%M:%S") + ".log", level=log) diff --git a/kernel_tuner/observers/hip.py b/kernel_tuner/observers/hip.py index 72a3cb4fe..f789462e3 100644 --- a/kernel_tuner/observers/hip.py +++ b/kernel_tuner/observers/hip.py @@ -5,15 +5,17 @@ try: from pyhip import hip, hiprtc except ImportError: - print("Not able to import pyhip, check if PYTHONPATH includes PyHIP") hip = None hiprtc = None class HipRuntimeObserver(BenchmarkObserver): - """Observer that measures time using CUDA events during benchmarking""" + """Observer that measures time using HIP events during benchmarking.""" def __init__(self, dev): + if not hip or not hiprtc: + raise ImportError("Unable to import PyHIP, make sure PYTHONPATH includes PyHIP, or check https://kerneltuner.github.io/kernel_tuner/stable/install.html#hip-and-pyhip.") + self.dev = dev self.stream = dev.stream self.start = dev.start diff --git a/kernel_tuner/observers/nvml.py b/kernel_tuner/observers/nvml.py index 2a5abd3b0..17fa8b06b 100644 --- a/kernel_tuner/observers/nvml.py +++ b/kernel_tuner/observers/nvml.py @@ -1,8 +1,8 @@ +import re import subprocess import time -import re + import numpy as np -from collections import OrderedDict from kernel_tuner.observers.observer import BenchmarkObserver, ContinuousObserver @@ -13,13 +13,12 @@ class nvml: - """Class that gathers the NVML functionality for one device""" + """Class that gathers the NVML functionality for one device.""" def __init__( self, device_id=0, nvidia_smi_fallback="nvidia-smi", use_locked_clocks=False ): - """Create object to control device using NVML""" - + """Create object to control device using NVML.""" pynvml.nvmlInit() self.dev =
pynvml.nvmlDeviceGetHandleByIndex(device_id) self.id = device_id @@ -94,12 +93,12 @@ def __del__(self): @property def pwr_state(self): - """Get the Device current Power State""" + """Get the Device current Power State.""" return pynvml.nvmlDeviceGetPowerState(self.dev) @property def pwr_limit(self): - """Control the power limit (may require permission), check pwr_constraints for the allowed range""" + """Control the power limit (may require permission), check pwr_constraints for the allowed range.""" return pynvml.nvmlDeviceGetPowerManagementLimit(self.dev) @pwr_limit.setter @@ -127,12 +126,12 @@ def pwr_limit(self, new_limit): @property def persistence_mode(self): - """Control persistence mode (may require permission), 0 for disabled, 1 for enabled""" + """Control persistence mode (may require permission), 0 for disabled, 1 for enabled.""" return self._persistence_mode @persistence_mode.setter def persistence_mode(self, new_mode): - if not new_mode in [0, 1]: + if new_mode not in [0, 1]: raise ValueError( "Illegal value for persistence mode, should be either 0 or 1" ) @@ -140,11 +139,11 @@ def persistence_mode(self, new_mode): self._persistence_mode = pynvml.nvmlDeviceGetPersistenceMode(self.dev) def set_clocks(self, mem_clock, gr_clock): - """Set the memory and graphics clock for this device (may require permission)""" + """Set the memory and graphics clock for this device (may require permission).""" self.modified_clocks = True - if not mem_clock in self.supported_mem_clocks: + if mem_clock not in self.supported_mem_clocks: raise ValueError("Illegal value for memory clock") - if not gr_clock in self.supported_gr_clocks[mem_clock]: + if gr_clock not in self.supported_gr_clocks[mem_clock]: raise ValueError("Graphics clock incompatible with memory clock") if self.use_locked_clocks: try: @@ -183,7 +182,7 @@ def set_clocks(self, mem_clock, gr_clock): subprocess.run(args, check=True) def reset_clocks(self): - """Reset the clocks to the default clock if the device uses 
a non default clock""" + """Reset the clocks to the default clock if the device uses a non default clock.""" if self.use_locked_clocks: try: pynvml.nvmlDeviceResetGpuLockedClocks(self.dev) @@ -222,7 +221,7 @@ def reset_clocks(self): @property def gr_clock(self): - """Control the graphics clock (may require permission), only values compatible with the memory clock can be set directly""" + """Control the graphics clock (may require permission), only values compatible with the memory clock can be set directly.""" return pynvml.nvmlDeviceGetClockInfo(self.dev, pynvml.NVML_CLOCK_GRAPHICS) @gr_clock.setter @@ -239,7 +238,7 @@ def gr_clock(self, new_clock): @property def mem_clock(self): - """Control the memory clock (may require permission), only values compatible with the graphics clock can be set directly""" + """Control the memory clock (may require permission), only values compatible with the graphics clock can be set directly.""" if self.use_locked_clocks: # nvmlDeviceGetClock returns slightly different values than nvmlDeviceGetSupportedMemoryClocks, # therefore set mem_clock to the closest supported value @@ -262,18 +261,18 @@ def mem_clock(self, new_clock): @property def temperature(self): - """Get the GPU temperature""" + """Get the GPU temperature.""" return pynvml.nvmlDeviceGetTemperature(self.dev, pynvml.NVML_TEMPERATURE_GPU) @property def auto_boost(self): - """Control the auto boost setting (may require permission), 0 for disable, 1 for enabled""" + """Control the auto boost setting (may require permission), 0 for disable, 1 for enabled.""" return self._auto_boost @auto_boost.setter def auto_boost(self, setting): # might need to use pynvml.NVML_FEATURE_DISABLED or pynvml.NVML_FEATURE_ENABLED instead of 0 or 1 - if not setting in [0, 1]: + if setting not in [0, 1]: raise ValueError( "Illegal value for auto boost enabled, should be either 0 or 1" ) @@ -281,11 +280,11 @@ def auto_boost(self, setting): self._auto_boost = 
pynvml.nvmlDeviceGetAutoBoostedClocksEnabled(self.dev)[0] def pwr_usage(self): - """Return current power usage in milliwatts""" + """Return current power usage in milliwatts.""" return pynvml.nvmlDeviceGetPowerUsage(self.dev) def gr_voltage(self): - """Return current graphics voltage in millivolts""" + """Return current graphics voltage in millivolts.""" args = ["nvidia-smi", "-i", str(self.id), "-q", "-d", "VOLTAGE"] try: result = subprocess.run(args, check=True, capture_output=True) @@ -296,7 +295,7 @@ def gr_voltage(self): class NVMLObserver(BenchmarkObserver): - """Observer that uses NVML to monitor power, energy, clock frequencies, voltages and temperature + """Observer that uses NVML to monitor power, energy, clock frequencies, voltages and temperature. The NVMLObserver can also be used to tune application-specific clock frequencies or power limits in combination with other parameters. @@ -338,12 +337,7 @@ def __init__( use_locked_clocks=False, continous_duration=1, ): - """ - - Create an NVMLObserver. 
- - - """ + """Create an NVMLObserver.""" if nvidia_smi_fallback: self.nvml = nvml( device, @@ -364,7 +358,7 @@ def __init__( "gr_voltage", ] for obs in observables: - if not obs in supported: + if obs not in supported: raise ValueError(f"Observable {obs} not in supported: {supported}") self.observables = observables @@ -461,7 +455,7 @@ def get_results(self): class NVMLPowerObserver(ContinuousObserver): - """Observer that measures power using NVML and continuous benchmarking""" + """Observer that measures power using NVML and continuous benchmarking.""" def __init__(self, observables, parent, nvml_instance, continous_duration=1): self.parent = parent @@ -534,8 +528,7 @@ def get_results(self): def get_nvml_pwr_limits(device, n=None, quiet=False): - """Get tunable parameter for NVML power limits, n is desired number of values""" - + """Get tunable parameter for NVML power limits, n is desired number of values.""" d = nvml(device) power_limits = d.pwr_constraints power_limit_min = power_limits[0] @@ -544,8 +537,8 @@ def get_nvml_pwr_limits(device, n=None, quiet=False): power_limit_min *= 1e-3 power_limit_max *= 1e-3 power_limit_round = 5 - tune_params = OrderedDict() - if n == None: + tune_params = dict() + if n is None: n = int((power_limit_max - power_limit_min) / power_limit_round) + 1 # Rounded power limit values @@ -561,8 +554,7 @@ def get_nvml_pwr_limits(device, n=None, quiet=False): def get_nvml_gr_clocks(device, n=None, quiet=False): - """Get tunable parameter for NVML graphics clock, n is desired number of values""" - + """Get tunable parameter for NVML graphics clock, n is desired number of values.""" d = nvml(device) mem_clock = max(d.supported_mem_clocks) gr_clocks = d.supported_gr_clocks[mem_clock] @@ -571,7 +563,7 @@ def get_nvml_gr_clocks(device, n=None, quiet=False): indices = np.array(np.ceil(np.linspace(0, len(gr_clocks) - 1, n)), dtype=int) gr_clocks = np.array(gr_clocks)[indices] - tune_params = OrderedDict() + tune_params = dict() 
tune_params["nvml_gr_clock"] = list(gr_clocks) if not quiet: @@ -580,15 +572,14 @@ def get_nvml_gr_clocks(device, n=None, quiet=False): def get_nvml_mem_clocks(device, n=None, quiet=False): - """Get tunable parameter for NVML memory clock, n is desired number of values""" - + """Get tunable parameter for NVML memory clock, n is desired number of values.""" d = nvml(device) mem_clocks = d.supported_mem_clocks if n and len(mem_clocks) > n: mem_clocks = mem_clocks[:: int(len(mem_clocks) / n)] - tune_params = OrderedDict() + tune_params = dict() tune_params["nvml_mem_clock"] = mem_clocks if not quiet: @@ -597,7 +588,7 @@ def get_nvml_mem_clocks(device, n=None, quiet=False): def get_idle_power(device, n=5, sleep_s=0.1): - """Use NVML to measure device idle power consumption""" + """Use NVML to measure device idle power consumption.""" d = nvml(device) readings = [] for _ in range(n): diff --git a/kernel_tuner/runners/sequential.py b/kernel_tuner/runners/sequential.py index 352a8321e..c493a0089 100644 --- a/kernel_tuner/runners/sequential.py +++ b/kernel_tuner/runners/sequential.py @@ -1,20 +1,18 @@ -""" The default runner for sequentially tuning the parameter space """ +"""The default runner for sequentially tuning the parameter space.""" import logging -from collections import OrderedDict from datetime import datetime, timezone from time import perf_counter from kernel_tuner.core import DeviceInterface -from kernel_tuner.util import (ErrorConfig, print_config_output, - process_metrics, store_cache) from kernel_tuner.runners.runner import Runner +from kernel_tuner.util import ErrorConfig, print_config_output, process_metrics, store_cache class SequentialRunner(Runner): - """ SequentialRunner is used for tuning with a single process/thread """ + """SequentialRunner is used for tuning with a single process/thread.""" def __init__(self, kernel_source, kernel_options, device_options, iterations, observers): - """ Instantiate the SequentialRunner + """Instantiate the 
SequentialRunner. :param kernel_source: The kernel source :type kernel_source: kernel_tuner.core.KernelSource @@ -30,7 +28,6 @@ def __init__(self, kernel_source, kernel_options, device_options, iterations, ob each kernel instance. :type iterations: int """ - #detect language and create high-level device interface self.dev = DeviceInterface(kernel_source, iterations=iterations, observers=observers, **device_options) @@ -51,7 +48,7 @@ def get_environment(self, tuning_options): return self.dev.get_environment() def run(self, parameter_space, tuning_options): - """ Iterate through the entire parameter space using a single Python process + """Iterate through the entire parameter space using a single Python process. :param parameter_space: The parameter space as an iterable. :type parameter_space: iterable @@ -71,7 +68,7 @@ def run(self, parameter_space, tuning_options): # iterate over parameter space for element in parameter_space: - params = OrderedDict(zip(tuning_options.tune_params.keys(), element)) + params = dict(zip(tuning_options.tune_params.keys(), element)) result = None warmup_time = 0 diff --git a/kernel_tuner/searchspace.py b/kernel_tuner/searchspace.py index 68bebf672..2f085adbe 100644 --- a/kernel_tuner/searchspace.py +++ b/kernel_tuner/searchspace.py @@ -1,18 +1,32 @@ +from __future__ import annotations + +import ast +import re +from pathlib import Path from random import choice, shuffle -from typing import Tuple, List +from typing import List -from constraint import Problem, Constraint, FunctionConstraint import numpy as np +from constraint import ( + BacktrackingSolver, + Constraint, + FunctionConstraint, + MaxProdConstraint, + MinConflictsSolver, + OptimizedBacktrackingSolver, + Problem, + RecursiveBacktrackingSolver, + Solver, +) -from kernel_tuner.util import default_block_size_names from kernel_tuner.util import check_restrictions as check_instance_restrictions -from kernel_tuner.util import MaxProdConstraint +from kernel_tuner.util import 
compile_restrictions, default_block_size_names supported_neighbor_methods = ["strictly-adjacent", "adjacent", "Hamming"] class Searchspace: - """Class that offers the search space to strategies""" + """Class that provides the search space to strategies.""" def __init__( self, @@ -22,34 +36,71 @@ def __init__( block_size_names=default_block_size_names, build_neighbors_index=False, neighbor_method=None, + framework="PythonConstraint", + solver_method="PC_OptimizedBacktrackingSolver", + path_to_ATF_cache: Path = None, ) -> None: """Build a searchspace using the variables and constraints. + Optionally build the neighbors index - only faster if you repeatedly look up neighbors. Methods: strictly-adjacent: differs +1 or -1 parameter index value for each parameter adjacent: picks closest parameter value in both directions for each parameter Hamming: any parameter config with 1 different parameter value is a neighbor Optionally sort the searchspace by the order in which the parameter values were specified. By default, sort goes from first to last parameter, to reverse this use sort_last_param_first. 
""" + # set the object attributes using the arguments + restrictions = restrictions if restrictions is not None else [] self.tune_params = tune_params self.restrictions = restrictions self.param_names = list(self.tune_params.keys()) - self.params_values = tuple( - tuple(param_vals) for param_vals in self.tune_params.values() - ) + self.params_values = tuple(tuple(param_vals) for param_vals in self.tune_params.values()) self.params_values_indices = None self.build_neighbors_index = build_neighbors_index self.__neighbor_cache = dict() self.neighbor_method = neighbor_method + if (neighbor_method is not None or build_neighbors_index) and neighbor_method not in supported_neighbor_methods: + raise ValueError(f"Neighbor method is {neighbor_method}, must be one of {supported_neighbor_methods}") + + # if there are strings in the restrictions, parse them to split constraints or functions (improves solver performance) + restrictions = [restrictions] if not isinstance(restrictions, list) else restrictions if ( - neighbor_method is not None or build_neighbors_index - ) and neighbor_method not in supported_neighbor_methods: - raise ValueError( - f"Neighbor method is {neighbor_method}, must be one of {supported_neighbor_methods}" + len(restrictions) > 0 + and any(isinstance(restriction, str) for restriction in restrictions) + and not (framework.lower() == "pysmt" or framework.lower() == "bruteforce") + ): + self.restrictions = compile_restrictions( + restrictions, tune_params, monolithic=False, try_to_constraint=framework.lower() == "pythonconstraint" ) - self.list, self.__numpy, self.__dict, self.size = self.__build_searchspace( - block_size_names, max_threads - ) + # get the framework given the framework argument + if framework.lower() == "pythonconstraint": + searchspace_builder = self.__build_searchspace + elif framework.lower() == "pysmt": + searchspace_builder = self.__build_searchspace_pysmt + elif framework.lower() == "atf_cache": + searchspace_builder = 
self.__build_searchspace_ATF_cache + self.path_to_ATF_cache = path_to_ATF_cache + elif framework.lower() == "bruteforce": + searchspace_builder = self.__build_searchspace_bruteforce + else: + raise ValueError(f"Invalid framework parameter {framework}") + + # get the solver given the solver method argument + solver = "" + if solver_method.lower() == "pc_backtrackingsolver": + solver = BacktrackingSolver() + elif solver_method.lower() == "pc_optimizedbacktrackingsolver": + solver = OptimizedBacktrackingSolver(forwardcheck=False) + elif solver_method.lower() == "pc_recursivebacktrackingsolver": + solver = RecursiveBacktrackingSolver() + elif solver_method.lower() == "pc_minconflictssolver": + solver = MinConflictsSolver() + else: + raise ValueError(f"Solver method {solver_method} not recognized.") + + # build the search space + self.list, self.__dict, self.size = searchspace_builder(block_size_names, max_threads, solver) + self.__numpy = None self.num_params = len(self.tune_params) self.indices = np.arange(self.size) if neighbor_method is not None and neighbor_method != "Hamming": @@ -57,100 +108,352 @@ def __init__( if build_neighbors_index: self.neighbors_index = self.__build_neighbors_index(neighbor_method) - def __build_searchspace( - self, block_size_names: list, max_threads: int - ) -> Tuple[List[tuple], np.ndarray, dict, int]: - """compute valid configurations in a search space based on restrictions and max_threads, returns the searchspace, a dict of the searchspace for fast lookups and the size""" + # def __build_searchspace_ortools(self, block_size_names: list, max_threads: int) -> Tuple[List[tuple], np.ndarray, dict, int]: + # # Based on https://developers.google.com/optimization/cp/cp_solver#python_2 + # from ortools.sat.python import cp_model - # instantiate the parameter space with all the variables - parameter_space = Problem() - for param_name, param_values in self.tune_params.items(): - parameter_space.addVariable(param_name, param_values) + # # 
instantiate the parameter space with all the variables + # parameter_space = cp_model.CpModel() + # for param_name, param_values in self.tune_params.items(): + # parameter_space.NewIntervalVar(min(param_values), ) + # parameter_space.addVariable(param_name, param_values) - # add the user-specified restrictions as constraints on the parameter space - parameter_space = self.__add_restrictions(parameter_space) + # def __build_searchspace_cpmpy(): + # # Based on examples in https://github.com/CPMpy/cpmpy/blob/master/examples/nqueens_1000.ipynb + # # possible solution for interrupted ranges with 'conso' in https://github.com/CPMpy/cpmpy/blob/master/examples/mario.py + # import cpmpy - # add the default blocksize threads restrictions last, because it is unlikely to reduce the parameter space by much - valid_block_size_names = list( - block_size_name - for block_size_name in block_size_names - if block_size_name in self.param_names - ) - if len(valid_block_size_names) > 0: - parameter_space.addConstraint( - MaxProdConstraint(max_threads), valid_block_size_names - ) + # cpmpy.intvar() - # construct the parameter space with the constraints applied - parameter_space = parameter_space.getSolutions() + # def __build_searchspace_pycsp(self, block_size_names: list, max_threads: int): + # import pycsp3 as csp - # form the parameter tuples in the order specified by tune_params.keys() - parameter_space_list = list( - (tuple(params[param_name] for param_name in self.param_names)) - for params in parameter_space - ) + # # instantiate the parameter space with all the variables + # vars_and_constraints = list() + # for param_name, param_values in self.tune_params.items(): + # var = csp.Var(param_values, id=param_name) + # vars_and_constraints.append(var) - # create a numpy array of the search space - # in order to have the tuples as tuples in numpy, the types are set with a string, but this will make the type np.void - # type_string = ",".join(list(type(param).__name__ 
for param in parameter_space_list[0])) - parameter_space_numpy = np.array(parameter_space_list) + # # construct the parameter space with the constraints applied + # csp.satisfy(*vars_and_constraints) - # create a dictionary with the hashed parameter configurations as keys and indices as values for fast lookups - parameter_space_dict = dict( - zip(parameter_space_list, list(range(parameter_space_numpy.size))) + # # solve for all configurations to get the feasible region + # if csp.solve(sols=csp.ALL) is csp.SAT: + # num_solutions: int = csp.n_solutions() # number of solutions + # solutions = [csp.values(sol=i) for i in range(num_solutions)] # list of solutions + + def __build_searchspace_bruteforce(self, block_size_names: list, max_threads: int, solver = None): + # bruteforce solving of the searchspace + + from itertools import product + + from kernel_tuner.util import check_restrictions + + tune_params = self.tune_params + restrictions = self.restrictions + + # compute cartesian product of all tunable parameters + parameter_space = product(*tune_params.values()) + + # check if there are block sizes in the parameters, if so add default restrictions + used_block_size_names = list( + block_size_name for block_size_name in default_block_size_names if block_size_name in tune_params ) + if len(used_block_size_names) > 0: + if not isinstance(restrictions, list): + restrictions = [restrictions] + block_size_restriction_spaced = f"{' * '.join(used_block_size_names)} <= {max_threads}" + block_size_restriction_unspaced = f"{'*'.join(used_block_size_names)} <= {max_threads}" + if block_size_restriction_spaced not in restrictions and block_size_restriction_unspaced not in restrictions: + restrictions.append(block_size_restriction_spaced) + + # check for search space restrictions + if restrictions is not None: + parameter_space = filter( + lambda p: check_restrictions(restrictions, dict(zip(tune_params.keys(), p)), False), parameter_space + ) - # check for duplicates - size_list 
= len(parameter_space_list) - size_dict = len(parameter_space_dict.keys()) - if size_list != size_dict: + # evaluate to a list + parameter_space = list(parameter_space) + + # return the results + return self.__parameter_space_list_to_lookup_and_return_type(parameter_space) + + def __build_searchspace_pysmt(self, block_size_names: list, max_threads: int, solver: Solver): + # PySMT imports + from pysmt.oracles import get_logic + from pysmt.shortcuts import And, Equals, EqualsOrIff, Not, Or, Real, Symbol + from pysmt.shortcuts import Solver as PySMTSolver + from pysmt.typing import REAL + + tune_params = self.tune_params + restrictions = self.restrictions + + # TODO implement block_size_names, max_threads + + def all_smt(formula, keys) -> list: + target_logic = get_logic(formula) + partial_models = list() + with PySMTSolver(logic=target_logic) as solver: + solver.add_assertion(formula) + while solver.solve(): + partial_model = [EqualsOrIff(k, solver.get_value(k)) for k in keys] + assertion = Not(And(partial_model)) + solver.add_assertion(assertion) + partial_models.append(partial_model) + return partial_models + + # setup each tunable parameter + symbols = dict([(v, Symbol(v, REAL)) for v in tune_params.keys()]) + # symbols = [Symbol(v, REAL) for v in tune_params.keys()] + + # for each tunable parameter, set the list of allowed values + domains = list() + for tune_param_key, tune_param_values in tune_params.items(): + domain = Or([Equals(symbols[tune_param_key], Real(float(val))) for val in tune_param_values]) + domains.append(domain) + domains = And(domains) + + # add the restrictions + problem = self.__parse_restrictions_pysmt(restrictions, tune_params, symbols) + + # combine the domain and restrictions + formula = And(domains, problem) + + # get all solutions + keys = list(symbols.values()) + all_solutions = all_smt(formula, keys) + + # get the values for the parameters + parameter_space_list = list() + for solution in all_solutions: + sol_dict = dict() + for param 
in solution: + param = str(param.serialize()).replace("(", "").replace(")", "") + key, value = param.split(" = ") + try: + value = ast.literal_eval(value) + except ValueError: + try: + value = eval(value) + except NameError: + pass + sol_dict[key] = value + parameter_space_list.append(tuple(sol_dict[param_name] for param_name in list(tune_params.keys()))) + + return self.__parameter_space_list_to_lookup_and_return_type(parameter_space_list) + + def __build_searchspace_ATF_cache(self, block_size_names: list, max_threads: int, solver: Solver): + """Imports the valid configurations from an ATF CSV file, returns the searchspace, a dict of the searchspace for fast lookups and the size.""" + if block_size_names != default_block_size_names or max_threads != 1024: raise ValueError( - f"{size_list - size_dict} duplicate parameter configurations in the searchspace, this should not happen" + "It is not possible to change 'block_size_names' or 'max_threads' here, because at this point ATF has already run." ) - + import pandas as pd + + try: + df = pd.read_csv(self.path_to_ATF_cache, sep=";") + list_of_tuples_of_parameters = list(zip(*(df[column] for column in self.param_names))) + except pd.errors.EmptyDataError: + list_of_tuples_of_parameters = list() + return self.__parameter_space_list_to_lookup_and_return_type(list_of_tuples_of_parameters) + + def __parameter_space_list_to_lookup_and_return_type( + self, parameter_space_list: list[tuple], validate=True + ) -> tuple[list[tuple], dict[tuple, int], int]: + """Returns a tuple of the searchspace as a list of tuples, a dict of the searchspace for fast lookups and the size.""" + parameter_space_dict = dict(zip(parameter_space_list, range(len(parameter_space_list)))) + if validate: + # check for duplicates + size_list = len(parameter_space_list) + size_dict = len(parameter_space_dict.keys()) + if size_list != size_dict: + raise ValueError( + f"{size_list - size_dict} duplicate parameter configurations in the searchspace, this
should not happen." + ) return ( parameter_space_list, - parameter_space_numpy, parameter_space_dict, size_list, ) + def __build_searchspace(self, block_size_names: list, max_threads: int, solver: Solver): + """Compute valid configurations in a search space based on restrictions and max_threads.""" + # instantiate the parameter space with all the variables + parameter_space = Problem(solver=solver) + for param_name, param_values in self.tune_params.items(): + parameter_space.addVariable(str(param_name), param_values) + + # add the user-specified restrictions as constraints on the parameter space + parameter_space = self.__add_restrictions(parameter_space) + + # add the default blocksize threads restrictions last, because it is unlikely to reduce the parameter space by much + valid_block_size_names = list( + block_size_name for block_size_name in block_size_names if block_size_name in self.param_names + ) + if len(valid_block_size_names) > 0: + parameter_space.addConstraint(MaxProdConstraint(max_threads), valid_block_size_names) + + # construct the parameter space with the constraints applied + return parameter_space.getSolutionsAsListDict(order=self.param_names) + def __add_restrictions(self, parameter_space: Problem) -> Problem: - """add the user-specified restrictions as constraints on the parameter space""" + """Add the user-specified restrictions as constraints on the parameter space.""" if isinstance(self.restrictions, list): for restriction in self.restrictions: + required_params = self.param_names + if isinstance(restriction, tuple): + restriction, required_params = restriction if callable(restriction) and not isinstance(restriction, Constraint): restriction = FunctionConstraint(restriction) if isinstance(restriction, FunctionConstraint): - parameter_space.addConstraint(restriction, self.param_names) + parameter_space.addConstraint(restriction, required_params) elif isinstance(restriction, Constraint): - parameter_space.addConstraint(restriction) + 
all_params_required = all(param_name in required_params for param_name in self.param_names) + parameter_space.addConstraint( + restriction, + None if all_params_required else required_params + ) else: raise ValueError(f"Unrecognized restriction {restriction}") # if the restrictions are the old monolithic function, apply them directly (only for backwards compatibility, likely slower than well-specified constraints!) elif callable(self.restrictions): - restrictions_wrapper = lambda *args: check_instance_restrictions( - self.restrictions, dict(zip(self.param_names, args)), False - ) - parameter_space.addConstraint(restrictions_wrapper, self.param_names) + + def restrictions_wrapper(*args): + return check_instance_restrictions(self.restrictions, dict(zip(self.param_names, args)), False) + + parameter_space.addConstraint(FunctionConstraint(restrictions_wrapper), self.param_names) elif self.restrictions is not None: - raise ValueError( - f"The restrictions are of unsupported type {type(self.restrictions)}" - ) + raise ValueError(f"The restrictions are of unsupported type {type(self.restrictions)}") return parameter_space + def __parse_restrictions_pysmt(self, restrictions: list, tune_params: dict, symbols: dict): + """Parses restrictions from a list of strings into PySMT compatible restrictions.""" + from pysmt.shortcuts import ( + GE, + GT, + LE, + LT, + And, + Bool, + Div, + Equals, + Int, + Minus, + Or, + Plus, + Pow, + Real, + String, + Times, + ) + + regex_match_variable = r"([a-zA-Z_$][a-zA-Z_$0-9]*)" + + boolean_comparison_mapping = { + "==": Equals, + "<": LT, + "<=": LE, + ">=": GE, + ">": GT, + "&&": And, + "||": Or, + } + + operators_mapping = {"+": Plus, "-": Minus, "*": Times, "/": Div, "^": Pow} + + constant_init_mapping = { + "int": Int, + "float": Real, + "str": String, + "bool": Bool, + } + + def replace_params(match_object): + key = match_object.group(1) + if key in tune_params: + return 'params["' + key + '"]' + else: + return key + + # rewrite the 
restrictions so variables are singled out + parsed = [re.sub(regex_match_variable, replace_params, res) for res in restrictions] + # ensure no duplicates are in the list + parsed = list(set(parsed)) + # replace ' or ' and ' and ' with ' || ' and ' && ' + parsed = list(r.replace(" or ", " || ").replace(" and ", " && ") for r in parsed) + + # compile each restriction by replacing parameters and operators with their PySMT equivalent + compiled_restrictions = list() + for parsed_restriction in parsed: + words = parsed_restriction.split(" ") + + # make a forward pass over all the words to organize and substitute + add_next_var_or_constant = False + var_or_constant_backlog = list() + operator_backlog = list() + operator_backlog_left_right = list() + boolean_backlog = list() + for word in words: + if word.startswith("params["): + # if variable + varname = word.replace('params["', "").replace('"]', "") + var = symbols[varname] + var_or_constant_backlog.append(var) + elif word in boolean_comparison_mapping: + # if comparator + boolean_backlog.append(boolean_comparison_mapping[word]) + continue + elif word in operators_mapping: + # if operator + operator_backlog.append(operators_mapping[word]) + add_next_var_or_constant = True + continue + else: + # if constant: evaluate to check if it is an integer, float, etc. If not, treat it as a string. 
+ try: + constant = ast.literal_eval(word) + except ValueError: + constant = word + # convert from Python type to PySMT equivalent + type_instance = constant_init_mapping[type(constant).__name__] + var_or_constant_backlog.append(type_instance(constant)) + if add_next_var_or_constant: + right, left = var_or_constant_backlog.pop(-1), var_or_constant_backlog.pop(-1) + operator_backlog_left_right.append((left, right, len(var_or_constant_backlog))) + add_next_var_or_constant = False + # reserve an empty spot for the combined operation to preserve the order + var_or_constant_backlog.append(None) + + # for each of the operators, instantiate them with variables or constants + for i, operator in enumerate(operator_backlog): + # merges the first two symbols in the backlog into one + left, right, new_index = operator_backlog_left_right[i] + assert ( + var_or_constant_backlog[new_index] is None + ) # make sure that this is a reserved spot to avoid changing the order + var_or_constant_backlog[new_index] = operator(left, right) + + # for each of the booleans, instantiate them with variables or constants + compiled = list() + assert len(boolean_backlog) <= 1, "Max. one boolean operator per restriction." + for boolean in boolean_backlog: + left, right = var_or_constant_backlog.pop(0), var_or_constant_backlog.pop(0) + compiled.append(boolean(left, right)) + + # add the restriction to the list of restrictions + compiled_restrictions.append(compiled[0]) + + return And(compiled_restrictions) + def sorted_list(self, sort_last_param_first=False): - """returns list of parameter configs sorted based on the order in which the parameter values were specified + """Returns list of parameter configs sorted based on the order in which the parameter values were specified. 
:param sort_last_param_first: By default, sort goes from first to last parameter, to reverse this use sort_last_param_first """ - params_values_indices = list( - self.get_param_indices(param_config) for param_config in self.list - ) - params_values_indices_dict = dict( - zip(params_values_indices, list(range(len(params_values_indices)))) - ) + params_values_indices = list(self.get_param_indices(param_config) for param_config in self.list) + params_values_indices_dict = dict(zip(params_values_indices, list(range(len(params_values_indices))))) # Python's built-in sort will sort starting in front, so if we want to vary the first parameter the tuple needs to be reversed if sort_last_param_first: @@ -160,76 +463,77 @@ def sorted_list(self, sort_last_param_first=False): # find the index of the parameter configuration for each parameter value index, using a dict to do it in constant time new_order = [ - params_values_indices_dict.get(param_values_indices) - for param_values_indices in params_values_indices + params_values_indices_dict.get(param_values_indices) for param_values_indices in params_values_indices ] # apply the new order return [self.list[i] for i in new_order] def is_param_config_valid(self, param_config: tuple) -> bool: - """returns whether the parameter config is valid (i.e. is in the searchspace after restrictions)""" + """Returns whether the parameter config is valid (i.e. is in the searchspace after restrictions).""" return self.get_param_config_index(param_config) is not None def get_list_dict(self) -> dict: - """get the internal dictionary""" + """Get the internal dictionary.""" return self.__dict + def get_list_numpy(self) -> np.ndarray: + """Get the parameter space list as a NumPy array. Initializes the NumPy array if not yet done. + + Returns: + the NumPy array. 
+ """ + if self.__numpy is None: + # create a numpy array of the search space + # in order to have the tuples as tuples in numpy, the types are set with a string, but this will make the type np.void + # type_string = ",".join(list(type(param).__name__ for param in parameter_space_list[0])) + self.__numpy = np.array(self.list) + return self.__numpy + def get_param_indices(self, param_config: tuple) -> tuple: - """for each parameter value in the param config, find the index in the tunable parameters""" - return tuple( - self.params_values[index].index(param_value) - for index, param_value in enumerate(param_config) - ) + """For each parameter value in the param config, find the index in the tunable parameters.""" + return tuple(self.params_values[index].index(param_value) for index, param_value in enumerate(param_config)) def get_param_configs_at_indices(self, indices: List[int]) -> List[tuple]: - """Get the param configs at the given indices""" + """Get the param configs at the given indices.""" # map(get) is ~40% faster than numpy[indices] (average based on six searchspaces with 10000, 100000 and 1000000 configs and 10 or 100 random indices) return list(map(self.list.__getitem__, indices)) def get_param_config_index(self, param_config: tuple): - """Lookup the index for a parameter configuration, returns None if not found""" + """Lookup the index for a parameter configuration, returns None if not found.""" # constant time O(1) access - much faster than any other method, but needs a shadow dict of the search space return self.__dict.get(param_config, None) def __prepare_neighbors_index(self): - """prepare by calculating the indices for the individual parameters""" - self.params_values_indices = np.array( - list(self.get_param_indices(param_config) for param_config in self.list) - ) + """Prepare by calculating the indices for the individual parameters.""" + self.params_values_indices = np.array(list(self.get_param_indices(param_config) for param_config in self.list)) 
def __get_neighbors_indices_hamming(self, param_config: tuple) -> List[int]: - """get the neighbors using Hamming distance from the parameter configuration""" - num_matching_params = np.count_nonzero(self.__numpy == param_config, -1) + """Get the neighbors using Hamming distance from the parameter configuration.""" + num_matching_params = np.count_nonzero(self.get_list_numpy() == param_config, -1) matching_indices = (num_matching_params == self.num_params - 1).nonzero()[0] return matching_indices def __get_neighbors_indices_strictlyadjacent( self, param_config_index: int = None, param_config: tuple = None ) -> List[int]: - """get the neighbors using strictly adjacent distance from the parameter configuration (parameter index absolute difference == 1)""" + """Get the neighbors using strictly adjacent distance from the parameter configuration (parameter index absolute difference == 1).""" param_config_value_indices = ( self.get_param_indices(param_config) if param_config_index is None else self.params_values_indices[param_config_index] ) # calculate the absolute difference between the parameter value indices - abs_index_difference = np.abs( - self.params_values_indices - param_config_value_indices - ) + abs_index_difference = np.abs(self.params_values_indices - param_config_value_indices) # get the param config indices where the difference is one or less for each position matching_indices = (np.max(abs_index_difference, axis=1) <= 1).nonzero()[0] # as the selected param config does not differ anywhere, remove it from the matches if param_config_index is not None: - matching_indices = np.setdiff1d( - matching_indices, [param_config_index], assume_unique=False - ) + matching_indices = np.setdiff1d(matching_indices, [param_config_index], assume_unique=False) return matching_indices - def __get_neighbors_indices_adjacent( - self, param_config_index: int = None, param_config: tuple = None - ) -> List[int]: - """get the neighbors using adjacent distance from the parameter 
configuration (parameter index absolute difference >= 1)""" + def __get_neighbors_indices_adjacent(self, param_config_index: int = None, param_config: tuple = None) -> List[int]: + """Get the neighbors using adjacent distance from the parameter configuration (parameter index absolute difference >= 1).""" param_config_value_indices = ( self.get_param_indices(param_config) if param_config_index is None @@ -243,54 +547,39 @@ def __get_neighbors_indices_adjacent( # np.PINF has been replaced by 1e12 here, as on some systems np.PINF becomes np.NINF upper_bound = tuple( np.min( - index_difference_transposed[p][ - (index_difference_transposed[p] > 0).nonzero() - ], + index_difference_transposed[p][(index_difference_transposed[p] > 0).nonzero()], initial=1e12, ) for p in range(self.num_params) ) lower_bound = tuple( np.max( - index_difference_transposed[p][ - (index_difference_transposed[p] < 0).nonzero() - ], + index_difference_transposed[p][(index_difference_transposed[p] < 0).nonzero()], initial=-1e12, ) for p in range(self.num_params) ) # return the indices where each parameter is within bounds matching_indices = ( - np.logical_and( - index_difference <= upper_bound, index_difference >= lower_bound - ) - .all(axis=1) - .nonzero()[0] + np.logical_and(index_difference <= upper_bound, index_difference >= lower_bound).all(axis=1).nonzero()[0] ) # as the selected param config does not differ anywhere, remove it from the matches if param_config_index is not None: - matching_indices = np.setdiff1d( - matching_indices, [param_config_index], assume_unique=False - ) + matching_indices = np.setdiff1d(matching_indices, [param_config_index], assume_unique=False) return matching_indices def __build_neighbors_index(self, neighbor_method) -> List[List[int]]: - """build an index of the neighbors for each parameter configuration""" + """Build an index of the neighbors for each parameter configuration.""" # for Hamming no preparation is necessary, find the neighboring parameter
configurations if neighbor_method == "Hamming": - return list( - self.__get_neighbors_indices_hamming(param_config) - for param_config in self.list - ) + return list(self.__get_neighbors_indices_hamming(param_config) for param_config in self.list) # for each parameter configuration, find the neighboring parameter configurations if self.params_values_indices is None: self.__prepare_neighbors_index() if neighbor_method == "strictly-adjacent": return list( - self.__get_neighbors_indices_strictlyadjacent( - param_config_index, param_config - ) + self.__get_neighbors_indices_strictlyadjacent(param_config_index, param_config) for param_config_index, param_config in enumerate(self.list) ) @@ -300,12 +589,10 @@ def __build_neighbors_index(self, neighbor_method) -> List[List[int]]: for param_config_index, param_config in enumerate(self.list) ) - raise NotImplementedError( - f"The neighbor method {neighbor_method} is not implemented" - ) + raise NotImplementedError(f"The neighbor method {neighbor_method} is not implemented") def get_random_sample_indices(self, num_samples: int) -> np.ndarray: - """Get the list indices for a random, non-conflicting sample""" + """Get the list indices for a random, non-conflicting sample.""" if num_samples > self.size: raise ValueError( f"The number of samples requested ({num_samples}) is greater than the searchspace size ({self.size})" @@ -313,15 +600,11 @@ def get_random_sample_indices(self, num_samples: int) -> np.ndarray: return np.random.choice(self.indices, size=num_samples, replace=False) def get_random_sample(self, num_samples: int) -> List[tuple]: - """Get the parameter configurations for a random, non-conflicting sample (caution: not unique in consecutive calls)""" - return self.get_param_configs_at_indices( - self.get_random_sample_indices(num_samples) - ) + """Get the parameter configurations for a random, non-conflicting sample (caution: not unique in consecutive calls).""" + return 
self.get_param_configs_at_indices(self.get_random_sample_indices(num_samples)) - def get_neighbors_indices_no_cache( - self, param_config: tuple, neighbor_method=None - ) -> List[int]: - """Get the neighbors indices for a parameter configuration (does not check running cache, useful when mixing neighbor methods)""" + def get_neighbors_indices_no_cache(self, param_config: tuple, neighbor_method=None) -> List[int]: + """Get the neighbors indices for a parameter configuration (does not check running cache, useful when mixing neighbor methods).""" param_config_index = self.get_param_config_index(param_config) # this is the simplest case, just return the cached value @@ -335,9 +618,7 @@ def get_neighbors_indices_no_cache( # check if there is a neighbor method to use if neighbor_method is None: if self.neighbor_method is None: - raise ValueError( - "Neither the neighbor_method argument nor self.neighbor_method was set" - ) + raise ValueError("Neither the neighbor_method argument nor self.neighbor_method was set") neighbor_method = self.neighbor_method if neighbor_method == "Hamming": @@ -349,33 +630,21 @@ def get_neighbors_indices_no_cache( # if the passed param_config is fictitious, we cannot use the pre-calculated neighbors index if neighbor_method == "strictly-adjacent": - return self.__get_neighbors_indices_strictlyadjacent( - param_config_index, param_config - ) + return self.__get_neighbors_indices_strictlyadjacent(param_config_index, param_config) if neighbor_method == "adjacent": - return self.__get_neighbors_indices_adjacent( - param_config_index, param_config - ) - raise ValueError( - f"The neighbor method {neighbor_method} is not in {supported_neighbor_methods}" - ) + return self.__get_neighbors_indices_adjacent(param_config_index, param_config) + raise ValueError(f"The neighbor method {neighbor_method} is not in {supported_neighbor_methods}") - def get_neighbors_indices( - self, param_config: tuple, neighbor_method=None - ) -> List[int]: - """Get the neighbors
indices for a parameter configuration, possibly cached""" + def get_neighbors_indices(self, param_config: tuple, neighbor_method=None) -> List[int]: + """Get the neighbors indices for a parameter configuration, possibly cached.""" neighbors = self.__neighbor_cache.get(param_config, None) # if there are no cached neighbors, compute them if neighbors is None: - neighbors = self.get_neighbors_indices_no_cache( - param_config, neighbor_method - ) + neighbors = self.get_neighbors_indices_no_cache(param_config, neighbor_method) self.__neighbor_cache[param_config] = neighbors # if the neighbors were cached but the specified neighbor method was different than the one initially used to build the cache, throw an error elif ( - self.neighbor_method is not None - and neighbor_method is not None - and self.neighbor_method != neighbor_method + self.neighbor_method is not None and neighbor_method is not None and self.neighbor_method != neighbor_method ): raise ValueError( f"The neighbor method {neighbor_method} differs from the initially set {self.neighbor_method}, cannot use cached neighbors. Use 'get_neighbors_no_cache()' when mixing neighbor methods to avoid this."
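The vectorized Hamming-distance lookup behind `__get_neighbors_indices_hamming` can be sketched in isolation, which may help when reviewing this hunk. This is a minimal sketch: the helper name and the toy configuration array are made up for illustration, not part of Kernel Tuner.

```python
import numpy as np

def hamming_neighbor_indices(configs: np.ndarray, config: tuple) -> np.ndarray:
    # Count matching parameter values per configuration: a Hamming
    # neighbor matches on all but exactly one of the parameters.
    num_matching = np.count_nonzero(configs == np.asarray(config), axis=-1)
    return (num_matching == configs.shape[1] - 1).nonzero()[0]

# Toy 2x2 search space: the neighbors of (1, 1) are (1, 2) and (2, 1)
configs = np.array([(1, 1), (1, 2), (2, 1), (2, 2)])
print(hamming_neighbor_indices(configs, (1, 1)))  # -> [1 2]
```

The single broadcasted comparison plus `count_nonzero` is what makes the NumPy-backed search space list worthwhile here: no Python-level loop over configurations is needed.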
@@ -383,27 +652,19 @@ def get_neighbors_indices( return neighbors def are_neighbors_indices_cached(self, param_config: tuple) -> bool: - """Returns true if the neighbor indices are in the cache, false otherwise""" + """Returns true if the neighbor indices are in the cache, false otherwise.""" return param_config in self.__neighbor_cache - def get_neighbors_no_cache( - self, param_config: tuple, neighbor_method=None - ) -> List[tuple]: - """Get the neighbors for a parameter configuration (does not check running cache, useful when mixing neighbor methods)""" - return self.get_param_configs_at_indices( - self.get_neighbors_indices_no_cache(param_config, neighbor_method) - ) + def get_neighbors_no_cache(self, param_config: tuple, neighbor_method=None) -> List[tuple]: + """Get the neighbors for a parameter configuration (does not check running cache, useful when mixing neighbor methods).""" + return self.get_param_configs_at_indices(self.get_neighbors_indices_no_cache(param_config, neighbor_method)) def get_neighbors(self, param_config: tuple, neighbor_method=None) -> List[tuple]: - """Get the neighbors for a parameter configuration""" - return self.get_param_configs_at_indices( - self.get_neighbors_indices(param_config, neighbor_method) - ) + """Get the neighbors for a parameter configuration.""" + return self.get_param_configs_at_indices(self.get_neighbors_indices(param_config, neighbor_method)) - def get_param_neighbors( - self, param_config: tuple, index: int, neighbor_method: str, randomize: bool - ) -> list: - """Get the neighboring parameters at an index""" + def get_param_neighbors(self, param_config: tuple, index: int, neighbor_method: str, randomize: bool) -> list: + """Get the neighboring parameters at an index.""" original_value = param_config[index] params = list( set( @@ -426,9 +687,7 @@ def order_param_configs( ) for i in range(self.num_params): if i not in order: - raise ValueError( - f"order needs to be a list of the parameter indices, but index {i} is 
missing" - ) + raise ValueError(f"order needs to be a list of the parameter indices, but index {i} is missing") # choose the comparison basis and add it as the first in the order base_comparison = choice(param_configs) diff --git a/kernel_tuner/strategies/basinhopping.py b/kernel_tuner/strategies/basinhopping.py index 7c591b63a..20e800f6e 100644 --- a/kernel_tuner/strategies/basinhopping.py +++ b/kernel_tuner/strategies/basinhopping.py @@ -1,23 +1,17 @@ -""" The strategy that uses the basinhopping global optimization method """ -from collections import OrderedDict - +"""The strategy that uses the basinhopping global optimization method.""" import scipy.optimize + from kernel_tuner import util from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies import common -from kernel_tuner.strategies.common import (CostFunc, - setup_method_arguments, - setup_method_options) +from kernel_tuner.strategies.common import CostFunc, setup_method_arguments, setup_method_options supported_methods = ["Nelder-Mead", "Powell", "CG", "BFGS", "L-BFGS-B", "TNC", "COBYLA", "SLSQP"] -_options = OrderedDict(method=(f"Local optimization algorithm to use, choose any from {supported_methods}", "L-BFGS-B"), +_options = dict(method=(f"Local optimization algorithm to use, choose any from {supported_methods}", "L-BFGS-B"), T=("Temperature parameter for the accept or reject criterion", 1.0)) def tune(searchspace: Searchspace, runner, tuning_options): - - results = [] - method, T = common.get_options(tuning_options.strategy_options, _options) # scale variables in x to make 'eps' relevant for multiple variables diff --git a/kernel_tuner/strategies/bayes_opt.py b/kernel_tuner/strategies/bayes_opt.py index 17a0905f5..44849097f 100644 --- a/kernel_tuner/strategies/bayes_opt.py +++ b/kernel_tuner/strategies/bayes_opt.py @@ -1,4 +1,4 @@ -""" Bayesian Optimization implementation from the thesis by Willemsen """ +"""Bayesian Optimization implementation from the thesis by Willemsen.""" 
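The two `_options` of the basinhopping strategy (`method` and `T`) map directly onto SciPy's `scipy.optimize.basinhopping` API. A minimal sketch, assuming a toy quadratic cost function in place of Kernel Tuner's `CostFunc` (the function and bounds here are made up for illustration):

```python
import numpy as np
import scipy.optimize

# Stand-in for CostFunc: variables are scaled to [0, 1] so that the
# local minimizer's default 'eps' step is meaningful for every dimension.
def cost(x):
    return float(np.sum((x - 0.3) ** 2))

res = scipy.optimize.basinhopping(
    cost,
    x0=np.array([0.5, 0.5]),
    T=1.0,  # the strategy's 'T' option: accept/reject temperature
    minimizer_kwargs={
        "method": "L-BFGS-B",  # the strategy's 'method' option
        "bounds": [(0.0, 1.0), (0.0, 1.0)],
    },
)
```

`T` only affects the Metropolis accept/reject step between hops; the local `method` does the actual descent, which is why the strategy exposes both independently.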
import itertools import time import warnings @@ -8,27 +8,27 @@ import numpy as np from scipy.stats import norm +from scipy.stats.qmc import LatinHypercube # BO imports from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies.common import CostFunc try: - from sklearn.exceptions import ConvergenceWarning from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.gaussian_process.kernels import RBF, ConstantKernel, Matern - from skopt.sampler import Lhs + bayes_opt_present = True except ImportError: bayes_opt_present = False from kernel_tuner import util -from kernel_tuner.strategies import common supported_methods = ["poi", "ei", "lcb", "lcb-srinivas", "multi", "multi-advanced", "multi-fast"] + def generate_normalized_param_dicts(tune_params: dict, eps: float) -> Tuple[dict, dict]: - """ Generates normalization and denormalization dictionaries """ + """Generates normalization and denormalization dictionaries.""" original_to_normalized = dict() normalized_to_original = dict() for param_name in tune_params.keys(): @@ -44,14 +44,14 @@ def generate_normalized_param_dicts(tune_params: dict, eps: float) -> Tuple[dict def normalize_parameter_space(param_space: list, tune_params: dict, normalized: dict) -> list: - """ Normalize the parameter space given a normalization dictionary """ + """Normalize the parameter space given a normalization dictionary.""" keys = list(tune_params.keys()) param_space_normalized = list(tuple(normalized[keys[i]][v] for i, v in enumerate(params)) for params in param_space) return param_space_normalized def prune_parameter_space(parameter_space, tuning_options, tune_params, normalize_dict): - """ Pruning of the parameter space to remove dimensions that have a constant parameter """ + """Pruning of the parameter space to remove dimensions that have a constant parameter.""" pruned_tune_params_mask = list() removed_tune_params = list() param_names = list(tune_params.keys()) @@ -63,14 +63,22 @@ def 
prune_parameter_space(parameter_space, tuning_options, tune_params, normaliz value = tune_params[key][0] normalized = normalize_dict[param_names[index]][value] removed_tune_params.append(normalized) - if 'verbose' in tuning_options and tuning_options.verbose is True and len(tune_params.keys()) != sum(pruned_tune_params_mask): - print(f"Number of parameters (dimensions): {len(tune_params.keys())}, after pruning: {sum(pruned_tune_params_mask)}") - parameter_space = list(tuple(itertools.compress(param_config, pruned_tune_params_mask)) for param_config in parameter_space) + if ( + "verbose" in tuning_options + and tuning_options.verbose is True + and len(tune_params.keys()) != sum(pruned_tune_params_mask) + ): + print( + f"Number of parameters (dimensions): {len(tune_params.keys())}, after pruning: {sum(pruned_tune_params_mask)}" + ) + parameter_space = list( + tuple(itertools.compress(param_config, pruned_tune_params_mask)) for param_config in parameter_space + ) return parameter_space, removed_tune_params def tune(searchspace: Searchspace, runner, tuning_options): - """ Find the best performing kernel configuration in the parameter space + """Find the best performing kernel configuration in the parameter space. 
:params runner: A runner from kernel_tuner.runners :type runner: kernel_tuner.runner @@ -84,11 +92,12 @@ def tune(searchspace: Searchspace, runner, tuning_options): :rtype: list(dict()), dict() """ - max_fevals = tuning_options.strategy_options.get("max_fevals", 100) prune_parameterspace = tuning_options.strategy_options.get("pruneparameterspace", True) if not bayes_opt_present: - raise ImportError("Error: optional dependencies for Bayesian Optimization not installed, please install scikit-learn and scikit-optimize") + raise ImportError( + "Error: optional dependencies for Bayesian Optimization not installed, please install scikit-learn and scikit-optimize" + ) # epsilon for scaling should be the evenly spaced distance between the largest set of parameter options in an interval [0,1] tune_params = searchspace.tune_params @@ -106,7 +115,9 @@ def tune(searchspace: Searchspace, runner, tuning_options): if len(parameter_space) < 1: raise ValueError("Empty parameterspace after restrictionscheck. Restrictionscheck is possibly too strict.") if len(parameter_space) == 1: - raise ValueError(f"Only one configuration after restrictionscheck. Restrictionscheck is possibly too strict. Configuration: {parameter_space[0]}") + raise ValueError( + f"Only one configuration after restrictionscheck. Restrictionscheck is possibly too strict. 
Configuration: {parameter_space[0]}" + ) # normalize search space to [0,1] normalize_dict, denormalize_dict = generate_normalized_param_dicts(tune_params, eps) @@ -114,20 +125,26 @@ def tune(searchspace: Searchspace, runner, tuning_options): # prune the parameter space to remove dimensions that have a constant parameter if prune_parameterspace: - parameter_space, removed_tune_params = prune_parameter_space(parameter_space, tuning_options, tune_params, normalize_dict) + parameter_space, removed_tune_params = prune_parameter_space( + parameter_space, tuning_options, tune_params, normalize_dict + ) else: parameter_space = list(parameter_space) removed_tune_params = [None] * len(tune_params.keys()) # initialize and optimize try: - bo = BayesianOptimization(parameter_space, removed_tune_params, tuning_options, normalize_dict, denormalize_dict, cost_func) + bo = BayesianOptimization( + parameter_space, removed_tune_params, tuning_options, normalize_dict, denormalize_dict, cost_func + ) except util.StopCriterionReached as e: - print(f"Stop criterion reached during initialization, was popsize (default 20) greater than max_fevals or the alotted time?") + print( + "Stop criterion reached during initialization, was popsize (default 20) greater than max_fevals or the allotted time?"
+ ) raise e try: if max_fevals - bo.fevals <= 0: - raise ValueError(f"No function evaluations left for optimization after sampling") + raise ValueError("No function evaluations left for optimization after sampling") bo.optimize(max_fevals) except util.StopCriterionReached as e: if tuning_options.verbose: @@ -135,24 +152,43 @@ def tune(searchspace: Searchspace, runner, tuning_options): return cost_func.results -# _options dict is used for generating documentation, but is not used to check for unsupported strategy_options in bayes_opt -_options = dict(covariancekernel=('The Covariance kernel to use, choose any from "constantrbf", "rbf", "matern32", "matern52"', "matern32"), - covariancelengthscale=("The covariance length scale", 1.5), - method=("The Bayesian Optimization method to use, choose any from " + ", ".join(supported_methods), "multi-advanced"), - samplingmethod=("Method used for initial sampling the parameter space, either random or lhs", "lhs"), - popsize=("Number of initial samples", 20)) - -class BayesianOptimization(): - def __init__(self, searchspace: list, removed_tune_params: list, tuning_options: dict, normalize_dict: dict, denormalize_dict: dict, - cost_func: CostFunc, opt_direction='min'): +# _options dict is used for generating documentation, but is not used to check for unsupported strategy_options in bayes_opt +_options = dict( + covariancekernel=( + 'The Covariance kernel to use, choose any from "constantrbf", "rbf", "matern32", "matern52"', + "matern32", + ), + covariancelengthscale=("The covariance length scale", 1.5), + method=( + "The Bayesian Optimization method to use, choose any from " + ", ".join(supported_methods), + "multi-advanced", + ), + samplingmethod=( + "Method used for initial sampling the parameter space, either random or Latin Hypercube Sampling (LHS)", + "lhs", + ), + popsize=("Number of initial samples", 20), +) + + +class BayesianOptimization: + def __init__( + self, + searchspace: list, + removed_tune_params: list, + 
tuning_options: dict, + normalize_dict: dict, + denormalize_dict: dict, + cost_func: CostFunc, + opt_direction="min", + ): time_start = time.perf_counter_ns() # supported hyperparameter values self.supported_cov_kernels = ["constantrbf", "rbf", "matern32", "matern52"] self.supported_methods = supported_methods self.supported_sampling_methods = ["random", "lhs"] - self.supported_sampling_criterion = ["correlation", "ratio", "maximin", None] def get_hyperparam(name: str, default, supported_values=list()): value = tuning_options.strategy_options.get(name, default) @@ -166,23 +202,24 @@ def get_hyperparam(name: str, default, supported_values=list()): acquisition_function = get_hyperparam("method", "multi-advanced", self.supported_methods) acq = acquisition_function acq_params = get_hyperparam("methodparams", {}) - multi_af_names = get_hyperparam("multi_af_names", ['ei', 'poi', 'lcb']) - self.multi_afs_discount_factor = get_hyperparam("multi_af_discount_factor", 0.65 if acq == 'multi' else 0.95) - self.multi_afs_required_improvement_factor = get_hyperparam("multi_afs_required_improvement_factor", 0.15 if acq == 'multi-advanced-precise' else 0.1) + multi_af_names = get_hyperparam("multi_af_names", ["ei", "poi", "lcb"]) + self.multi_afs_discount_factor = get_hyperparam("multi_af_discount_factor", 0.65 if acq == "multi" else 0.95) + self.multi_afs_required_improvement_factor = get_hyperparam( + "multi_afs_required_improvement_factor", 0.15 if acq == "multi-advanced-precise" else 0.1 + ) self.num_initial_samples = get_hyperparam("popsize", 20) if self.num_initial_samples < 0: raise ValueError(f"Number of initial samples (popsize) must be >= 0 (given: {self.num_initial_samples})") self.sampling_method = get_hyperparam("samplingmethod", "lhs", self.supported_sampling_methods) - self.sampling_crit = get_hyperparam("samplingcriterion", 'maximin', self.supported_sampling_criterion) - self.sampling_iter = get_hyperparam("samplingiterations", 1000) + # note: more parameters are 
available for LHS if required: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.qmc.LatinHypercube.html # set acquisition function hyperparameter defaults where missing - if 'explorationfactor' not in acq_params: - acq_params['explorationfactor'] = 'CV' - if 'zeta' not in acq_params: - acq_params['zeta'] = 1 - if 'skip_duplicate_after' not in acq_params: - acq_params['skip_duplicate_after'] = 5 + if "explorationfactor" not in acq_params: + acq_params["explorationfactor"] = "CV" + if "zeta" not in acq_params: + acq_params["zeta"] = 1 + if "skip_duplicate_after" not in acq_params: + acq_params["skip_duplicate_after"] = 5 # set arguments self.tuning_options = tuning_options @@ -197,10 +234,10 @@ def get_hyperparam(name: str, default, supported_values=list()): # set optimization constants self.invalid_value = 1e20 self.opt_direction = opt_direction - if opt_direction == 'min': + if opt_direction == "min": self.worst_value = np.PINF self.argopt = np.argmin - elif opt_direction == 'max': + elif opt_direction == "max": self.worst_value = np.NINF self.argopt = np.argmax else: @@ -243,7 +280,10 @@ def get_hyperparam(name: str, default, supported_values=list()): time_taken_setup = round(time_setup - time_start, 3) / 1000 time_taken_initial_sample = round(time_initial_sample - time_setup, 3) / 1000 time_taken_total = round(time_initial_sample - time_start, 3) / 1000 - print(f"Initialization | total time: {time_taken_total} | Setup: {time_taken_setup} | Initial sample: {time_taken_initial_sample}", flush=True) + print( + f"Initialization | total time: {time_taken_total} | Setup: {time_taken_setup} | Initial sample: {time_taken_initial_sample}", + flush=True, + ) @property def searchspace(self): @@ -262,53 +302,57 @@ def current_optimum(self, value: float): self.__current_optimum = value def is_better_than(self, a: float, b: float) -> bool: - """ Determines which one is better depending on optimization direction """ - return a < b if self.opt_direction == 'min' 
else a > b + """Determines which one is better depending on optimization direction.""" + return a < b if self.opt_direction == "min" else a > b def is_not_visited(self, index: int) -> bool: - """ Returns whether a searchspace index has not been visited """ + """Returns whether a searchspace index has not been visited.""" return not self.__visited_searchspace_indices[index] def is_valid(self, observation: float) -> bool: - """ Returns whether an observation is valid """ - return not (observation == None or observation == self.invalid_value or observation == np.NaN) + """Returns whether an observation is valid.""" + return not (observation is None or observation == self.invalid_value or np.isnan(observation)) def get_af_by_name(self, name: str): - """ Get the basic acquisition functions by their name """ - basic_af_names = ['ei', 'poi', 'lcb'] - if name == 'ei': + """Get the basic acquisition functions by their name.""" + basic_af_names = ["ei", "poi", "lcb"] + if name == "ei": return self.af_expected_improvement - elif name == 'poi': + elif name == "poi": return self.af_probability_of_improvement - elif name == 'lcb': + elif name == "lcb": return self.af_lower_confidence_bound raise ValueError(f"{name} not in {basic_af_names}") def set_acquisition_function(self, acquisition_function: str): - """ Set the acquisition function """ - if acquisition_function == 'poi': + """Set the acquisition function.""" + if acquisition_function == "poi": self.__af = self.af_probability_of_improvement - elif acquisition_function == 'ei': + elif acquisition_function == "ei": self.__af = self.af_expected_improvement - elif acquisition_function == 'lcb': + elif acquisition_function == "lcb": self.__af = self.af_lower_confidence_bound - elif acquisition_function == 'lcb-srinivas': + elif acquisition_function == "lcb-srinivas": self.__af = self.af_lower_confidence_bound_srinivas - elif acquisition_function == 'random': + elif acquisition_function == "random": self.__af = self.af_random -
elif acquisition_function == 'multi': + elif acquisition_function == "multi": self.optimize = self.__optimize_multi - elif acquisition_function == 'multi-advanced': + elif acquisition_function == "multi-advanced": self.optimize = self.__optimize_multi_advanced - elif acquisition_function == 'multi-fast': + elif acquisition_function == "multi-fast": self.optimize = self.__optimize_multi_fast else: - raise ValueError("Acquisition function must be one of {}, is {}".format(self.supported_methods, acquisition_function)) + raise ValueError( + "Acquisition function must be one of {}, is {}".format(self.supported_methods, acquisition_function) + ) def set_surrogate_model(self, cov_kernel_name: str, cov_kernel_lengthscale: float): - """ Set the surrogate model with a covariance function and lengthscale """ + """Set the surrogate model with a covariance function and lengthscale.""" if cov_kernel_name == "constantrbf": - kernel = ConstantKernel(1.0, constant_value_bounds="fixed") * RBF(cov_kernel_lengthscale, length_scale_bounds="fixed") + kernel = ConstantKernel(1.0, constant_value_bounds="fixed") * RBF( + cov_kernel_lengthscale, length_scale_bounds="fixed" + ) elif cov_kernel_name == "rbf": kernel = RBF(length_scale=cov_kernel_lengthscale, length_scale_bounds="fixed") elif cov_kernel_name == "matern32": @@ -316,11 +360,15 @@ def set_surrogate_model(self, cov_kernel_name: str, cov_kernel_lengthscale: floa elif cov_kernel_name == "matern52": kernel = Matern(length_scale=cov_kernel_lengthscale, nu=2.5, length_scale_bounds="fixed") else: - raise ValueError("Acquisition function must be one of {}, is {}".format(self.supported_cov_kernels, cov_kernel_name)) - self.__model = GaussianProcessRegressor(kernel=kernel, alpha=1e-10, normalize_y=True) # maybe change alpha to a higher value such as 1e-5? 
+ raise ValueError( + "Acquisition function must be one of {}, is {}".format(self.supported_cov_kernels, cov_kernel_name) + ) + self.__model = GaussianProcessRegressor( + kernel=kernel, alpha=1e-10, normalize_y=True + ) # maybe change alpha to a higher value such as 1e-5? def valid_params_observations(self) -> Tuple[list, list]: - """ Returns a list of valid observations and their parameter configurations """ + """Returns a list of valid observations and their parameter configurations.""" # if you do this every iteration, better keep it as cache and update in update_after_evaluation params = list() observations = list() @@ -331,30 +379,39 @@ def valid_params_observations(self) -> Tuple[list, list]: return params, observations def unvisited(self) -> list: - """ Returns a list of unvisited parameter configurations - attention: cached version exists! """ - params = list(self.searchspace[index] for index, visited in enumerate(self.__visited_searchspace_indices) if visited is False) + """Returns a list of unvisited parameter configurations - attention: cached version exists!""" + params = list( + self.searchspace[index] + for index, visited in enumerate(self.__visited_searchspace_indices) + if visited is False + ) return params def find_param_config_index(self, param_config: tuple) -> int: - """ Find a parameter config index in the search space if it exists """ + """Find a parameter config index in the search space if it exists.""" return self.searchspace.index(param_config) def find_param_config_unvisited_index(self, param_config: tuple) -> int: - """ Find a parameter config index in the unvisited cache if it exists """ + """Find a parameter config index in the unvisited cache if it exists.""" return self.unvisited_cache.index(param_config) def normalize_param_config(self, param_config: tuple) -> tuple: - """ Normalizes a parameter configuration """ - normalized = tuple(self.normalized_dict[self.param_names[index]][param_value] for index, param_value in 
enumerate(param_config)) + """Normalizes a parameter configuration.""" + normalized = tuple( + self.normalized_dict[self.param_names[index]][param_value] for index, param_value in enumerate(param_config) + ) return normalized def denormalize_param_config(self, param_config: tuple) -> tuple: - """ Denormalizes a parameter configuration """ - denormalized = tuple(self.denormalized_dict[self.param_names[index]][param_value] for index, param_value in enumerate(param_config)) + """Denormalizes a parameter configuration.""" + denormalized = tuple( + self.denormalized_dict[self.param_names[index]][param_value] + for index, param_value in enumerate(param_config) + ) return denormalized def unprune_param_config(self, param_config: tuple) -> tuple: - """ In case of pruned dimensions, adds the removed dimensions back in the param config """ + """In case of pruned dimensions, adds the removed dimensions back in the param config.""" unpruned = list() pruned_count = 0 for removed in self.removed_tune_params: @@ -366,7 +423,7 @@ def unprune_param_config(self, param_config: tuple) -> tuple: return tuple(unpruned) def update_after_evaluation(self, observation: float, index: int, param_config: tuple): - """ Adjust the visited and valid index records accordingly """ + """Adjust the visited and valid index records accordingly.""" validity = self.is_valid(observation) self.__visited_num += 1 self.__observations[index] = observation @@ -381,22 +438,22 @@ def update_after_evaluation(self, observation: float, index: int, param_config: self.current_optimum = observation def predict(self, x) -> Tuple[float, float]: - """ Returns a mean and standard deviation predicted by the surrogate model for the parameter configuration """ + """Returns a mean and standard deviation predicted by the surrogate model for the parameter configuration.""" return self.__model.predict([x], return_std=True) def predict_list(self, lst: list) -> Tuple[list, list, list]: - """ Returns a list of means and standard 
deviations predicted by the surrogate model for the parameter configurations, and separate lists of means and standard deviations """ + """Returns a list of means and standard deviations predicted by the surrogate model for the parameter configurations, and separate lists of means and standard deviations.""" with warnings.catch_warnings(): warnings.simplefilter("ignore") mu, std = self.__model.predict(lst, return_std=True) return list(zip(mu, std)), mu, std def fit_observations_to_model(self): - """ Update the model based on the current list of observations """ + """Update the model based on the current list of observations.""" self.__model.fit(self.__valid_params, self.__valid_observations) def evaluate_objective_function(self, param_config: tuple) -> float: - """ Evaluates the objective function """ + """Evaluates the objective function.""" param_config = self.unprune_param_config(param_config) denormalized_param_config = self.denormalize_param_config(param_config) if not util.config_valid(denormalized_param_config, self.tuning_options, self.max_threads): @@ -406,51 +463,62 @@ def evaluate_objective_function(self, param_config: tuple) -> float: return val def dimensions(self) -> list: - """ List of parameter values per parameter """ + """List of parameter values per parameter.""" return self.tune_params.values() def draw_random_sample(self) -> Tuple[list, int]: - """ Draw a random sample from the unvisited parameter configurations """ + """Draw a random sample from the unvisited parameter configurations.""" if len(self.unvisited_cache) < 1: raise ValueError("Searchspace exhausted during random sample draw as no valid configurations were found") - index = randint(0, len(self.unvisited_cache) - 1) # NOSONAR + index = randint(0, len(self.unvisited_cache) - 1) # NOSONAR param_config = self.unvisited_cache[index] actual_index = self.find_param_config_index(param_config) return param_config, actual_index def draw_latin_hypercube_samples(self, num_samples: int) -> list: 
- """ Draws an LHS-distributed sample from the search space """ + """Draws an LHS-distributed sample from the search space.""" + # setup, removes params with single value because they are not in the normalized searchspace if self.searchspace_size < num_samples: raise ValueError("Can't sample more than the size of the search space") - if self.sampling_crit is None: - lhs = Lhs(lhs_type="centered", criterion=None) - else: - lhs = Lhs(lhs_type="classic", criterion=self.sampling_crit, iterations=self.sampling_iter) - param_configs = lhs.generate(self.dimensions(), num_samples) + values_per_parameter = list(param for param in self.dimensions() if len(param) > 1) + num_dimensions = len(values_per_parameter) + + # draw Latin Hypercube samples + sampler = LatinHypercube(d=num_dimensions) + lower_bounds = [0 for _ in range(num_dimensions)] + upper_bounds = [len(param) for param in values_per_parameter] + samples = sampler.integers(l_bounds=lower_bounds, u_bounds=upper_bounds, n=num_samples) + param_configs = list(tuple(values_per_parameter[p_i][v_i] for p_i, v_i in enumerate(s)) for s in samples) + + # only return valid samples indices = list() normalized_param_configs = list() - for i in range(len(param_configs) - 1): + for param_config in param_configs: + normalized_param_config = self.normalize_param_config(param_config) try: - param_config = self.normalize_param_config(param_configs[i]) - index = self.find_param_config_index(param_config) + index = self.find_param_config_index(normalized_param_config) indices.append(index) - normalized_param_configs.append(param_config) + normalized_param_configs.append(normalized_param_config) except ValueError: - """ Due to search space restrictions, the search space may not be an exact cartesian product of the tunable parameter values. - It is thus possible for LHS to generate a parameter combination that is not in the actual searchspace, which must be skipped. 
""" + """With search space restrictions, the search space may not be a cartesian product of parameter values. + It is thus possible for LHS to generate a parameter combination that is not in the actual searchspace. + These configurations are skipped and replaced with a randomly drawn configuration. + """ continue return list(zip(normalized_param_configs, indices)) def initial_sample(self): - """ Draws an initial sample using random sampling """ + """Draws an initial sample using random sampling.""" if self.num_initial_samples <= 0: raise ValueError("At least one initial sample is required") - if self.sampling_method == 'lhs': + if self.sampling_method == "lhs": samples = self.draw_latin_hypercube_samples(self.num_initial_samples) - elif self.sampling_method == 'random': + elif self.sampling_method == "random": samples = list() else: - raise ValueError("Sampling method must be one of {}, is {}".format(self.supported_sampling_methods, self.sampling_method)) + raise ValueError( + "Sampling method must be one of {}, is {}".format(self.supported_sampling_methods, self.sampling_method) + ) # collect the samples collected_samples = 0 for params, index in samples: @@ -476,10 +544,10 @@ def initial_sample(self): self.cv_norm_maximum = self.initial_std def contextual_variance(self, std: list): - """ Contextual improvement to decide explore / exploit, based on CI proposed by (Jasrasaria, 2018) """ - if not self.af_params['explorationfactor'] == 'CV': + """Contextual improvement to decide explore / exploit, based on CI proposed by (Jasrasaria, 2018).""" + if not self.af_params["explorationfactor"] == "CV": return None - if self.opt_direction == 'min': + if self.opt_direction == "min": if self.current_optimum == self.worst_value: return 0.01 if self.current_optimum <= 0: @@ -494,7 +562,7 @@ def contextual_variance(self, std: list): return np.mean(std) / self.current_optimum def __optimize(self, max_fevals): - """ Find the next best candidate configuration(s), evaluate those and 
update the model accordingly """ + """Find the next best candidate configuration(s), evaluate those and update the model accordingly.""" while self.fevals < max_fevals: if self.__visited_num >= self.searchspace_size: raise ValueError(self.error_message_searchspace_fully_observed) @@ -510,13 +578,17 @@ def __optimize(self, max_fevals): self.fit_observations_to_model() def __optimize_multi(self, max_fevals): - """ Optimize with a portfolio of multiple acquisition functions. Predictions are always only taken once. Skips AFs if they suggest X/max_evals duplicates in a row, prefers AF with best discounted average. """ - if self.opt_direction != 'min': + """Optimize with a portfolio of multiple acquisition functions. + + Predictions are always only taken once. + Skips AFs if they suggest X/max_evals duplicates in a row, prefers AF with best discounted average. + """ + if self.opt_direction != "min": raise ValueError(f"Optimization direction must be minimization ('min'), is {self.opt_direction}") # calculate how many times an AF can suggest a duplicate candidate before the AF is skipped # skip_duplicates_fraction = self.af_params['skip_duplicates_fraction'] # skip_if_duplicate_n_times = int(min(max(round(skip_duplicates_fraction * max_fevals), 3), max_fevals)) - skip_if_duplicate_n_times = self.af_params['skip_duplicate_after'] + skip_if_duplicate_n_times = self.af_params["skip_duplicate_after"] discount_factor = self.multi_afs_discount_factor # setup the registration of duplicates and runtimes duplicate_count_template = [0 for _ in range(skip_if_duplicate_n_times)] @@ -574,10 +646,12 @@ def __optimize_multi(self, max_fevals): self.update_after_evaluation(observation, candidate_index, candidate_params) if observation != self.invalid_value: # we use the registered observations for maximization of the discounted reward - reg_observation = observation if self.opt_direction == 'min' else -1 * observation + reg_observation = observation if self.opt_direction == "min" else -1 * 
observation af_observations[actual_candidate_af_indices[index]].append(reg_observation) else: - reg_invalid_observation = initial_sample_mean if self.opt_direction == 'min' else -1 * initial_sample_mean + reg_invalid_observation = ( + initial_sample_mean if self.opt_direction == "min" else -1 * initial_sample_mean + ) af_observations[actual_candidate_af_indices[index]].append(reg_invalid_observation) for index, af_index in enumerate(duplicate_candidate_af_indices): original_observation = af_observations[duplicate_candidate_original_af_indices[index]][-1] @@ -586,7 +660,10 @@ def __optimize_multi(self, max_fevals): time_eval = time.perf_counter_ns() # assert that all observation lists of non-skipped acquisition functions are of the same length non_skipped_af_indices = list(af_index for af_index, _ in enumerate(aqfs) if af_index not in skip_af_index) - assert all(len(af_observations[non_skipped_af_indices[0]]) == len(af_observations[af_index]) for af_index in non_skipped_af_indices) + assert all( + len(af_observations[non_skipped_af_indices[0]]) == len(af_observations[af_index]) + for af_index in non_skipped_af_indices + ) # find the AFs eligible for being skipped candidates_for_skip = list() for af_index, count in enumerate(duplicate_candidate_af_count): @@ -595,8 +672,12 @@ def __optimize_multi(self, max_fevals): # do not skip the AF with the lowest runtime if len(candidates_for_skip) > 1: candidates_for_skip_discounted = list( - sum(list(obs * discount_factor**(len(observations) - 1 - i) for i, obs in enumerate(observations))) - for af_index, observations in enumerate(af_observations) if af_index in candidates_for_skip) + sum( + list(obs * discount_factor ** (len(observations) - 1 - i) for i, obs in enumerate(observations)) + ) + for af_index, observations in enumerate(af_observations) + if af_index in candidates_for_skip + ) af_not_to_skip = candidates_for_skip[np.argmin(candidates_for_skip_discounted)] for af_index in candidates_for_skip: if af_index ==
af_not_to_skip: @@ -617,18 +698,19 @@ def __optimize_multi(self, max_fevals): time_taken_total = round(time_af_selection - time_start, 3) / 1000 print( f"({self.fevals}/{max_fevals}) Total time: {time_taken_total} | Predictions: {time_taken_predictions} | AFs: {time_taken_afs} | Eval: {time_taken_eval} | AF selection: {time_taken_af_selection}", - flush=True) + flush=True, + ) def __optimize_multi_advanced(self, max_fevals, increase_precision=False): - """ Optimize with a portfolio of multiple acquisition functions. Predictions are only taken once, unless increase_precision is true. Skips AFs if they are consistently worse than the mean of discounted observations, promotes AFs if they are consistently better than this mean. """ - if self.opt_direction != 'min': + """Optimize with a portfolio of multiple acquisition functions. + + Predictions are only taken once, unless increase_precision is true. + Skips AFs if they are consistently worse than the mean of discounted observations, promotes AFs if they are consistently better than this mean. + """ + if self.opt_direction != "min": raise ValueError(f"Optimization direction must be minimization ('min'), is {self.opt_direction}") aqfs = self.multi_afs discount_factor = self.multi_afs_discount_factor required_improvement_factor = self.multi_afs_required_improvement_factor required_improvement_worse = 1 + required_improvement_factor required_improvement_better = 1 - required_improvement_factor - min_required_count = self.af_params['skip_duplicate_after'] + min_required_count = self.af_params["skip_duplicate_after"] skip_af_index = list() single_af = len(aqfs) <= len(skip_af_index) + 1 af_observations = [list(), list(), list()] @@ -653,7 +735,7 @@ def __optimize_multi_advanced(self, max_fevals, increase_precision=False): hyperparam = self.contextual_variance(std) list_of_acquisition_values = af(predictions, hyperparam) best_af = self.argopt(list_of_acquisition_values) - del predictions[best_af] # to avoid going out of bounds + del
predictions[best_af] # to avoid going out of bounds candidate_params = self.unvisited_cache[best_af] candidate_index = self.find_param_config_index(candidate_params) observation = self.evaluate_objective_function(candidate_params) @@ -662,19 +744,25 @@ def __optimize_multi_advanced(self, max_fevals, increase_precision=False): self.fit_observations_to_model() # we use the registered observations for maximization of the discounted reward if observation != self.invalid_value: - reg_observation = observation if self.opt_direction == 'min' else -1 * observation + reg_observation = observation if self.opt_direction == "min" else -1 * observation af_observations[af_index].append(reg_observation) else: # if the observation is invalid, use the median of all valid observations to avoid skewing the discounted observations - reg_invalid_observation = observations_median if self.opt_direction == 'min' else -1 * observations_median + reg_invalid_observation = ( + observations_median if self.opt_direction == "min" else -1 * observations_median + ) af_observations[af_index].append(reg_invalid_observation) if increase_precision is False: self.fit_observations_to_model() # calculate the mean of discounted observations over the remaining acquisition functions discounted_obs = list( - sum(list(obs * discount_factor**(len(observations) - 1 - i) for i, obs in enumerate(observations))) for observations in af_observations) - disc_obs_mean = np.mean(list(discounted_obs[af_index] for af_index, _ in enumerate(aqfs) if af_index not in skip_af_index)) + sum(list(obs * discount_factor ** (len(observations) - 1 - i) for i, obs in enumerate(observations))) + for observations in af_observations + ) + disc_obs_mean = np.mean( + list(discounted_obs[af_index] for af_index, _ in enumerate(aqfs) if af_index not in skip_af_index) + ) # register which AFs perform more than 10% better than average and which more than 10% worse than average for af_index, discounted_observation in enumerate(discounted_obs): 
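[Reviewer note: the discounted-observation bookkeeping used to rank acquisition functions reduces to a small helper. A sketch with made-up observation lists, not the class method itself; more recent observations get weight 1 and older ones decay geometrically.]

```python
import numpy as np

def discounted_sum(observations, discount_factor=0.95):
    # most recent observation has weight 1; older ones decay geometrically
    return sum(obs * discount_factor ** (len(observations) - 1 - i)
               for i, obs in enumerate(observations))

# hypothetical registered observations for three acquisition functions
af_observations = [[10.0, 9.0, 8.0], [10.0, 10.0, 10.0], [12.0, 7.0, 9.0]]
discounted = [discounted_sum(obs) for obs in af_observations]
best_af = int(np.argmin(discounted))  # lower is better for minimization
```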
@@ -684,12 +772,17 @@ def __optimize_multi_advanced(self, max_fevals, increase_precision=False): af_performs_better_count[af_index] += 1 # find the worst AF; ties are broken by the discounted observations - worst_count = max(list(count for af_index, count in enumerate(af_performs_worse_count) if af_index not in skip_af_index)) + worst_count = max( + list(count for af_index, count in enumerate(af_performs_worse_count) if af_index not in skip_af_index) + ) af_index_worst = -1 if worst_count >= min_required_count: for af_index, count in enumerate(af_performs_worse_count): - if af_index not in skip_af_index and count == worst_count and (af_index_worst == -1 - or discounted_obs[af_index] > discounted_obs[af_index_worst]): + if ( + af_index not in skip_af_index + and count == worst_count + and (af_index_worst == -1 or discounted_obs[af_index] > discounted_obs[af_index_worst]) + ): af_index_worst = af_index # skip the worst AF @@ -706,12 +799,21 @@ def __optimize_multi_advanced(self, max_fevals, increase_precision=False): self.__af = aqfs[af_indices_left[0]] else: # find the best AF; ties are broken by the discounted observations - best_count = max(list(count for af_index, count in enumerate(af_performs_better_count) if af_index not in skip_af_index)) + best_count = max( + list( + count + for af_index, count in enumerate(af_performs_better_count) + if af_index not in skip_af_index + ) + ) af_index_best = -1 if best_count >= min_required_count: for af_index, count in enumerate(af_performs_better_count): - if af_index not in skip_af_index and count == best_count and (af_index_best == -1 - or discounted_obs[af_index] < discounted_obs[af_index_best]): + if ( + af_index not in skip_af_index + and count == best_count + and (af_index_best == -1 or discounted_obs[af_index] < discounted_obs[af_index_best]) + ): af_index_best = af_index # make the best AF the single remaining AF if af_index_best > -1: @@ -719,7 +821,7 @@ def __optimize_multi_advanced(self, max_fevals, increase_precision=False):
self.__af = aqfs[af_index_best] def __optimize_multi_fast(self, max_fevals): - """ Optimize with a portfolio of multiple acquisition functions. Predictions are only taken once. """ + """Optimize with a portfolio of multiple acquisition functions. Predictions are only taken once.""" while self.fevals < max_fevals: aqfs = self.multi_afs # if we take the prediction only once, we want to go from most exploiting to most exploring, because the more exploiting an AF is, the more it relies on non-stale information from the model @@ -732,7 +834,7 @@ def __optimize_multi_fast(self, max_fevals): break list_of_acquisition_values = af(predictions, hyperparam) best_af = self.argopt(list_of_acquisition_values) - del predictions[best_af] # to avoid going out of bounds + del predictions[best_af] # to avoid going out of bounds candidate_params = self.unvisited_cache[best_af] candidate_index = self.find_param_config_index(candidate_params) observation = self.evaluate_objective_function(candidate_params) @@ -740,23 +842,22 @@ def __optimize_multi_fast(self, max_fevals): self.fit_observations_to_model() def af_random(self, predictions=None, hyperparam=None) -> list: - """ Acquisition function returning a randomly shuffled list for comparison """ + """Acquisition function returning a randomly shuffled list for comparison.""" list_random = list(range(len(self.unvisited_cache))) shuffle(list_random) return list_random def af_probability_of_improvement(self, predictions=None, hyperparam=None) -> list: - """ Acquisition function Probability of Improvement (PI) """ - + """Acquisition function Probability of Improvement (PI).""" # prefetch required data if predictions is None: predictions, _, _ = self.predict_list(self.unvisited_cache) if hyperparam is None: - hyperparam = self.af_params['explorationfactor'] + hyperparam = self.af_params["explorationfactor"] fplus = self.current_optimum - hyperparam # precompute difference of improvement - list_diff_improvement = list(-((fplus - x_mu) / (x_std +
1E-9)) for (x_mu, x_std) in predictions) + list_diff_improvement = list(-((fplus - x_mu) / (x_std + 1e-9)) for (x_mu, x_std) in predictions) # compute probability of improvement with CDF in bulk list_prob_improvement = norm.cdf(list_diff_improvement) @@ -764,17 +865,16 @@ def af_probability_of_improvement(self, predictions=None, hyperparam=None) -> li return list_prob_improvement def af_expected_improvement(self, predictions=None, hyperparam=None) -> list: - """ Acquisition function Expected Improvement (EI) """ - + """Acquisition function Expected Improvement (EI).""" # prefetch required data if predictions is None: predictions, _, _ = self.predict_list(self.unvisited_cache) if hyperparam is None: - hyperparam = self.af_params['explorationfactor'] + hyperparam = self.af_params["explorationfactor"] fplus = self.current_optimum - hyperparam # precompute difference of improvement, CDF and PDF in bulk - list_diff_improvement = list((fplus - x_mu) / (x_std + 1E-9) for (x_mu, x_std) in predictions) + list_diff_improvement = list((fplus - x_mu) / (x_std + 1e-9) for (x_mu, x_std) in predictions) list_cdf = norm.cdf(list_diff_improvement) list_pdf = norm.pdf(list_diff_improvement) @@ -789,13 +889,12 @@ def exp_improvement(index) -> float: return list_exp_improvement def af_lower_confidence_bound(self, predictions=None, hyperparam=None) -> list: - """ Acquisition function Lower Confidence Bound (LCB) """ - + """Acquisition function Lower Confidence Bound (LCB).""" # prefetch required data if predictions is None: predictions, _, _ = self.predict_list(self.unvisited_cache) if hyperparam is None: - hyperparam = self.af_params['explorationfactor'] + hyperparam = self.af_params["explorationfactor"] beta = hyperparam # compute LCB in bulk @@ -803,30 +902,30 @@ def af_lower_confidence_bound(self, predictions=None, hyperparam=None) -> list: return list_lower_confidence_bound def af_lower_confidence_bound_srinivas(self, predictions=None, hyperparam=None) -> list: - """ Acquisition 
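[Reviewer note: the PI and EI acquisition functions above reduce to a few bulk `scipy.stats.norm` calls. A standalone sketch for minimization with made-up predictions; the class version negates the PI argument, presumably to match its `argopt` convention, while this sketch uses the textbook sign so that higher is better.]

```python
import numpy as np
from scipy.stats import norm

# hypothetical posterior predictions (mean, std) for unvisited configurations
predictions = [(10.0, 1.0), (9.0, 2.0), (11.0, 0.5)]
current_optimum = 9.5
exploration_factor = 0.05  # stand-in for af_params["explorationfactor"]
fplus = current_optimum - exploration_factor

# Probability of Improvement: P(f(x) < fplus) under the GP posterior
z = [(fplus - mu) / (std + 1e-9) for mu, std in predictions]
pi = norm.cdf(z)

# Expected Improvement: (fplus - mu) * CDF(z) + std * PDF(z)
ei = [(fplus - mu) * norm.cdf(zi) + std * norm.pdf(zi)
      for (mu, std), zi in zip(predictions, z)]
```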
function Lower Confidence Bound (UCB-S) after Srinivas, 2010 / Brochu, 2010 """ - + """Acquisition function Lower Confidence Bound (UCB-S) after Srinivas, 2010 / Brochu, 2010.""" # prefetch required data if predictions is None: predictions, _, _ = self.predict_list(self.unvisited_cache) if hyperparam is None: - hyperparam = self.af_params['explorationfactor'] + hyperparam = self.af_params["explorationfactor"] # precompute beta parameter - zeta = self.af_params['zeta'] + zeta = self.af_params["zeta"] t = self.fevals d = self.num_dimensions delta = hyperparam - beta = np.sqrt(zeta * (2 * np.log((t**(d / 2. + 2)) * (np.pi**2) / (3. * delta)))) + beta = np.sqrt(zeta * (2 * np.log((t ** (d / 2.0 + 2)) * (np.pi**2) / (3.0 * delta)))) # compute LCB in bulk list_lower_confidence_bound = list(x_mu - beta * x_std for (x_mu, x_std) in predictions) return list_lower_confidence_bound def visualize_after_opt(self): - """ Visualize the model after the optimization """ + """Visualize the model after the optimization.""" print(self.__model.kernel_.get_params()) print(self.__model.log_marginal_likelihood()) import matplotlib.pyplot as plt + _, mu, std = self.predict_list(self.searchspace) brute_force_observations = list() for param_config in self.searchspace: @@ -836,7 +935,7 @@ def visualize_after_opt(self): brute_force_observations.append(obs) x_axis = range(len(mu)) plt.fill_between(x_axis, mu - std, mu + std, alpha=0.2, antialiased=True) - plt.plot(x_axis, mu, label="predictions", linestyle=' ', marker='.') - plt.plot(x_axis, brute_force_observations, label="actual", linestyle=' ', marker='.') + plt.plot(x_axis, mu, label="predictions", linestyle=" ", marker=".") + plt.plot(x_axis, brute_force_observations, label="actual", linestyle=" ", marker=".") plt.legend() plt.show() diff --git a/kernel_tuner/strategies/common.py b/kernel_tuner/strategies/common.py index d6cf620a9..034fefd6f 100644 --- a/kernel_tuner/strategies/common.py +++ b/kernel_tuner/strategies/common.py @@ -1,9 +1,9
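[Reviewer note: the β schedule of the Srinivas UCB-S variant above can be checked in isolation. A sketch with assumed defaults (ζ = 1, δ = 0.1); it only illustrates that β grows with the evaluation count t, increasing exploration pressure over time.]

```python
import numpy as np

def srinivas_beta(t, d, delta=0.1, zeta=1.0):
    # beta = sqrt(zeta * 2 * log(t^(d/2 + 2) * pi^2 / (3 * delta)))
    return np.sqrt(zeta * (2 * np.log((t ** (d / 2.0 + 2)) * (np.pi ** 2) / (3.0 * delta))))

# beta for an assumed 3-dimensional search space at increasing evaluation counts
betas = [srinivas_beta(t, d=3) for t in (1, 10, 100)]
```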
@@ import logging import sys -from collections import OrderedDict from time import perf_counter import numpy as np + from kernel_tuner import util from kernel_tuner.searchspace import Searchspace @@ -29,12 +29,12 @@ def get_strategy_docstring(name, strategy_options): - """ Generate docstring for a 'tune' method of a strategy """ + """Generate docstring for a 'tune' method of a strategy.""" return _docstring_template.replace("$NAME$", name).replace("$STRAT_OPT$", make_strategy_options_doc(strategy_options)) def make_strategy_options_doc(strategy_options): - """ Generate documentation for the supported strategy options and their defaults """ + """Generate documentation for the supported strategy options and their defaults.""" doc = "" for opt, val in strategy_options.items(): doc += f" * {opt}: {val[0]}, default {str(val[1])}. \n" @@ -43,12 +43,12 @@ def make_strategy_options_doc(strategy_options): def get_options(strategy_options, options): - """ Get the strategy-specific options or their defaults from user-supplied strategy_options """ + """Get the strategy-specific options or their defaults from user-supplied strategy_options.""" accepted = list(options.keys()) + ["max_fevals", "time_limit"] for key in strategy_options: if key not in accepted: raise ValueError(f"Unrecognized option {key} in strategy_options") - assert isinstance(options, OrderedDict) + assert isinstance(options, dict) return [strategy_options.get(opt, default) for opt, (_, default) in options.items()] @@ -62,7 +62,7 @@ def __init__(self, searchspace: Searchspace, tuning_options, runner, *, scaling= self.results = [] def __call__(self, x, check_restrictions=True): - """ Cost function used by almost all strategies """ + """Cost function used by almost all strategies.""" self.runner.last_strategy_time = 1000 * (perf_counter() - self.runner.last_strategy_start_time) # error value to return for numeric optimizers that need a numerical value @@ -88,7 +88,7 @@ def __call__(self, x, 
check_restrictions=True): # else check if this is a legal (non-restricted) configuration if check_restrictions and self.searchspace.restrictions: - params_dict = OrderedDict(zip(self.searchspace.tune_params.keys(), params)) + params_dict = dict(zip(self.searchspace.tune_params.keys(), params)) legal = util.check_restrictions(self.searchspace.restrictions, params_dict, self.tuning_options.verbose) if not legal: result = params_dict @@ -115,7 +115,7 @@ def __call__(self, x, check_restrictions=True): return return_value def get_bounds_x0_eps(self): - """compute bounds, x0 (the initial guess), and eps""" + """Compute bounds, x0 (the initial guess), and eps.""" values = list(self.searchspace.tune_params.values()) if "x0" in self.tuning_options.strategy_options: @@ -154,7 +154,7 @@ def get_bounds_x0_eps(self): return bounds, x0, eps def get_bounds(self): - """ create a bounds array from the tunable parameters """ + """Create a bounds array from the tunable parameters.""" bounds = [] for values in self.searchspace.tune_params.values(): sorted_values = np.sort(values) @@ -163,7 +163,7 @@ def get_bounds(self): def setup_method_arguments(method, bounds): - """ prepare method specific arguments """ + """Prepare method specific arguments.""" kwargs = {} # pass bounds to methods that support it if method in ["L-BFGS-B", "TNC", "SLSQP"]: @@ -172,7 +172,7 @@ def setup_method_arguments(method, bounds): def setup_method_options(method, tuning_options): - """ prepare method specific options """ + """Prepare method specific options.""" kwargs = {} # Note that not all methods interpret maxiter in the same manner @@ -200,7 +200,7 @@ def setup_method_options(method, tuning_options): def snap_to_nearest_config(x, tune_params): - """helper func that for each param selects the closest actual value""" + """Helper func that for each param selects the closest actual value.""" params = [] for i, k in enumerate(tune_params.keys()): values = np.array(tune_params[k]) @@ -210,7 +210,7 @@ def
snap_to_nearest_config(x, tune_params): def unscale_and_snap_to_nearest(x, tune_params, eps): - """helper func that snaps a scaled variable to the nearest config""" + """Helper func that snaps a scaled variable to the nearest config.""" x_u = [i for i in x] for i, v in enumerate(tune_params.values()): # create an evenly spaced linear space to map [0,1]-interval @@ -232,7 +232,7 @@ def unscale_and_snap_to_nearest(x, tune_params, eps): def scale_from_params(params, tune_params, eps): - """helper func to do the inverse of the 'unscale' function""" + """Helper func to do the inverse of the 'unscale' function.""" x = np.zeros(len(params)) for i, v in enumerate(tune_params.values()): x[i] = 0.5 * eps + v.index(params[i])*eps diff --git a/kernel_tuner/strategies/diff_evo.py b/kernel_tuner/strategies/diff_evo.py index ecb257199..5ad2b9474 100644 --- a/kernel_tuner/strategies/diff_evo.py +++ b/kernel_tuner/strategies/diff_evo.py @@ -1,22 +1,20 @@ -""" The differential evolution strategy that optimizes the search through the parameter space """ -from collections import OrderedDict +"""The differential evolution strategy that optimizes the search through the parameter space.""" +from scipy.optimize import differential_evolution from kernel_tuner import util from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies import common from kernel_tuner.strategies.common import CostFunc -from scipy.optimize import differential_evolution supported_methods = ["best1bin", "best1exp", "rand1exp", "randtobest1exp", "best2exp", "rand2exp", "randtobest1bin", "best2bin", "rand2bin", "rand1bin"] -_options = OrderedDict(method=(f"Creation method for new population, any of {supported_methods}", "best1bin"), +_options = dict(method=(f"Creation method for new population, any of {supported_methods}", "best1bin"), popsize=("Population size", 20), maxiter=("Number of generations", 100)) def tune(searchspace: Searchspace, runner, tuning_options): - results = [] method, popsize, 
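[Reviewer note: `snap_to_nearest_config` is only partially visible in the hunk above. A self-contained sketch consistent with its docstring, using made-up tunable values: for each parameter, pick the allowed value closest to the continuous coordinate.]

```python
import numpy as np

def snap_to_nearest_config(x, tune_params):
    # for each parameter, select the tunable value closest to x[i]
    params = []
    for i, k in enumerate(tune_params.keys()):
        values = np.array(tune_params[k])
        idx = np.abs(values - x[i]).argmin()
        params.append(int(values[idx]))
    return params

tune_params = {"block_size_x": [16, 32, 64, 128], "tile_size": [1, 2, 4]}
snapped = snap_to_nearest_config([50.0, 2.7], tune_params)  # → [64, 2]
```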
maxiter = common.get_options(tuning_options.strategy_options, _options) diff --git a/kernel_tuner/strategies/dual_annealing.py b/kernel_tuner/strategies/dual_annealing.py index ebe095bde..0f44bd849 100644 --- a/kernel_tuner/strategies/dual_annealing.py +++ b/kernel_tuner/strategies/dual_annealing.py @@ -1,17 +1,14 @@ -""" The strategy that uses the dual annealing optimization method """ -from collections import OrderedDict - +"""The strategy that uses the dual annealing optimization method.""" import scipy.optimize + from kernel_tuner import util from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies import common -from kernel_tuner.strategies.common import (CostFunc, - setup_method_arguments, - setup_method_options) +from kernel_tuner.strategies.common import CostFunc, setup_method_arguments, setup_method_options supported_methods = ['COBYLA', 'L-BFGS-B', 'SLSQP', 'CG', 'Powell', 'Nelder-Mead', 'BFGS', 'trust-constr'] -_options = OrderedDict(method=(f"Local optimization method to use, choose any from {supported_methods}", "Powell")) +_options = dict(method=(f"Local optimization method to use, choose any from {supported_methods}", "Powell")) def tune(searchspace: Searchspace, runner, tuning_options): diff --git a/kernel_tuner/strategies/firefly_algorithm.py b/kernel_tuner/strategies/firefly_algorithm.py index 0c053ed9c..dc43aae6f 100644 --- a/kernel_tuner/strategies/firefly_algorithm.py +++ b/kernel_tuner/strategies/firefly_algorithm.py @@ -1,15 +1,15 @@ -""" The strategy that uses the firefly algorithm for optimization""" +"""The strategy that uses the firefly algorithm for optimization.""" import sys -from collections import OrderedDict import numpy as np + from kernel_tuner import util from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies import common -from kernel_tuner.strategies.common import (CostFunc, scale_from_params) +from kernel_tuner.strategies.common import CostFunc, scale_from_params from 
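[Reviewer note: the dual annealing strategy delegates to `scipy.optimize.dual_annealing`. A toy standalone usage on a smooth stand-in cost function; the real strategy instead wraps `CostFunc`, which snaps continuous points to valid configurations before benchmarking.]

```python
import numpy as np
from scipy.optimize import dual_annealing

def cost(x):
    # smooth stand-in for the tuner's cost function, with its minimum at (3, 3)
    return float(np.sum((x - 3.0) ** 2))

result = dual_annealing(cost, bounds=[(-10.0, 10.0), (-10.0, 10.0)], seed=42, maxiter=200)
```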
kernel_tuner.strategies.pso import Particle -_options = OrderedDict(popsize=("Population size", 20), +_options = dict(popsize=("Population size", 20), maxiter=("Maximum number of iterations", 100), B0=("Maximum attractiveness", 1.0), gamma=("Light absorption coefficient", 1.0), @@ -88,20 +88,20 @@ def tune(searchspace: Searchspace, runner, tuning_options): tune.__doc__ = common.get_strategy_docstring("firefly algorithm", _options) class Firefly(Particle): - """Firefly object for use in the Firefly Algorithm""" + """Firefly object for use in the Firefly Algorithm.""" def __init__(self, bounds): - """Create Firefly at random position within bounds""" + """Create Firefly at random position within bounds.""" super().__init__(bounds) self.bounds = bounds self.intensity = 1 / self.score def distance_to(self, other): - """Return Euclidian distance between self and other Firefly""" + """Return Euclidean distance between self and other Firefly.""" return np.linalg.norm(self.position-other.position) def compute_intensity(self, fun): - """Evaluate cost function and compute intensity at this position""" + """Evaluate cost function and compute intensity at this position.""" self.evaluate(fun) if self.score == sys.float_info.max: self.intensity = -sys.float_info.max @@ -109,7 +109,7 @@ def compute_intensity(self, fun): self.intensity = 1 / self.score def move_towards(self, other, beta, alpha): - """Move firefly towards another given beta and alpha values""" + """Move firefly towards another given beta and alpha values.""" self.position += beta * (other.position - self.position) self.position += alpha * (np.random.uniform(-0.5, 0.5, len(self.position))) self.position = np.minimum(self.position, [b[1] for b in self.bounds]) diff --git a/kernel_tuner/strategies/genetic_algorithm.py b/kernel_tuner/strategies/genetic_algorithm.py index 76fd84539..c29c150b5 100644 --- a/kernel_tuner/strategies/genetic_algorithm.py +++ b/kernel_tuner/strategies/genetic_algorithm.py @@ -1,14 +1,14 @@
-""" A simple genetic algorithm for parameter search """ +"""A simple genetic algorithm for parameter search.""" import random -from collections import OrderedDict import numpy as np + from kernel_tuner import util from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies import common from kernel_tuner.strategies.common import CostFunc -_options = OrderedDict( +_options = dict( popsize=("population size", 20), maxiter=("maximum number of generations", 100), method=("crossover method to use, choose any from single_point, two_point, uniform, disruptive_uniform", "uniform"), @@ -77,7 +77,7 @@ def tune(searchspace: Searchspace, runner, tuning_options): def weighted_choice(population, n): - """Randomly select n unique individuals from a weighted population, fitness determines probability of being selected""" + """Randomly select n unique individuals from a weighted population, fitness determines probability of being selected.""" def random_index_betavariate(pop_size): # has a higher probability of returning index of item at the head of the list @@ -86,7 +86,7 @@ def random_index_betavariate(pop_size): return int(random.betavariate(alpha, beta) * pop_size) def random_index_weighted(pop_size): - """use weights to increase probability of selection""" + """Use weights to increase probability of selection.""" weights = [w for _, w in population] # invert because lower is better inverted_weights = [1.0 / w for w in weights] @@ -109,8 +109,7 @@ def random_index_weighted(pop_size): def mutate(dna, mutation_chance, searchspace: Searchspace, cache=True): - """Mutate DNA with 1/mutation_chance chance""" - + """Mutate DNA with 1/mutation_chance chance.""" # this is actually a neighbors problem with Hamming distance, choose randomly from returned searchspace list if int(random.random() * mutation_chance) == 0: if cache: @@ -123,14 +122,14 @@ def mutate(dna, mutation_chance, searchspace: Searchspace, cache=True): def single_point_crossover(dna1, dna2): - 
"""crossover dna1 and dna2 at a random index""" + """Crossover dna1 and dna2 at a random index.""" # check if you can do the crossovers using the neighbor index: check which valid parameter configuration is closest to the crossover, probably best to use "adjacent" as it is least strict? pos = int(random.random() * (len(dna1))) return (dna1[:pos] + dna2[pos:], dna2[:pos] + dna1[pos:]) def two_point_crossover(dna1, dna2): - """crossover dna1 and dna2 at 2 random indices""" + """Crossover dna1 and dna2 at 2 random indices.""" if len(dna1) < 5: start, end = 0, len(dna1) else: @@ -142,7 +141,7 @@ def two_point_crossover(dna1, dna2): def uniform_crossover(dna1, dna2): - """randomly crossover genes between dna1 and dna2""" + """Randomly crossover genes between dna1 and dna2.""" ind = np.random.random(len(dna1)) > 0.5 child1 = [dna1[i] if ind[i] else dna2[i] for i in range(len(ind))] child2 = [dna2[i] if ind[i] else dna1[i] for i in range(len(ind))] @@ -150,7 +149,7 @@ def uniform_crossover(dna1, dna2): def disruptive_uniform_crossover(dna1, dna2): - """disruptive uniform crossover + """Disruptive uniform crossover. 
uniformly crossover genes between dna1 and dna2, with children guaranteed to be different from parents, diff --git a/kernel_tuner/strategies/greedy_ils.py b/kernel_tuner/strategies/greedy_ils.py index 1630c6c17..a4c521746 100644 --- a/kernel_tuner/strategies/greedy_ils.py +++ b/kernel_tuner/strategies/greedy_ils.py @@ -1,6 +1,4 @@ -""" A simple greedy iterative local search algorithm for parameter search """ -from collections import OrderedDict - +"""A simple greedy iterative local search algorithm for parameter search.""" from kernel_tuner import util from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies import common @@ -8,7 +6,7 @@ from kernel_tuner.strategies.genetic_algorithm import mutate from kernel_tuner.strategies.hillclimbers import base_hillclimb -_options = OrderedDict(neighbor=("Method for selecting neighboring nodes, choose from Hamming or adjacent", "Hamming"), +_options = dict(neighbor=("Method for selecting neighboring nodes, choose from Hamming or adjacent", "Hamming"), restart=("controls greedyness, i.e. whether to restart from a position as soon as an improvement is found", True), no_improvement=("number of evaluations to exceed without improvement before restarting", 50), random_walk=("controls greedyness, i.e. 
whether to restart from a position as soon as an improvement is found", 0.3)) diff --git a/kernel_tuner/strategies/greedy_mls.py b/kernel_tuner/strategies/greedy_mls.py index 3da456aa7..1b34da501 100644 --- a/kernel_tuner/strategies/greedy_mls.py +++ b/kernel_tuner/strategies/greedy_mls.py @@ -1,12 +1,10 @@ -""" A greedy multi-start local search algorithm for parameter search """ -from collections import OrderedDict - +"""A greedy multi-start local search algorithm for parameter search.""" from kernel_tuner import util from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies import common from kernel_tuner.strategies.hillclimbers import base_hillclimb -_options = OrderedDict(neighbor=("Method for selecting neighboring nodes, choose from Hamming or adjacent", "Hamming"), +_options = dict(neighbor=("Method for selecting neighboring nodes, choose from Hamming or adjacent", "Hamming"), restart=("controls greedyness, i.e. whether to restart from a position as soon as an improvement is found", True), order=("set a user-specified order to search among dimensions while hillclimbing", None), randomize=("use a random order to search among dimensions while hillclimbing", True)) diff --git a/kernel_tuner/strategies/minimize.py b/kernel_tuner/strategies/minimize.py index 952d18d2c..80c1c6f82 100644 --- a/kernel_tuner/strategies/minimize.py +++ b/kernel_tuner/strategies/minimize.py @@ -1,22 +1,20 @@ -""" The strategy that uses a minimizer method for searching through the parameter space """ -import logging -import sys -from collections import OrderedDict -from time import perf_counter +"""The strategy that uses a minimizer method for searching through the parameter space.""" -import numpy as np import scipy.optimize + from kernel_tuner import util from kernel_tuner.searchspace import Searchspace -from kernel_tuner.strategies.common import (CostFunc, - get_options, - get_strategy_docstring, - setup_method_arguments, - setup_method_options) +from 
kernel_tuner.strategies.common import ( + CostFunc, + get_options, + get_strategy_docstring, + setup_method_arguments, + setup_method_options, +) supported_methods = ["Nelder-Mead", "Powell", "CG", "BFGS", "L-BFGS-B", "TNC", "COBYLA", "SLSQP"] -_options = OrderedDict(method=(f"Local optimization algorithm to use, choose any from {supported_methods}", "L-BFGS-B")) +_options = dict(method=(f"Local optimization algorithm to use, choose any from {supported_methods}", "L-BFGS-B")) def tune(searchspace: Searchspace, runner, tuning_options): diff --git a/kernel_tuner/strategies/mls.py b/kernel_tuner/strategies/mls.py index f075424b4..b8ecf030c 100644 --- a/kernel_tuner/strategies/mls.py +++ b/kernel_tuner/strategies/mls.py @@ -1,11 +1,9 @@ -""" The strategy that uses multi-start local search """ -from collections import OrderedDict - +"""The strategy that uses multi-start local search.""" from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies import common from kernel_tuner.strategies.greedy_mls import tune as mls_tune -_options = OrderedDict(neighbor=("Method for selecting neighboring nodes, choose from Hamming or adjacent", "Hamming"), +_options = dict(neighbor=("Method for selecting neighboring nodes, choose from Hamming or adjacent", "Hamming"), restart=("controls greedyness, i.e. 
whether to restart from a position as soon as an improvement is found", False), order=("set a user-specified order to search among dimensions while hillclimbing", None), randomize=("use a random order to search among dimensions while hillclimbing", True)) diff --git a/kernel_tuner/strategies/ordered_greedy_mls.py b/kernel_tuner/strategies/ordered_greedy_mls.py index fd0f9030a..cd40ba778 100644 --- a/kernel_tuner/strategies/ordered_greedy_mls.py +++ b/kernel_tuner/strategies/ordered_greedy_mls.py @@ -1,11 +1,9 @@ -""" A greedy multi-start local search algorithm for parameter search that traverses variables in order.""" -from collections import OrderedDict - +"""A greedy multi-start local search algorithm for parameter search that traverses variables in order.""" from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies import common from kernel_tuner.strategies.greedy_mls import tune as mls_tune -_options = OrderedDict(neighbor=("Method for selecting neighboring nodes, choose from Hamming or adjacent", "Hamming"), +_options = dict(neighbor=("Method for selecting neighboring nodes, choose from Hamming or adjacent", "Hamming"), restart=("controls greedyness, i.e. 
whether to restart from a position as soon as an improvement is found", True), order=("set a user-specified order to search among dimensions while hillclimbing", None), randomize=("use a random order to search among dimensions while hillclimbing", False)) diff --git a/kernel_tuner/strategies/pso.py b/kernel_tuner/strategies/pso.py index 37caedc7f..5b0df1429 100644 --- a/kernel_tuner/strategies/pso.py +++ b/kernel_tuner/strategies/pso.py @@ -1,16 +1,15 @@ -""" The strategy that uses particle swarm optimization""" +"""The strategy that uses particle swarm optimization.""" import random import sys -from collections import OrderedDict import numpy as np + from kernel_tuner import util from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies import common -from kernel_tuner.strategies.common import (CostFunc, - scale_from_params) +from kernel_tuner.strategies.common import CostFunc, scale_from_params -_options = OrderedDict(popsize=("Population size", 20), +_options = dict(popsize=("Population size", 20), maxiter=("Maximum number of iterations", 100), w=("Inertia weight constant", 0.5), c1=("Cognitive constant", 2.0), diff --git a/kernel_tuner/strategies/random_sample.py b/kernel_tuner/strategies/random_sample.py index 77e69505d..022eda534 100644 --- a/kernel_tuner/strategies/random_sample.py +++ b/kernel_tuner/strategies/random_sample.py @@ -1,13 +1,12 @@ -""" Iterate over a random sample of the parameter space """ -from collections import OrderedDict - +"""Iterate over a random sample of the parameter space.""" import numpy as np + from kernel_tuner import util from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies import common from kernel_tuner.strategies.common import CostFunc -_options = OrderedDict(fraction=("Fraction of the search space to cover value in [0, 1]", 0.1)) +_options = dict(fraction=("Fraction of the search space to cover value in [0, 1]", 0.1)) def tune(searchspace: Searchspace, runner, tuning_options): 
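The `fraction` option in the random_sample diff above determines how much of the search space is covered. A minimal standalone sketch of that sampling idea, assuming a toy parameter space and NumPy's `default_rng` (these are illustrative choices, not the strategy's actual internals):

```python
import numpy as np

# Sketch: sample a fraction of a small parameter space without replacement,
# similar in spirit to the random_sample strategy's 'fraction' option.
rng = np.random.default_rng(0)
space = [(x, y) for x in [16, 32, 64] for y in [1, 2, 4, 8]]  # 12 configs
fraction = 0.5
num_samples = max(1, int(len(space) * fraction))
indices = rng.choice(len(space), size=num_samples, replace=False)
sample = [space[i] for i in indices]
print(len(sample))  # 6
```

Sampling without replacement guarantees each configuration is evaluated at most once, which is why a fraction in [0, 1] maps directly to the number of evaluations.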
diff --git a/kernel_tuner/strategies/simulated_annealing.py b/kernel_tuner/strategies/simulated_annealing.py index 883e6ff98..dce929b7b 100644 --- a/kernel_tuner/strategies/simulated_annealing.py +++ b/kernel_tuner/strategies/simulated_annealing.py @@ -1,15 +1,15 @@ -""" The strategy that uses particle swarm optimization""" +"""The strategy that uses simulated annealing.""" import random import sys -from collections import OrderedDict import numpy as np + from kernel_tuner import util from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies import common from kernel_tuner.strategies.common import CostFunc -_options = OrderedDict(T=("Starting temperature", 1.0), +_options = dict(T=("Starting temperature", 1.0), T_min=("End temperature", 0.001), alpha=("Alpha parameter", 0.995), maxiter=("Number of iterations within each annealing step", 1)) @@ -86,7 +86,7 @@ def tune(searchspace: Searchspace, runner, tuning_options): tune.__doc__ = common.get_strategy_docstring("Simulated Annealing", _options) def acceptance_prob(old_cost, new_cost, T, tuning_options): - """annealing equation, with modifications to work towards a lower value""" + """Annealing equation, with modifications to work towards a lower value.""" error_val = sys.float_info.max if not tuning_options.objective_higher_is_better else -sys.float_info.max # if start pos is not valid, always move if old_cost == error_val: @@ -104,7 +104,7 @@ def acceptance_prob(old_cost, new_cost, T, tuning_options): def neighbor(pos, searchspace: Searchspace): - """return a random neighbor of pos""" + """Return a random neighbor of pos.""" # Note: this is not the same as the previous implementation, because it is possible that non-edge parameters remain the same, but suggested configurations will all be within restrictions neighbors = searchspace.get_neighbors(tuple(pos), neighbor_method='Hamming') if random.random() < 0.2 else searchspace.get_neighbors(tuple(pos), neighbor_method='strictly-adjacent')
if len(neighbors) > 0: diff --git a/kernel_tuner/util.py b/kernel_tuner/util.py index fbf949ffd..81b45b163 100644 --- a/kernel_tuner/util.py +++ b/kernel_tuner/util.py @@ -1,19 +1,35 @@ -""" Module for kernel tuner utility functions """ -import time -from inspect import signature +"""Module for kernel tuner utility functions.""" +from __future__ import annotations + +import errno import json -from collections import OrderedDict +import logging import os +import re import sys -import errno import tempfile -import logging +import time import warnings -import re +from inspect import signature from types import FunctionType +from typing import Optional, Union import numpy as np -from constraint import Constraint, AllDifferentConstraint, AllEqualConstraint, MaxSumConstraint, ExactSumConstraint, MinSumConstraint, InSetConstraint, NotInSetConstraint, SomeInSetConstraint, SomeNotInSetConstraint, FunctionConstraint +from constraint import ( + AllDifferentConstraint, + AllEqualConstraint, + Constraint, + ExactSumConstraint, + FunctionConstraint, + InSetConstraint, + MaxProdConstraint, + MaxSumConstraint, + MinProdConstraint, + MinSumConstraint, + NotInSetConstraint, + SomeInSetConstraint, + SomeNotInSetConstraint, +) from kernel_tuner.accuracy import Tunable @@ -32,7 +48,6 @@ class ErrorConfig(str): - def __str__(self): return self.__class__.__name__ @@ -53,7 +68,7 @@ class RuntimeFailedConfig(ErrorConfig): class NpEncoder(json.JSONEncoder): - """ Class we use for dumping Numpy objects to JSON """ + """Class we use for dumping Numpy objects to JSON.""" def default(self, obj): if isinstance(obj, np.integer): @@ -65,18 +80,17 @@ def default(self, obj): return super(NpEncoder, self).default(obj) -class TorchPlaceHolder(): - +class TorchPlaceHolder: def __init__(self): - self.Tensor = Exception #using Exception here as a type that will never be among kernel arguments + self.Tensor = Exception # using Exception here as a type that will never be among kernel arguments class 
SkippableFailure(Exception): - """Exception used to raise when compiling or launching a kernel fails for a reason that can be expected""" + """Exception used to raise when compiling or launching a kernel fails for a reason that can be expected.""" class StopCriterionReached(Exception): - """Exception thrown when a stop criterion has been reached""" + """Exception thrown when a stop criterion has been reached.""" try: @@ -88,7 +102,7 @@ class StopCriterionReached(Exception): def check_argument_type(dtype, kernel_argument): - """check if the numpy.dtype matches the type used in the code""" + """Check if the numpy.dtype matches the type used in the code.""" types_map = { "bool": ["bool"], "uint8": ["uchar", "unsigned char", "uint8_t"], @@ -96,22 +110,22 @@ def check_argument_type(dtype, kernel_argument): "uint16": ["ushort", "unsigned short", "uint16_t"], "int16": ["short", "int16_t"], "uint32": ["uint", "unsigned int", "uint32_t"], - "int32": ["int", "int32_t"], # discrepancy between OpenCL and C here, long may be 32bits in C + "int32": ["int", "int32_t"], # discrepancy between OpenCL and C here, long may be 32bits in C "uint64": ["ulong", "unsigned long", "uint64_t"], "int64": ["long", "int64_t"], "float16": ["half"], "float32": ["float"], "float64": ["double"], "complex64": ["float2"], - "complex128": ["double2"] + "complex128": ["double2"], } if dtype in types_map: return any([substr in kernel_argument for substr in types_map[dtype]]) - return False # unknown dtype. do not throw exception to still allow kernel to run. + return False # unknown dtype. do not throw exception to still allow kernel to run. 
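The `check_argument_type` hunk above matches a NumPy dtype against the C-level kernel declaration by substring search in a dtype-to-C-names table. A minimal standalone sketch of that mechanism (the `types_map` here is an abbreviated stand-in for the full table in `util.py`):

```python
# Abbreviated stand-in for the full types_map in kernel_tuner/util.py
types_map = {
    "int32": ["int", "int32_t"],
    "float32": ["float"],
    "float64": ["double"],
}

def dtype_matches(dtype: str, kernel_argument: str) -> bool:
    """Return True if any known C name for `dtype` occurs in the declaration."""
    if dtype in types_map:
        return any(substr in kernel_argument for substr in types_map[dtype])
    return False  # unknown dtype: do not raise, still allow the kernel to run

print(dtype_matches("float32", "const float* data"))  # True
print(dtype_matches("int32", "double x"))             # False
```

Returning `False` for an unknown dtype (rather than raising) mirrors the permissive choice in the diff: a mismatch becomes a collected error message, not a hard failure.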
def check_argument_list(kernel_name, kernel_string, args): - """ raise an exception if a kernel arguments do not match host arguments """ + """Raise an exception if kernel arguments do not match host arguments.""" kernel_arguments = list() collected_errors = list() for iterator in re.finditer(kernel_name + "[ \n\t]*" + r"\(", kernel_string): @@ -124,7 +138,7 @@ def check_argument_list(kernel_name, kernel_string, args): if len(arguments) != len(args): collected_errors[arguments_set].append("Kernel and host argument lists do not match in size.") continue - for (i, arg) in enumerate(args): + for i, arg in enumerate(args): kernel_argument = arguments[i] # Fix to deal with tunable arguments @@ -132,17 +146,30 @@ def check_argument_list(kernel_name, kernel_string, args): continue if not isinstance(arg, (np.ndarray, np.generic, cp.ndarray, torch.Tensor)): - raise TypeError("Argument at position " + str(i) + " of type: " + str(type(arg)) + " should be of type np.ndarray or numpy scalar") + raise TypeError( + "Argument at position " + + str(i) + + " of type: " + + str(type(arg)) + + " should be of type np.ndarray or numpy scalar" + ) correct = True - if isinstance(arg, np.ndarray) and not "*" in kernel_argument: - correct = False # array is passed to non-pointer kernel argument + if isinstance(arg, np.ndarray) and "*" not in kernel_argument: + correct = False # array is passed to non-pointer kernel argument if correct and check_argument_type(str(arg.dtype), kernel_argument): continue - collected_errors[arguments_set].append("Argument at position " + str(i) + " of dtype: " + str(arg.dtype) + " does not match " + kernel_argument + - ".") + collected_errors[arguments_set].append( + "Argument at position " + + str(i) + + " of dtype: " + + str(arg.dtype) + + " does not match " + + kernel_argument + + "."
+ ) if not collected_errors[arguments_set]: # We assume that if there is a possible list of arguments that matches with the provided one # it is the right one @@ -152,21 +179,21 @@ def check_argument_list(kernel_name, kernel_string, args): def check_stop_criterion(to): - """ checks if max_fevals is reached or time limit is exceeded """ + """Checks if max_fevals is reached or time limit is exceeded.""" if "max_fevals" in to and len(to.unique_results) >= to.max_fevals: raise StopCriterionReached("max_fevals reached") if "time_limit" in to and (((time.perf_counter() - to.start_time) + (to.simulated_time * 1e-3)) > to.time_limit): raise StopCriterionReached("time limit exceeded") -def check_tune_params_list(tune_params, observers): - """ raise an exception if a tune parameter has a forbidden name """ +def check_tune_params_list(tune_params, observers, simulation_mode=False): + """Raise an exception if a tune parameter has a forbidden name.""" forbidden_names = ("grid_size_x", "grid_size_y", "grid_size_z", "time") for name, param in tune_params.items(): if name in forbidden_names: raise ValueError("Tune parameter " + name + " with value " + str(param) + " has a forbidden name!") if any("nvml_" in param for param in tune_params): - if not observers or not any(isinstance(obs, NVMLObserver) for obs in observers): + if not simulation_mode and (not observers or not any(isinstance(obs, NVMLObserver) for obs in observers)): raise ValueError("Tune parameters starting with nvml_ require an NVMLObserver!") @@ -193,14 +220,16 @@ def check_block_size_params_names_list(block_size_names, tune_params): if block_size_names is not None: for name in block_size_names: if name not in tune_params.keys(): - warnings.warn("Block size name " + name + " is not specified in the tunable parameters list!", UserWarning) - else: # if default block size names are used + warnings.warn( + "Block size name " + name + " is not specified in the tunable parameters list!", UserWarning + ) + else: # if 
default block size names are used if not any([k in default_block_size_names for k in tune_params.keys()]): warnings.warn("None of the tunable parameters specify thread block dimensions!", UserWarning) def check_restrictions(restrictions, params: dict, verbose: bool): - """ check whether a specific instance meets the search space restrictions """ + """Check whether a specific instance meets the search space restrictions.""" valid = True if callable(restrictions): valid = restrictions(params) @@ -214,7 +243,11 @@ def check_restrictions(restrictions, params: dict, verbose: bool): valid = False break # if it's a string, fill in the parameters and evaluate - elif not eval(replace_param_occurrences(restrict, params)): + elif isinstance(restrict, str) and not eval(replace_param_occurrences(restrict, params)): + valid = False + break + # if it's a function, call it + elif callable(restrict) and not restrict(params): valid = False break except ZeroDivisionError: @@ -225,21 +258,28 @@ def check_restrictions(restrictions, params: dict, verbose: bool): def convert_constraint_restriction(restrict: Constraint): - """ Convert the python-constraint to a function for backwards compatibility """ + """Convert the python-constraint to a function for backwards compatibility.""" if isinstance(restrict, FunctionConstraint): - f_restrict = lambda p: restrict._func(*p) + def f_restrict(p): + return restrict._func(*p) elif isinstance(restrict, AllDifferentConstraint): - f_restrict = lambda p: len(set(p)) == len(p) + def f_restrict(p): + return len(set(p)) == len(p) elif isinstance(restrict, AllEqualConstraint): - f_restrict = lambda p: all(x == p[0] for x in p) + def f_restrict(p): + return all(x == p[0] for x in p) elif isinstance(restrict, MaxProdConstraint): - f_restrict = lambda p: np.prod(p) <= restrict._exactsum + def f_restrict(p): + return np.prod(p) <= restrict._maxprod elif isinstance(restrict, MaxSumConstraint): - f_restrict = lambda p: sum(p) <= restrict._exactsum + def 
f_restrict(p): + return sum(p) <= restrict._maxsum elif isinstance(restrict, ExactSumConstraint): - f_restrict = lambda p: sum(p) == restrict._exactsum + def f_restrict(p): + return sum(p) == restrict._exactsum elif isinstance(restrict, MinSumConstraint): - f_restrict = lambda p: sum(p) >= restrict._exactsum + def f_restrict(p): + return sum(p) >= restrict._minsum elif isinstance(restrict, (InSetConstraint, NotInSetConstraint, SomeInSetConstraint, SomeNotInSetConstraint)): raise NotImplementedError( f"Restriction of the type {type(restrict)} is explicitely not supported in backwards compatibility mode, because the behaviour is too complex. Please rewrite this constraint to a function to use it with this algorithm." @@ -250,15 +290,15 @@ def convert_constraint_restriction(restrict: Constraint): def check_thread_block_dimensions(params, max_threads, block_size_names=None): - """ check on maximum thread block dimensions """ + """Check on maximum thread block dimensions.""" dims = get_thread_block_dimensions(params, block_size_names) return np.prod(dims) <= max_threads def config_valid(config, tuning_options, max_threads): - """ combines restrictions and a check on the max thread block dimension to check config validity """ + """Combines restrictions and a check on the max thread block dimension to check config validity.""" legal = True - params = OrderedDict(zip(tuning_options.tune_params.keys(), config)) + params = dict(zip(tuning_options.tune_params.keys(), config)) if tuning_options.restrictions: legal = check_restrictions(tuning_options.restrictions, params, False) if not legal: @@ -269,7 +309,7 @@ def config_valid(config, tuning_options, max_threads): def delete_temp_file(filename): - """ delete a temporary file, don't complain if no longer exists """ + """Delete a temporary file, don't complain if no longer exists.""" try: os.remove(filename) except OSError as e: @@ -278,7 +318,7 @@ def delete_temp_file(filename): def detect_language(kernel_string): - """attempt 
to detect language from the kernel_string""" + """Attempt to detect language from the kernel_string.""" if "__global__" in kernel_string: lang = "CUDA" elif "__kernel" in kernel_string: @@ -289,7 +329,7 @@ def detect_language(kernel_string): def get_best_config(results, objective, objective_higher_is_better=False): - """ Returns the best configuration from a list of results according to some objective """ + """Returns the best configuration from a list of results according to some objective.""" func = max if objective_higher_is_better else min ignore_val = sys.float_info.max if not objective_higher_is_better else -sys.float_info.max best_config = func(results, key=lambda x: x[objective] if isinstance(x[objective], float) else ignore_val) @@ -297,7 +337,7 @@ def get_best_config(results, objective, objective_higher_is_better=False): def get_config_string(params, keys=None, units=None): - """ return a compact string representation of a measurement """ + """Return a compact string representation of a measurement.""" def compact_number(v): if isinstance(v, float): @@ -322,7 +362,7 @@ def compact_number(v): def get_grid_dimensions(current_problem_size, params, grid_div, block_size_names): - """compute grid dims based on problem sizes and listed grid divisors""" + """Compute grid dims based on problem sizes and listed grid divisors.""" def get_dimension_divisor(divisor_list, default, params): if divisor_list is None: @@ -340,14 +380,12 @@ def get_dimension_divisor(divisor_list, default, params): def get_instance_string(params): - """ combine the parameters to a string mostly used for debug output - use of OrderedDict is advised - """ + """Combine the parameters to a string, mostly used for debug output; use of dict is advised.""" return "_".join([str(i) for i in params.values()]) def get_kernel_string(kernel_source, params=None): - """ retrieve the kernel source and return as a string + """Retrieve the kernel source and return as a string.
This function processes the passed kernel_source argument, which could be a function, a string with a filename, or just a string with code already. @@ -372,7 +410,7 @@ def get_kernel_string(kernel_source, params=None): :rtype: string """ # logging.debug('get_kernel_string called with %s', str(kernel_source)) - logging.debug('get_kernel_string called') + logging.debug("get_kernel_string called") kernel_string = None if callable(kernel_source): @@ -388,11 +426,11 @@ def get_kernel_string(kernel_source, params=None): def get_problem_size(problem_size, params): - """compute current problem size""" + """Compute current problem size.""" if callable(problem_size): problem_size = problem_size(params) if isinstance(problem_size, (str, int, np.integer)): - problem_size = (problem_size, ) + problem_size = (problem_size,) current_problem_size = [1, 1, 1] for i, s in enumerate(problem_size): if isinstance(s, str): @@ -405,28 +443,30 @@ def get_problem_size(problem_size, params): def get_smem_args(smem_args, params): - """ return a dict with kernel instance specific size """ + """Return a dict with kernel instance specific size.""" result = smem_args.copy() - if 'size' in result: - size = result['size'] + if "size" in result: + size = result["size"] if callable(size): size = size(params) elif isinstance(size, str): size = replace_param_occurrences(size, params) size = int(eval(size)) - result['size'] = size + result["size"] = size return result def get_temp_filename(suffix=None): - """ return a string in the form of temp_X, where X is a large integer """ - tmp_file = tempfile.mkstemp(suffix=suffix or "", prefix="temp_", dir=os.getcwd()) # or "" for Python 2 compatibility + """Return a string in the form of temp_X, where X is a large integer.""" + tmp_file = tempfile.mkstemp( + suffix=suffix or "", prefix="temp_", dir=os.getcwd() + ) # or "" for Python 2 compatibility os.close(tmp_file[0]) return tmp_file[1] def get_thread_block_dimensions(params, block_size_names=None): - 
"""thread block size from tuning params, currently using convention""" + """Thread block size from tuning params, currently using convention.""" if not block_size_names: block_size_names = default_block_size_names @@ -437,7 +477,7 @@ def get_thread_block_dimensions(params, block_size_names=None): def get_total_timings(results, env, overhead_time): - """ Sum all timings and put their totals in the env """ + """Sum all timings and put their totals in the env.""" total_framework_time = 0 total_strategy_time = 0 total_compile_time = 0 @@ -445,34 +485,41 @@ def get_total_timings(results, env, overhead_time): total_benchmark_time = 0 if results: for result in results: - if 'framework_time' not in result or 'strategy_time' not in result or 'compile_time' not in result or 'verification_time' not in result: - #warnings.warn("No detailed timings in results") + if ( + "framework_time" not in result + or "strategy_time" not in result + or "compile_time" not in result + or "verification_time" not in result + ): + # warnings.warn("No detailed timings in results") return env - total_framework_time += result['framework_time'] - total_strategy_time += result['strategy_time'] - total_compile_time += result['compile_time'] - total_verification_time += result['verification_time'] - total_benchmark_time += result['benchmark_time'] + total_framework_time += result["framework_time"] + total_strategy_time += result["strategy_time"] + total_compile_time += result["compile_time"] + total_verification_time += result["verification_time"] + total_benchmark_time += result["benchmark_time"] # add the seperate times to the environment dict - env['total_framework_time'] = total_framework_time - env['total_strategy_time'] = total_strategy_time - env['total_compile_time'] = total_compile_time - env['total_verification_time'] = total_verification_time - env['total_benchmark_time'] = total_benchmark_time - if 'simulated_time' in env: - overhead_time += env['simulated_time'] - env['overhead_time'] = 
overhead_time - (total_framework_time + total_strategy_time + total_compile_time + total_verification_time + total_benchmark_time) + env["total_framework_time"] = total_framework_time + env["total_strategy_time"] = total_strategy_time + env["total_compile_time"] = total_compile_time + env["total_verification_time"] = total_verification_time + env["total_benchmark_time"] = total_benchmark_time + if "simulated_time" in env: + overhead_time += env["simulated_time"] + env["overhead_time"] = overhead_time - ( + total_framework_time + total_strategy_time + total_compile_time + total_verification_time + total_benchmark_time + ) return env def print_config(config, tuning_options, runner): - """print the configuration string with tunable parameters and benchmark results""" + """Print the configuration string with tunable parameters and benchmark results.""" print_config_output(tuning_options.tune_params, config, runner.quiet, tuning_options.metrics, runner.units) def print_config_output(tune_params, params, quiet, metrics, units): - """print the configuration string with tunable parameters and benchmark results""" + """Print the configuration string with tunable parameters and benchmark results.""" print_keys = list(tune_params.keys()) + ["time"] if metrics: print_keys += metrics.keys() @@ -482,35 +529,38 @@ def print_config_output(tune_params, params, quiet, metrics, units): def process_metrics(params, metrics): - """ process user-defined metrics for derived benchmark results + """Process user-defined metrics for derived benchmark results. - Metrics must be an OrderedDict to support composable metrics. The dictionary keys describe + Metrics must be a dictionary to support composable metrics. The dictionary keys describe the name given to this user-defined metric and will be used as the key in the results dictionaries return by Kernel Tuner. 
The values describe how to calculate the user-defined metric, using either a string expression in which the tunable parameters and benchmark results can be used as variables, or as a function that accepts a dictionary as argument. + Example: - metrics = OrderedDict() + metrics = dict() metrics["x"] = "10000 / time" metrics["x2"] = "x*x" Note that the values in the metric dictionary can also be functions that accept params as argument. + + Example: - metrics = OrderedDict() + metrics = dict() metrics["GFLOP/s"] = lambda p : 10000 / p["time"] :param params: A dictionary with tunable parameters and benchmark results. :type params: dict - :param metrics: An OrderedDict with user-defined metrics that can be used to create derived benchmark results. - :type metrics: OrderedDict + :param metrics: A dictionary with user-defined metrics that can be used to create derived benchmark results. + :type metrics: dict :returns: An updated params dictionary with the derived metrics inserted along with the benchmark results. 
:rtype: dict """ - if not isinstance(metrics, OrderedDict): - raise ValueError("metrics should be an OrderedDict to preserve order and support composability") + if not isinstance(metrics, dict): + raise ValueError("metrics should be a dictionary to preserve order and support composability") for k, v in metrics.items(): if isinstance(v, str): value = eval(replace_param_occurrences(v, params)) @@ -518,15 +568,14 @@ def process_metrics(params, metrics): value = v(params) else: raise ValueError("metric dicts values should be strings or callable") - # We overwrite any existing values for the given key params[k] = value return params def looks_like_a_filename(kernel_source): - """ attempt to detect whether source code or a filename was passed """ - logging.debug('looks_like_a_filename called') + """Attempt to detect whether source code or a filename was passed.""" + logging.debug("looks_like_a_filename called") result = False if isinstance(kernel_source, str): result = True @@ -543,12 +592,12 @@ def looks_like_a_filename(kernel_source): result = False # string must contain substring ".c", ".opencl", or ".F" result = result and any([s in kernel_source for s in (".c", ".opencl", ".F")]) - logging.debug('kernel_source is a filename: %s' % str(result)) + logging.debug("kernel_source is a filename: %s" % str(result)) return result def prepare_kernel_string(kernel_name, kernel_string, params, grid, threads, block_size_names, lang, defines): - """ prepare kernel string for compilation + """Prepare kernel string for compilation. 
Prepends the kernel with a series of C preprocessor defines specific to this kernel instance: @@ -587,7 +636,7 @@ def prepare_kernel_string(kernel_name, kernel_string, params, grid, threads, blo :rtype: string """ - logging.debug('prepare_kernel_string called for %s', kernel_name) + logging.debug("prepare_kernel_string called for %s", kernel_name) kernel_prefix = "" @@ -597,7 +646,7 @@ def prepare_kernel_string(kernel_name, kernel_string, params, grid, threads, blo # * each tunable parameter # * kernel_tuner=1 if defines is None: - defines = OrderedDict() + defines = dict() grid_dim_names = ["grid_size_x", "grid_size_y", "grid_size_z"] for i, g in enumerate(grid): @@ -630,7 +679,7 @@ def prepare_kernel_string(kernel_name, kernel_string, params, grid, threads, blo # in OpenCL this isn't the case and we can just insert "#define loop_unroll_factor N" # using 0 to disable specifying a loop unrolling factor for this loop if v == "0": - kernel_string = re.sub(r"\n\s*#pragma\s+unroll\s+" + k, "\n", kernel_string) # + r"[^\S]*" + kernel_string = re.sub(r"\n\s*#pragma\s+unroll\s+" + k, "\n", kernel_string) # + r"[^\S]*" else: kernel_prefix += f"constexpr int {k} = {v};\n" else: @@ -648,18 +697,18 @@ def prepare_kernel_string(kernel_name, kernel_string, params, grid, threads, blo def read_file(filename): - """ return the contents of the file named filename or None if file not found """ + """Return the contents of the file named filename or None if file not found.""" if os.path.isfile(filename): - with open(filename, 'r') as f: + with open(filename, "r") as f: return f.read() -def replace_param_occurrences(string, params): - """replace occurrences of the tuning params with their current value""" - result = '' +def replace_param_occurrences(string: str, params: dict): + """Replace occurrences of the tuning params with their current value.""" + result = "" # Split on tokens and replace a token if it is a key in `params`. 
- for part in re.split('([a-zA-Z0-9_]+)', string): + for part in re.split("([a-zA-Z0-9_]+)", string): if part in params: result += str(params[part]) else: @@ -669,7 +718,7 @@ def replace_param_occurrences(string, params): def setup_block_and_grid(problem_size, grid_div, params, block_size_names=None): - """compute problem size, thread block and grid dimensions for this kernel""" + """Compute problem size, thread block and grid dimensions for this kernel.""" threads = get_thread_block_dimensions(params, block_size_names) current_problem_size = get_problem_size(problem_size, params) grid = get_grid_dimensions(current_problem_size, params, grid_div, block_size_names) @@ -677,13 +726,13 @@ def setup_block_and_grid(problem_size, grid_div, params, block_size_names=None): def write_file(filename, string): - """dump the contents of string to a file called filename""" + """Dump the contents of string to a file called filename.""" # ugly fix, hopefully we can find a better one if sys.version_info[0] >= 3: - with open(filename, 'w', encoding="utf-8") as f: + with open(filename, "w", encoding="utf-8") as f: f.write(string) else: - with open(filename, 'w') as f: + with open(filename, "w") as f: f.write(string.encode("utf-8")) @@ -705,47 +754,245 @@ def has_kw_argument(func, name): if v is None: return None - if has_kw_argument(v, 'atol'): + if has_kw_argument(v, "atol"): return v return lambda answer, result_host, atol: v(answer, result_host) -def parse_restrictions(restrictions: list, tune_params: dict): - """ parses restrictions from a list of strings into a compilable function """ - +def parse_restrictions(restrictions: list[str], tune_params: dict, monolithic = False, try_to_constraint = True) -> list[tuple[Union[Constraint, str], list[str]]]: + """Parses restrictions from a list of strings into compilable functions and constraints, or a single compilable function (if monolithic is True). 
Returns a list of tuples of (strings or constraints) and parameters.""" # rewrite the restrictions so variables are singled out regex_match_variable = r"([a-zA-Z_$][a-zA-Z_$0-9]*)" def replace_params(match_object): key = match_object.group(1) if key in tune_params: - return 'params["' + key + '"]' + param = str(key) + return "params[params_index['" + param + "']]" + else: + return key + + def replace_params_split(match_object): + # careful: has side-effect of adding to set `params_used` + key = match_object.group(1) + if key in tune_params: + param = str(key) + params_used.add(param) + return param else: return key - parsed = ") and (".join([re.sub(regex_match_variable, replace_params, res) for res in restrictions]) + def to_multiple_restrictions(restrictions: list[str]) -> list[str]: + """Split the restrictions into multiple restrictions where possible (e.g. 3 <= x * y < 9 <= z -> [(MinProd(3), [x, y]), (MaxProd(9-1), [x, y]), (MinProd(9), [z])]).""" + split_restrictions = list() + for res in restrictions: + # if there are logic chains in the restriction, skip splitting further + if " and " in res or " or " in res: + split_restrictions.append(res) + continue + # find the indices of splittable comparators + comparators = ['<=', '>=', '>', '<'] + comparators_indices = [(m.start(0), m.end(0)) for m in re.finditer('|'.join(comparators), res)] + if len(comparators_indices) <= 1: + # this can't be split further + split_restrictions.append(res) + continue + # split the restrictions from the previous to the next comparator + for index in range(len(comparators_indices)): + temp_copy = res + prev_stop = comparators_indices[index-1][1] + 1 if index > 0 else 0 + next_stop = comparators_indices[index+1][0] if index < len(comparators_indices) - 1 else len(temp_copy) + split_restrictions.append(temp_copy[prev_stop:next_stop].strip()) + return split_restrictions + + def to_numeric_constraint(restriction: str, params: list[str]) -> Optional[Union[MinSumConstraint,
ExactSumConstraint, MaxSumConstraint, MaxProdConstraint]]: + """Converts a restriction to a built-in numeric constraint if possible.""" + comparators = ['<=', '==', '>=', '>', '<'] + comparators_found = re.findall('|'.join(comparators), restriction) + # check if there is exactly one comparator, if not, return None + if len(comparators_found) != 1: + return None + comparator = comparators_found[0] + + # split the string on the comparison and remove leading and trailing whitespace + left, right = tuple(s.strip() for s in restriction.split(comparator)) + + # find out which side is the constant number + def is_or_evals_to_number(s: str) -> Optional[Union[int, float]]: + try: + # check if it's a number or solvable to a number (e.g. '32*2') + number = eval(s) + assert isinstance(number, (int, float)) + return number + except Exception: + # it's not a solvable subexpression, return None + return None + + # either the left or right side of the equation must evaluate to a constant number + left_num = is_or_evals_to_number(left) + right_num = is_or_evals_to_number(right) + if (left_num is None and right_num is None) or (left_num is not None and right_num is not None): + # left_num and right_num can't be both None or both a constant + return None + number, variables, variables_on_left = (left_num, right.strip(), False) if left_num is not None else (right_num, left.strip(), True) + + # if the number is an integer, we can map '>' to '>=' and '<' to '<=' by changing the number (does not work with floating points!) 
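The chained-comparator splitting performed by `to_multiple_restrictions` above can be exercised in isolation; this sketch reimplements just the splitting loop (the helper name `split_chained` is mine, not from the patch):

```python
import re

def split_chained(res: str) -> list[str]:
    """Split a chained restriction like '3 <= x * y < 9' into single-comparator restrictions."""
    # restrictions containing logic chains are not split further
    if " and " in res or " or " in res:
        return [res]
    # find the spans of splittable comparators ('<=' and '>=' must precede '<' and '>')
    spans = [(m.start(0), m.end(0)) for m in re.finditer("|".join(["<=", ">=", ">", "<"]), res)]
    if len(spans) <= 1:
        return [res]
    # cut from just after the previous comparator to just before the next one
    parts = []
    for i in range(len(spans)):
        start = spans[i - 1][1] + 1 if i > 0 else 0
        stop = spans[i + 1][0] if i < len(spans) - 1 else len(res)
        parts.append(res[start:stop].strip())
    return parts

print(split_chained("3 <= x * y < 9"))  # → ['3 <= x * y', 'x * y < 9']
```

Each resulting single-comparator restriction then has a much better chance of mapping onto a built-in constraint such as `MinProdConstraint` or `MaxProdConstraint`.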
+ number_is_int = isinstance(number, int) + if number_is_int: + if comparator == '<': + # (2 < x) == (2+1 <= x) + number += 1 + elif comparator == '>': + # (2 > x) == (2-1 >= x) + number -= 1 + + # check if an operator is applied on the variables, if not return + operators = [r'\*\*', r'\*', r'\+'] + operators_found = re.findall(str('|'.join(operators)), variables) + if len(operators_found) == 0: + # no operators found, return only based on comparator + if len(params) != 1 or variables not in params: + # there was more than one variable but no operator + return None + # map to a Constraint + # if there are restrictions with a single variable, it will be used to prune the domain at the start + elif comparator == '==': + return ExactSumConstraint(number) + elif comparator == '<=' or (comparator == '<' and number_is_int): + return MaxSumConstraint(number) if variables_on_left else MinSumConstraint(number) + elif comparator == '>=' or (comparator == '>' and number_is_int): + return MinSumConstraint(number) if variables_on_left else MaxSumConstraint(number) + raise ValueError(f"Invalid comparator {comparator}") + + # check which operator is applied on the variables + operator = operators_found[0] + if not all(o == operator for o in operators_found): + # if the operator is inconsistent (e.g.
'x + y * z == 3'), return None + return None + + # split the string on the comparison + splitted = variables.split(operator) + # check if there are only pure, non-recurring variables (no operations or constants) in the restriction + if len(splitted) == len(params) and all(s.strip() in params for s in splitted): + # map to a Constraint + if operator == '**': + # power operations are not (yet) supported, added to avoid matching the double asterisk + return None + elif operator == '*': + if comparator == '<=' or (comparator == '<' and number_is_int): + return MaxProdConstraint(number) if variables_on_left else MinProdConstraint(number) + elif comparator == '>=' or (comparator == '>' and number_is_int): + return MinProdConstraint(number) if variables_on_left else MaxProdConstraint(number) + elif operator == '+': + if comparator == '==': + return ExactSumConstraint(number) + elif comparator == '<=' or (comparator == '<' and number_is_int): + return MaxSumConstraint(number) if variables_on_left else MinSumConstraint(number) + elif comparator == '>=' or (comparator == '>' and number_is_int): + return MinSumConstraint(number) if variables_on_left else MaxSumConstraint(number) + else: + raise ValueError(f"Invalid operator {operator}") + return None + + def to_equality_constraint(restriction: str, params: list[str]) -> Optional[Union[AllEqualConstraint, AllDifferentConstraint]]: + """Converts a restriction to either an equality or inequality constraint on all the parameters if possible.""" + # check if all parameters are involved + if len(params) != len(tune_params): + return None + + # find whether (in)equalities appear in this restriction + equalities_found = re.findall('==', restriction) + inequalities_found = re.findall('!=', restriction) + # check if exactly one of the two has been found; if none or both have been found, return None + # (note: the comparisons must be parenthesized, because '^' binds tighter than '>') + if not ((len(equalities_found) > 0) ^ (len(inequalities_found) > 0)): + return None + comparator = equalities_found[0] if len(equalities_found) > 0
else inequalities_found[0] + + # split the string on the comparison + splitted = restriction.split(comparator) + # check if there are only pure, non-recurring variables (no operations or constants) in the restriction + if len(splitted) == len(params) and all(s.strip() in params for s in splitted): + # map to a Constraint + if comparator == '==': + return AllEqualConstraint() + elif comparator == '!=': + return AllDifferentConstraint() + raise ValueError(f"Not possible: comparator should be '==' or '!=', is {comparator}") + return None + + # create the parsed restrictions + if monolithic is False: + # split into multiple restrictions where possible + if try_to_constraint: + restrictions = to_multiple_restrictions(restrictions) + # split into functions that only take their relevant parameters + parsed_restrictions = list() + for res in restrictions: + params_used: set[str] = set() + parsed_restriction = re.sub(regex_match_variable, replace_params_split, res).strip() + params_used = list(params_used) + finalized_constraint = None + if try_to_constraint and " or " not in res and " and " not in res: + # check if we can turn this into the built-in numeric comparison constraint + finalized_constraint = to_numeric_constraint(parsed_restriction, params_used) + if finalized_constraint is None: + # check if we can turn this into the built-in equality comparison constraint + finalized_constraint = to_equality_constraint(parsed_restriction, params_used) + if finalized_constraint is None: + # we must turn it into a general function + finalized_constraint = f"def r({', '.join(params_used)}): return {parsed_restriction} \n" + parsed_restrictions.append((finalized_constraint, params_used)) + else: + # create one monolithic function + parsed_restrictions = ") and (".join([re.sub(regex_match_variable, replace_params, res) for res in restrictions]) + + # tidy up the code by removing the last suffix and unnecessary spaces + parsed_restrictions = "(" + parsed_restrictions.strip() + ")"
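When a restriction cannot be mapped to a built-in constraint, it is turned into a `def r(...)` source string, which `compile_restrictions` later compiles into a callable via `compile` and `types.FunctionType`. A minimal sketch of that mechanism, with a hypothetical restriction over two tunable parameters:

```python
from types import FunctionType

# hypothetical generated restriction string, as produced for "x * y <= 256"
restriction = "def r(x, y): return x * y <= 256 \n"

# compile the source into a module code object; its first constant is the code
# object for the function `r`, which FunctionType turns into a real callable
code_object = compile(restriction, "<string>", "exec")
func = FunctionType(code_object.co_consts[0], globals())

print(func(16, 16))  # → True  (16 * 16 == 256 satisfies the restriction)
print(func(32, 16))  # → False (512 exceeds the maximum product)
```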
+ parsed_restrictions = " ".join(parsed_restrictions.split()) - # tidy up the code by removing the last suffix and unnecessary spaces - parsed_restrictions = "(" + parsed.strip() + ")" - parsed_restrictions = " ".join(parsed_restrictions.split()) + # provide a mapping of the parameter names to the index in the tuple received + params_index = dict(zip(tune_params.keys(), range(len(tune_params.keys())))) - parsed_restrictions = f"def restrictions(params): return {parsed_restrictions} \n" + parsed_restrictions = [(f"def restrictions(*params): params_index = {params_index}; return {parsed_restrictions} \n", list(tune_params.keys()))] return parsed_restrictions -def compile_restrictions(restrictions: list, tune_params: dict): - """ parses restrictions from a list of strings into a callable function """ - parsed_restrictions = parse_restrictions(restrictions, tune_params) +def compile_restrictions(restrictions: list, tune_params: dict, monolithic = False, try_to_constraint = True) -> list[tuple[Union[str, Constraint, FunctionType], list[str]]]: + """Parses restrictions from a list of strings into a list of strings, Functions, or Constraints (if `try_to_constraint`) and parameters used, or a single Function if monolithic is true.""" + # filter the restrictions to get only the strings + restrictions_str, restrictions_ignore = [], [] + for r in restrictions: + (restrictions_str if isinstance(r, str) else restrictions_ignore).append(r) + if len(restrictions_str) == 0: + return restrictions_ignore + + # parse the strings + parsed_restrictions = parse_restrictions(restrictions_str, tune_params, monolithic=monolithic, try_to_constraint=try_to_constraint) + + # compile the parsed restrictions into a function + compiled_restrictions: list[tuple] = list() + for restriction, params_used in parsed_restrictions: + if isinstance(restriction, str): + # if it's a string, parse it to a function + code_object = compile(restriction, "", "exec") + func = 
FunctionType(code_object.co_consts[0], globals()) + compiled_restrictions.append((func, params_used)) + elif isinstance(restriction, Constraint): + # otherwise it already is a Constraint, pass it directly + compiled_restrictions.append((restriction, params_used)) + else: + raise ValueError(f"Restriction {restriction} is neither a string nor a Constraint: {type(restriction)}") - # actually compile - code_object = compile(parsed_restrictions, '', 'exec') - func = FunctionType(code_object.co_consts[0], globals()) - return func + # return the restrictions and used parameters + if len(restrictions_ignore) == 0: + return compiled_restrictions + restrictions_ignore = list(zip(restrictions_ignore, (() for _ in restrictions_ignore))) + return restrictions_ignore + compiled_restrictions def process_cache(cache, kernel_options, tuning_options, runner): - """cache file for storing tuned configurations + """Cache file for storing tuned configurations. the cache file is stored using JSON and uses the following format: @@ -768,16 +1015,16 @@ def process_cache(cache, kernel_options, tuning_options, runner): from an earlier (abruptly ended) tuning session.
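The patch replaces `OrderedDict` with plain `dict` throughout, which is safe because dicts preserve insertion order since Python 3.7; the order is what makes the parameter-name-to-index mapping for the monolithic restrictions function stable. A small sketch with made-up tunable parameters:

```python
# Plain dicts preserve insertion order since Python 3.7, so tunable
# parameters no longer need an OrderedDict (example values are made up).
tune_params = {"block_size_x": [16, 32, 64], "block_size_y": [1, 2]}

# same construction as in parse_restrictions: map each parameter name
# to its position in the tuple the restrictions function receives
params_index = dict(zip(tune_params.keys(), range(len(tune_params.keys()))))
print(params_index)  # → {'block_size_x': 0, 'block_size_y': 1}
```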
""" - # caching only works correctly if tunable_parameters are stored in a OrderedDict - if not isinstance(tuning_options.tune_params, OrderedDict): - raise ValueError("Caching only works correctly when tunable parameters are stored in a OrderedDict") + # caching only works correctly if tunable_parameters are stored in a dictionary + if not isinstance(tuning_options.tune_params, dict): + raise ValueError("Caching only works correctly when tunable parameters are stored in a dictionary") # if file does not exist, create new cache if not os.path.isfile(cache): if tuning_options.simulation_mode: raise ValueError(f"Simulation mode requires an existing cachefile: file {cache} does not exist") - c = OrderedDict() + c = dict() c["device_name"] = runner.dev.name c["kernel_name"] = kernel_options.kernel_name c["problem_size"] = kernel_options.problem_size if not callable(kernel_options.problem_size) else "callable" @@ -786,7 +1033,7 @@ def process_cache(cache, kernel_options, tuning_options, runner): c["objective"] = tuning_options.objective c["cache"] = {} - contents = json.dumps(c, cls=NpEncoder, indent="")[:-3] # except the last "}\n}" + contents = json.dumps(c, cls=NpEncoder, indent="")[:-3] # except the last "}\n}" # write the header to the cachefile with open(cache, "w") as cachefile: @@ -820,17 +1067,22 @@ def process_cache(cache, kernel_options, tuning_options, runner): raise ValueError("Cannot load cache which contains results for different problem_size") if cached_data["tune_params_keys"] != list(tuning_options.tune_params.keys()): if all(key in tuning_options.tune_params for key in cached_data["tune_params_keys"]): - raise ValueError(f"All tunable parameters are present, but the order is wrong. \ - Cache has order: {cached_data['tune_params_keys']}, tuning_options has: {list(tuning_options.tune_params.keys())}") - raise ValueError(f"Cannot load cache which contains results obtained with different tunable parameters. 
\ - Cache has: {cached_data['tune_params_keys']}, tuning_options has: {list(tuning_options.tune_params.keys())}") + raise ValueError( + f"All tunable parameters are present, but the order is wrong. \ + This is not possible because the order must be preserved to lookup the correct configuration in the cache. \ + Cache has order: {cached_data['tune_params_keys']}, tuning_options has: {list(tuning_options.tune_params.keys())}" + ) + raise ValueError( + f"Cannot load cache which contains results obtained with different tunable parameters. \ + Cache has: {cached_data['tune_params_keys']}, tuning_options has: {list(tuning_options.tune_params.keys())}" + ) tuning_options.cachefile = cache tuning_options.cache = cached_data["cache"] def read_cache(cache, open_cache=True): - """ Read the cachefile into a dictionary, if open_cache=True prepare the cachefile for appending """ + """Read the cachefile into a dictionary, if open_cache=True prepare the cachefile for appending.""" with open(cache, "r") as cachefile: filestr = cachefile.read().strip() @@ -849,7 +1101,7 @@ def read_cache(cache, open_cache=True): error_configs = { "InvalidConfig": InvalidConfig(), "CompilationFailedConfig": CompilationFailedConfig(), - "RuntimeFailedConfig": RuntimeFailedConfig() + "RuntimeFailedConfig": RuntimeFailedConfig(), } # replace strings with ErrorConfig instances @@ -876,11 +1128,10 @@ def close_cache(cache): def store_cache(key, params, tuning_options): - """ stores a new entry (key, params) to the cachefile """ - - #logging.debug('store_cache called, cache=%s, cachefile=%s' % (tuning_options.cache, tuning_options.cachefile)) + """Stores a new entry (key, params) to the cachefile.""" + # logging.debug('store_cache called, cache=%s, cachefile=%s' % (tuning_options.cache, tuning_options.cachefile)) if isinstance(tuning_options.cache, dict): - if not key in tuning_options.cache: + if key not in tuning_options.cache: tuning_options.cache[key] = params # Convert ErrorConfig objects to string, 
wanted to do this inside the JSONconverter but couldn't get it to work @@ -891,61 +1142,18 @@ def store_cache(key, params, tuning_options): if tuning_options.cachefile: with open(tuning_options.cachefile, "a") as cachefile: - cachefile.write("\n" + json.dumps({ key: output_params }, cls=NpEncoder)[1:-1] + ",") + cachefile.write("\n" + json.dumps({key: output_params}, cls=NpEncoder)[1:-1] + ",") def dump_cache(obj: str, tuning_options): - """ dumps a string in the cache, this omits the several checks of store_cache() to speed up the process - with great power comes great responsibility! """ + """Dumps a string in the cache, this omits the several checks of store_cache() to speed up the process - with great power comes great responsibility!""" if isinstance(tuning_options.cache, dict) and tuning_options.cachefile: with open(tuning_options.cachefile, "a") as cachefile: cachefile.write(obj) -class MaxProdConstraint(Constraint): - """ Constraint enforcing that values of given variables create a product up to a given amount """ - - def __init__(self, maxprod): - """ Instantiate a MaxProdConstraint - - :params maxprod: Value to be considered as the maximum product - :type maxprod: number - - """ - self._maxprod = maxprod - - def preProcess(self, variables, domains, constraints, vconstraints): - """ """ - Constraint.preProcess(self, variables, domains, constraints, vconstraints) - maxprod = self._maxprod - for variable in variables: - domain = domains[variable] - for value in domain[:]: - if value > maxprod: - domain.remove(value) - - def __call__(self, variables, domains, assignments, forwardcheck=False): - maxprod = self._maxprod - prod = 1 - for variable in variables: - if variable in assignments: - prod *= assignments[variable] - if isinstance(prod, float): - prod = round(prod, 10) - if prod > maxprod: - return False - if forwardcheck: - for variable in variables: - if variable not in assignments: - domain = domains[variable] - for value in domain[:]: - if prod * value 
> maxprod: - domain.hideValue(value) - if not domain: - return False - return True - def cuda_error_check(error): - """ Checking the status of CUDA calls using the NVIDIA cuda-python backend """ + """Checking the status of CUDA calls using the NVIDIA cuda-python backend.""" if isinstance(error, cuda.CUresult): if error != cuda.CUresult.CUDA_SUCCESS: _, name = cuda.cuGetErrorName(error) @@ -957,4 +1165,4 @@ def cuda_error_check(error): elif isinstance(error, nvrtc.nvrtcResult): if error != nvrtc.nvrtcResult.NVRTC_SUCCESS: _, desc = nvrtc.nvrtcGetErrorString(error) - raise RuntimeError(f"NVRTC error: {desc.decode()}") \ No newline at end of file + raise RuntimeError(f"NVRTC error: {desc.decode()}") diff --git a/noxfile.py b/noxfile.py new file mode 100644 index 000000000..9b4bc0473 --- /dev/null +++ b/noxfile.py @@ -0,0 +1,181 @@ +"""Configuration file for the Nox test runner. + +This instantiates the specified sessions in isolated environments and runs the tests. +This allows for locally mirroring the testing occurring with GitHub Actions. +Be careful that the general setup of tests is left to pyproject.toml.
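The noxfile below selects its default virtualenv backend from a one-line `noxenv.txt` file. The validation it performs can be exercised standalone; this sketch uses a temporary directory and a stand-in file instead of the real `noxenv.txt`:

```python
import tempfile
from pathlib import Path

# allowed backend names, per the Nox usage documentation
env_values = ("none", "virtualenv", "conda", "mamba", "venv")

with tempfile.TemporaryDirectory() as tmp:
    # stand-in for the repository's noxenv.txt
    environment_file_path = Path(tmp) / "noxenv.txt"
    environment_file_path.write_text("conda\n")

    if environment_file_path.exists():
        # strip whitespace/newlines and validate against the allowed backends
        environment = environment_file_path.read_text().strip()
        assert environment in env_values, (
            f"'noxenv.txt' contains {environment}, must be one of {', '.join(env_values)}"
        )
        print(f"default venv backend: {environment}")  # → default venv backend: conda
```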
+""" + + +import platform +from pathlib import Path + +import nox +from nox_poetry import Session, session + +# set the test parameters +python_versions_to_test = ["3.8", "3.9", "3.10", "3.11"] +nox.options.stop_on_first_error = True +nox.options.error_on_missing_interpreters = True + +# set the default environment from the 'noxenv' file, if it exists +environment_file_path = Path("./noxenv.txt") +if environment_file_path.exists(): + env_values = ('none', 'virtualenv', 'conda', 'mamba', 'venv') # from https://nox.thea.codes/en/stable/usage.html#changing-the-sessions-default-backend + environment = environment_file_path.read_text() + assert isinstance(environment, str), "File 'noxenv.txt' does not contain text" + environment = environment.strip() + assert environment in env_values, f"File 'noxenv.txt' contains {environment}, must be one of {','.join(env_values)}" + nox.options.default_venv_backend = environment + + +# @nox.session +# def lint(session: nox.Session) -> None: +# """Ensure the code is formatted as expected.""" +# session.install("ruff") +# session.run("ruff", "--format=github", "--config=pyproject.toml", ".") + + +# @session # uncomment this line to only run on the current python interpreter +@session(python=python_versions_to_test) # missing versions can be installed with `pyenv install ...` +# do not forget check / set the versions with `pyenv global`, or `pyenv local` in case of virtual environment +def tests(session: Session) -> None: + """Run the tests for the specified Python versions.""" + # check if optional dependencies have been disabled by user arguments (e.g. 
`nox -- skip-gpu`, `nox -- skip-cuda`) + install_cuda = True + install_hip = True + install_opencl = True + install_additional_tests = False + small_disk = False + if session.posargs: + for arg in session.posargs: + if arg.lower() == "skip-gpu": + install_cuda = False + install_hip = False + install_opencl = False + break + elif arg.lower() == "skip-cuda": + install_cuda = False + elif arg.lower() == "skip-hip": + install_hip = False + elif arg.lower() == "skip-opencl": + install_opencl = False + elif arg.lower() == "additional-tests": + install_additional_tests = True + elif arg.lower() == "small-disk": + small_disk = True + else: + raise ValueError(f"Unrecognized argument {arg}") + + # check if there are optional dependencies that cannot be installed + if install_hip: + if platform.system().lower() != 'linux': + session.warn("HIP is only available on Linux, disabling dependency and tests") + install_hip = False + full_install = install_cuda and install_hip and install_opencl and install_additional_tests + + # if the user has a small disk, remove the other environment caches before each session is run + if small_disk: + try: + session_folder = session.name.replace('.', '*').strip() + folders_to_delete: str = session.run( + "find", "./.nox", "-mindepth", "1", "-maxdepth", "1", "-type", "d", "-not", "-name", session_folder, + silent=True, external=True) + folders_to_delete: list[str] = folders_to_delete.split('\n') + for folder_to_delete in folders_to_delete: + if len(folder_to_delete) > 0: + session.warn(f"Removing environment cache {folder_to_delete} because of 'small-disk' argument") + session.run("rm", "-rf", folder_to_delete, external=True) + except Exception as error: + session.warn("Could not delete Nox caching directories, reason:") + session.warn(error) + + # remove temporary files left over from the previous session + session.run("rm", "-f", "temp_*.c", external=True) + + # set extra arguments based on optional dependencies + extras_args = [] + if
install_cuda: + extras_args.extend(["-E", "cuda"]) + if install_hip: + extras_args.extend(["-E", "hip"]) + if install_opencl: + extras_args.extend(["-E", "opencl"]) + + # separately install optional dependencies with weird dependencies / build process + install_warning = """Installation failed, this likely means that the required hardware or drivers are missing. + Run with `-- skip-gpu` or one of the more specific options (e.g. `-- skip-cuda`) to avoid this.""" + if install_cuda: + # if we need to install the CUDA extras, first install pycuda separately. + # since version 2022.2 it has `oldest-supported-numpy` as a build dependency which doesn't work with Poetry + try: + session.install("pycuda") # Attention: if changed, check `pycuda` in pyproject.toml as well + except Exception as error: + print(error) + session.warn(install_warning) + if install_opencl and (session.python == "3.7" or session.python == "3.8"): + # if we need to install the OpenCL extras, first install pyopencl separately. + # it has `oldest-supported-numpy` as a build dependency which doesn't work with Poetry, but only for Python<3.9 + try: + session.install("pyopencl") # Attention: if changed, check `pyopencl` in pyproject.toml as well + except Exception as error: + print(error) + session.warn(install_warning) + + # finally, install the dependencies, optional dependencies and the package itself + try: + session.run_always("poetry", "install", "--with", "test", *extras_args, external=True) + except Exception as error: + session.warn(install_warning) + raise error + + # if applicable, install the dependencies for additional tests + if install_additional_tests and install_cuda: + install_additional_warning = """ + Installation failed, this likely means that the required hardware or drivers are missing.
+ Run without `-- additional-tests` to avoid this.""" + import re + try: + session.install("cuda-python") + except Exception as error: + print(error) + session.warn(install_additional_warning) + try: + # use NVCC to get the CUDA version + nvcc_output: str = session.run("nvcc", "--version", silent=True) + nvcc_output = "".join(nvcc_output.splitlines()) # convert to single string for easier REGEX + cuda_version = re.match(r"^.*release ([0-9]+.[0-9]+).*$", nvcc_output, flags=re.IGNORECASE).group(1).strip() + session.warn(f"Detected CUDA version: {cuda_version}") + try: + try: + # based on the CUDA version, try installing the exact prebuilt cupy version + cuda_cupy_version = f"cupy-cuda{''.join(cuda_version.split('.'))}" + session.install(cuda_cupy_version) + except Exception: + # if the exact prebuilt is not available, try the more general prebuilt + cuda_cupy_version_x = f"cupy-cuda{cuda_version.split('.')[0]}x" + session.warn(f"CuPy exact prebuilt not available for {cuda_version}, trying {cuda_cupy_version_x}") + session.install(cuda_cupy_version_x) + except Exception: + # if no compatible prebuilt wheel is found, try building CuPy ourselves + session.warn(f"No prebuilt CuPy found for CUDA {cuda_version}, building from source...") + session.install("cupy") + except Exception as error: + print(error) + session.warn(install_additional_warning) + + # for the last Python version session if all optional dependencies are enabled: + if session.python == python_versions_to_test[-1] and full_install: + # run pytest on the package to generate the correct coverage report + session.run("pytest") + else: + # for the other Python version sessions: + # run pytest without coverage reporting + session.run("pytest", "--no-cov") + + # warn if no coverage report + if not full_install: + session.warn(""" + Tests ran successfully, but only a subset. + Coverage file not generated. + Run with 'additional-tests' and without 'skip-gpu', 'skip-cuda' etc. to avoid this. 
+ """) diff --git a/poetry.lock b/poetry.lock new file mode 100644 index 000000000..9e517cb82 --- /dev/null +++ b/poetry.lock @@ -0,0 +1,3256 @@ +# This file is automatically @generated by Poetry 1.6.1 and should not be changed by hand. + +[[package]] +name = "alabaster" +version = "0.7.13" +description = "A configurable sidebar-enabled Sphinx theme" +optional = false +python-versions = ">=3.6" +files = [ + {file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"}, + {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"}, +] + +[[package]] +name = "anyio" +version = "4.0.0" +description = "High level compatibility layer for multiple asynchronous event loop implementations" +optional = true +python-versions = ">=3.8" +files = [ + {file = "anyio-4.0.0-py3-none-any.whl", hash = "sha256:cfdb2b588b9fc25ede96d8db56ed50848b0b649dca3dd1df0b11f683bb9e0b5f"}, + {file = "anyio-4.0.0.tar.gz", hash = "sha256:f7ed51751b2c2add651e5747c891b47e26d2a21be5d32d9311dfe9692f3e5d7a"}, +] + +[package.dependencies] +exceptiongroup = {version = ">=1.0.2", markers = "python_version < \"3.11\""} +idna = ">=2.8" +sniffio = ">=1.1" + +[package.extras] +doc = ["Sphinx (>=7)", "packaging", "sphinx-autodoc-typehints (>=1.2.0)"] +test = ["anyio[trio]", "coverage[toml] (>=7)", "hypothesis (>=4.0)", "psutil (>=5.9)", "pytest (>=7.0)", "pytest-mock (>=3.6.1)", "trustme", "uvloop (>=0.17)"] +trio = ["trio (>=0.22)"] + +[[package]] +name = "appdirs" +version = "1.4.4" +description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"." 
+optional = true +python-versions = "*" +files = [ + {file = "appdirs-1.4.4-py2.py3-none-any.whl", hash = "sha256:a841dacd6b99318a741b166adb07e19ee71a274450e68237b4650ca1055ab128"}, + {file = "appdirs-1.4.4.tar.gz", hash = "sha256:7d5d0167b2b1ba821647616af46a749d1c653740dd0d2415100fe26e27afdf41"}, +] + +[[package]] +name = "appnope" +version = "0.1.3" +description = "Disable App Nap on macOS >= 10.9" +optional = false +python-versions = "*" +files = [ + {file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"}, + {file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"}, +] + +[[package]] +name = "argcomplete" +version = "3.1.2" +description = "Bash tab completion for argparse" +optional = false +python-versions = ">=3.6" +files = [ + {file = "argcomplete-3.1.2-py3-none-any.whl", hash = "sha256:d97c036d12a752d1079f190bc1521c545b941fda89ad85d15afa909b4d1b9a99"}, + {file = "argcomplete-3.1.2.tar.gz", hash = "sha256:d5d1e5efd41435260b8f85673b74ea2e883affcbec9f4230c582689e8e78251b"}, +] + +[package.extras] +test = ["coverage", "mypy", "pexpect", "ruff", "wheel"] + +[[package]] +name = "argon2-cffi" +version = "23.1.0" +description = "Argon2 for Python" +optional = true +python-versions = ">=3.7" +files = [ + {file = "argon2_cffi-23.1.0-py3-none-any.whl", hash = "sha256:c670642b78ba29641818ab2e68bd4e6a78ba53b7eff7b4c3815ae16abf91c7ea"}, + {file = "argon2_cffi-23.1.0.tar.gz", hash = "sha256:879c3e79a2729ce768ebb7d36d4609e3a78a4ca2ec3a9f12286ca057e3d0db08"}, +] + +[package.dependencies] +argon2-cffi-bindings = "*" + +[package.extras] +dev = ["argon2-cffi[tests,typing]", "tox (>4)"] +docs = ["furo", "myst-parser", "sphinx", "sphinx-copybutton", "sphinx-notfound-page"] +tests = ["hypothesis", "pytest"] +typing = ["mypy"] + +[[package]] +name = "argon2-cffi-bindings" +version = "21.2.0" +description = "Low-level CFFI bindings for Argon2" +optional = 
true +python-versions = ">=3.6" +files = [ + {file = "argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3"}, + {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367"}, + {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d"}, + {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae"}, + {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c"}, + {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86"}, + {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f"}, + {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e"}, + {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082"}, + {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f"}, + {file = "argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93"}, + {file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3e385d1c39c520c08b53d63300c3ecc28622f076f4c2b0e6d7e796e9f6502194"}, 
+ {file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3e3cc67fdb7d82c4718f19b4e7a87123caf8a93fde7e23cf66ac0337d3cb3f"}, + {file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a22ad9800121b71099d0fb0a65323810a15f2e292f2ba450810a7316e128ee5"}, + {file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9f8b450ed0547e3d473fdc8612083fd08dd2120d6ac8f73828df9b7d45bb351"}, + {file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93f9bf70084f97245ba10ee36575f0c3f1e7d7724d67d8e5b08e61787c320ed7"}, + {file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3b9ef65804859d335dc6b31582cad2c5166f0c3e7975f324d9ffaa34ee7e6583"}, + {file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4966ef5848d820776f5f562a7d45fdd70c2f330c961d0d745b784034bd9f48d"}, + {file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20ef543a89dee4db46a1a6e206cd015360e5a75822f76df533845c3cbaf72670"}, + {file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed2937d286e2ad0cc79a7087d3c272832865f779430e0cc2b4f3718d3159b0cb"}, + {file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5e00316dabdaea0b2dd82d141cc66889ced0cdcbfa599e8b471cf22c620c329a"}, +] + +[package.dependencies] +cffi = ">=1.0.1" + +[package.extras] +dev = ["cogapp", "pre-commit", "pytest", "wheel"] +tests = ["pytest"] + +[[package]] +name = "arrow" +version = "1.2.3" +description = "Better dates & times for Python" +optional = true +python-versions = ">=3.6" +files = [ + {file = 
"arrow-1.2.3-py3-none-any.whl", hash = "sha256:5a49ab92e3b7b71d96cd6bfcc4df14efefc9dfa96ea19045815914a6ab6b1fe2"}, + {file = "arrow-1.2.3.tar.gz", hash = "sha256:3934b30ca1b9f292376d9db15b19446088d12ec58629bc3f0da28fd55fb633a1"}, +] + +[package.dependencies] +python-dateutil = ">=2.7.0" + +[[package]] +name = "asttokens" +version = "2.4.0" +description = "Annotate AST trees with source code positions" +optional = false +python-versions = "*" +files = [ + {file = "asttokens-2.4.0-py2.py3-none-any.whl", hash = "sha256:cf8fc9e61a86461aa9fb161a14a0841a03c405fa829ac6b202670b3495d2ce69"}, + {file = "asttokens-2.4.0.tar.gz", hash = "sha256:2e0171b991b2c959acc6c49318049236844a5da1d65ba2672c4880c1c894834e"}, +] + +[package.dependencies] +six = ">=1.12.0" + +[package.extras] +test = ["astroid", "pytest"] + +[[package]] +name = "async-lru" +version = "2.0.4" +description = "Simple LRU cache for asyncio" +optional = true +python-versions = ">=3.8" +files = [ + {file = "async-lru-2.0.4.tar.gz", hash = "sha256:b8a59a5df60805ff63220b2a0c5b5393da5521b113cd5465a44eb037d81a5627"}, + {file = "async_lru-2.0.4-py3-none-any.whl", hash = "sha256:ff02944ce3c288c5be660c42dbcca0742b32c3b279d6dceda655190240b99224"}, +] + +[package.dependencies] +typing-extensions = {version = ">=4.0.0", markers = "python_version < \"3.11\""} + +[[package]] +name = "attrs" +version = "23.1.0" +description = "Classes Without Boilerplate" +optional = false +python-versions = ">=3.7" +files = [ + {file = "attrs-23.1.0-py3-none-any.whl", hash = "sha256:1f28b4522cdc2fb4256ac1a020c78acf9cba2c6b461ccd2c126f3aa8e8335d04"}, + {file = "attrs-23.1.0.tar.gz", hash = "sha256:6279836d581513a26f1bf235f9acd333bc9115683f14f7e8fae46c98fc50e015"}, +] + +[package.extras] +cov = ["attrs[tests]", "coverage[toml] (>=5.3)"] +dev = ["attrs[docs,tests]", "pre-commit"] +docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope-interface"] +tests = ["attrs[tests-no-zope]", 
"zope-interface"] +tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] + +[[package]] +name = "babel" +version = "2.12.1" +description = "Internationalization utilities" +optional = false +python-versions = ">=3.7" +files = [ + {file = "Babel-2.12.1-py3-none-any.whl", hash = "sha256:b4246fb7677d3b98f501a39d43396d3cafdc8eadb045f4a31be01863f655c610"}, + {file = "Babel-2.12.1.tar.gz", hash = "sha256:cc2d99999cd01d44420ae725a21c9e3711b3aadc7976d6147f622d8581963455"}, +] + +[package.dependencies] +pytz = {version = ">=2015.7", markers = "python_version < \"3.9\""} + +[[package]] +name = "backcall" +version = "0.2.0" +description = "Specifications for callback functions passed in to an API" +optional = false +python-versions = "*" +files = [ + {file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"}, + {file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"}, +] + +[[package]] +name = "beautifulsoup4" +version = "4.12.2" +description = "Screen-scraping library" +optional = false +python-versions = ">=3.6.0" +files = [ + {file = "beautifulsoup4-4.12.2-py3-none-any.whl", hash = "sha256:bd2520ca0d9d7d12694a53d44ac482d181b4ec1888909b035a3dbf40d0f57d4a"}, + {file = "beautifulsoup4-4.12.2.tar.gz", hash = "sha256:492bbc69dca35d12daac71c4db1bfff0c876c00ef4a2ffacce226d4638eb72da"}, +] + +[package.dependencies] +soupsieve = ">1.2" + +[package.extras] +html5lib = ["html5lib"] +lxml = ["lxml"] + +[[package]] +name = "bleach" +version = "6.0.0" +description = "An easy safelist-based HTML-sanitizing tool." 
+optional = false +python-versions = ">=3.7" +files = [ + {file = "bleach-6.0.0-py3-none-any.whl", hash = "sha256:33c16e3353dbd13028ab4799a0f89a83f113405c766e9c122df8a06f5b85b3f4"}, + {file = "bleach-6.0.0.tar.gz", hash = "sha256:1a1a85c1595e07d8db14c5f09f09e6433502c51c595970edc090551f0db99414"}, +] + +[package.dependencies] +six = ">=1.9.0" +webencodings = "*" + +[package.extras] +css = ["tinycss2 (>=1.1.0,<1.2)"] + +[[package]] +name = "certifi" +version = "2023.7.22" +description = "Python package for providing Mozilla's CA Bundle." +optional = false +python-versions = ">=3.6" +files = [ + {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"}, + {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"}, +] + +[[package]] +name = "cffi" +version = "1.15.1" +description = "Foreign Function Interface for Python calling C code." +optional = false +python-versions = "*" +files = [ + {file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"}, + {file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"}, + {file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"}, + {file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"}, + {file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"}, + {file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"}, + {file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"}, + {file 
= "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"}, + {file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"}, + {file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"}, + {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"}, + {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"}, + {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"}, + {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"}, + {file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"}, + {file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"}, + {file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"}, + {file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"}, + {file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"}, + {file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"}, + {file = 
"cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"}, + {file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"}, + {file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"}, + {file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"}, + {file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"}, + {file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"}, + {file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"}, + {file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"}, + {file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"}, + {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"}, + {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"}, + {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"}, + {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = 
"sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"}, + {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"}, + {file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"}, + {file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"}, + {file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"}, + {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"}, + {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"}, + {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"}, + {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"}, + {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"}, + {file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"}, + {file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"}, + {file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"}, + {file = 
"cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"}, + {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"}, + {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"}, + {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"}, + {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"}, + {file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"}, + {file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"}, + {file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"}, + {file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"}, + {file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"}, + {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"}, + {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"}, + {file = 
"cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"}, + {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"}, + {file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"}, + {file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"}, + {file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"}, + {file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"}, + {file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"}, +] + +[package.dependencies] +pycparser = "*" + +[[package]] +name = "charset-normalizer" +version = "3.2.0" +description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." 
+optional = false +python-versions = ">=3.7.0" +files = [ + {file = "charset-normalizer-3.2.0.tar.gz", hash = "sha256:3bb3d25a8e6c0aedd251753a79ae98a093c7e7b471faa3aa9a93a81431987ace"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:0b87549028f680ca955556e3bd57013ab47474c3124dc069faa0b6545b6c9710"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7c70087bfee18a42b4040bb9ec1ca15a08242cf5867c58726530bdf3945672ed"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a103b3a7069b62f5d4890ae1b8f0597618f628b286b03d4bc9195230b154bfa9"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94aea8eff76ee6d1cdacb07dd2123a68283cb5569e0250feab1240058f53b623"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:db901e2ac34c931d73054d9797383d0f8009991e723dab15109740a63e7f902a"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b0dac0ff919ba34d4df1b6131f59ce95b08b9065233446be7e459f95554c0dc8"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:193cbc708ea3aca45e7221ae58f0fd63f933753a9bfb498a3b474878f12caaad"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:09393e1b2a9461950b1c9a45d5fd251dc7c6f228acab64da1c9c0165d9c7765c"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:baacc6aee0b2ef6f3d308e197b5d7a81c0e70b06beae1f1fcacffdbd124fe0e3"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:bf420121d4c8dce6b889f0e8e4ec0ca34b7f40186203f06a946fa0276ba54029"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = 
"sha256:c04a46716adde8d927adb9457bbe39cf473e1e2c2f5d0a16ceb837e5d841ad4f"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:aaf63899c94de41fe3cf934601b0f7ccb6b428c6e4eeb80da72c58eab077b19a"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:d62e51710986674142526ab9f78663ca2b0726066ae26b78b22e0f5e571238dd"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-win32.whl", hash = "sha256:04e57ab9fbf9607b77f7d057974694b4f6b142da9ed4a199859d9d4d5c63fe96"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-win_amd64.whl", hash = "sha256:48021783bdf96e3d6de03a6e39a1171ed5bd7e8bb93fc84cc649d11490f87cea"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:4957669ef390f0e6719db3613ab3a7631e68424604a7b448f079bee145da6e09"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:46fb8c61d794b78ec7134a715a3e564aafc8f6b5e338417cb19fe9f57a5a9bf2"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f779d3ad205f108d14e99bb3859aa7dd8e9c68874617c72354d7ecaec2a054ac"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f25c229a6ba38a35ae6e25ca1264621cc25d4d38dca2942a7fce0b67a4efe918"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2efb1bd13885392adfda4614c33d3b68dee4921fd0ac1d3988f8cbb7d589e72a"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1f30b48dd7fa1474554b0b0f3fdfdd4c13b5c737a3c6284d3cdc424ec0ffff3a"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:246de67b99b6851627d945db38147d1b209a899311b1305dd84916f2b88526c6"}, + {file = 
"charset_normalizer-3.2.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9bd9b3b31adcb054116447ea22caa61a285d92e94d710aa5ec97992ff5eb7cf3"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:8c2f5e83493748286002f9369f3e6607c565a6a90425a3a1fef5ae32a36d749d"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:3170c9399da12c9dc66366e9d14da8bf7147e1e9d9ea566067bbce7bb74bd9c2"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:7a4826ad2bd6b07ca615c74ab91f32f6c96d08f6fcc3902ceeedaec8cdc3bcd6"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:3b1613dd5aee995ec6d4c69f00378bbd07614702a315a2cf6c1d21461fe17c23"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:9e608aafdb55eb9f255034709e20d5a83b6d60c054df0802fa9c9883d0a937aa"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-win32.whl", hash = "sha256:f2a1d0fd4242bd8643ce6f98927cf9c04540af6efa92323e9d3124f57727bfc1"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:681eb3d7e02e3c3655d1b16059fbfb605ac464c834a0c629048a30fad2b27489"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c57921cda3a80d0f2b8aec7e25c8aa14479ea92b5b51b6876d975d925a2ea346"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41b25eaa7d15909cf3ac4c96088c1f266a9a93ec44f87f1d13d4a0e86c81b982"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f058f6963fd82eb143c692cecdc89e075fa0828db2e5b291070485390b2f1c9c"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a7647ebdfb9682b7bb97e2a5e7cb6ae735b1c25008a70b906aecca294ee96cf4"}, + {file = 
"charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eef9df1eefada2c09a5e7a40991b9fc6ac6ef20b1372abd48d2794a316dc0449"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e03b8895a6990c9ab2cdcd0f2fe44088ca1c65ae592b8f795c3294af00a461c3"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:ee4006268ed33370957f55bf2e6f4d263eaf4dc3cfc473d1d90baff6ed36ce4a"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c4983bf937209c57240cff65906b18bb35e64ae872da6a0db937d7b4af845dd7"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:3bb7fda7260735efe66d5107fb7e6af6a7c04c7fce9b2514e04b7a74b06bf5dd"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:72814c01533f51d68702802d74f77ea026b5ec52793c791e2da806a3844a46c3"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:70c610f6cbe4b9fce272c407dd9d07e33e6bf7b4aa1b7ffb6f6ded8e634e3592"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-win32.whl", hash = "sha256:a401b4598e5d3f4a9a811f3daf42ee2291790c7f9d74b18d75d6e21dda98a1a1"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:c0b21078a4b56965e2b12f247467b234734491897e99c1d51cee628da9786959"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:95eb302ff792e12aba9a8b8f8474ab229a83c103d74a750ec0bd1c1eea32e669"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1a100c6d595a7f316f1b6f01d20815d916e75ff98c27a01ae817439ea7726329"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6339d047dab2780cc6220f46306628e04d9750f02f983ddb37439ca47ced7149"}, + {file = 
"charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e4b749b9cc6ee664a3300bb3a273c1ca8068c46be705b6c31cf5d276f8628a94"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a38856a971c602f98472050165cea2cdc97709240373041b69030be15047691f"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f87f746ee241d30d6ed93969de31e5ffd09a2961a051e60ae6bddde9ec3583aa"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:89f1b185a01fe560bc8ae5f619e924407efca2191b56ce749ec84982fc59a32a"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e1c8a2f4c69e08e89632defbfabec2feb8a8d99edc9f89ce33c4b9e36ab63037"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2f4ac36d8e2b4cc1aa71df3dd84ff8efbe3bfb97ac41242fbcfc053c67434f46"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a386ebe437176aab38c041de1260cd3ea459c6ce5263594399880bbc398225b2"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:ccd16eb18a849fd8dcb23e23380e2f0a354e8daa0c984b8a732d9cfaba3a776d"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:e6a5bf2cba5ae1bb80b154ed68a3cfa2fa00fde979a7f50d6598d3e17d9ac20c"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:45de3f87179c1823e6d9e32156fb14c1927fcc9aba21433f088fdfb555b77c10"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-win32.whl", hash = "sha256:1000fba1057b92a65daec275aec30586c3de2401ccdcd41f8a5c1e2c87078706"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-win_amd64.whl", hash = "sha256:8b2c760cfc7042b27ebdb4a43a4453bd829a5742503599144d54a032c5dc7e9e"}, + {file = 
"charset_normalizer-3.2.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:855eafa5d5a2034b4621c74925d89c5efef61418570e5ef9b37717d9c796419c"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:203f0c8871d5a7987be20c72442488a0b8cfd0f43b7973771640fc593f56321f"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e857a2232ba53ae940d3456f7533ce6ca98b81917d47adc3c7fd55dad8fab858"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e86d77b090dbddbe78867a0275cb4df08ea195e660f1f7f13435a4649e954e5"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c4fb39a81950ec280984b3a44f5bd12819953dc5fa3a7e6fa7a80db5ee853952"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2dee8e57f052ef5353cf608e0b4c871aee320dd1b87d351c28764fc0ca55f9f4"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8700f06d0ce6f128de3ccdbc1acaea1ee264d2caa9ca05daaf492fde7c2a7200"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1920d4ff15ce893210c1f0c0e9d19bfbecb7983c76b33f046c13a8ffbd570252"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:c1c76a1743432b4b60ab3358c937a3fe1341c828ae6194108a94c69028247f22"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f7560358a6811e52e9c4d142d497f1a6e10103d3a6881f18d04dbce3729c0e2c"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:c8063cf17b19661471ecbdb3df1c84f24ad2e389e326ccaf89e3fb2484d8dd7e"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_s390x.whl", hash = 
"sha256:cd6dbe0238f7743d0efe563ab46294f54f9bc8f4b9bcf57c3c666cc5bc9d1299"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:1249cbbf3d3b04902ff081ffbb33ce3377fa6e4c7356f759f3cd076cc138d020"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-win32.whl", hash = "sha256:6c409c0deba34f147f77efaa67b8e4bb83d2f11c8806405f76397ae5b8c0d1c9"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:7095f6fbfaa55defb6b733cfeb14efaae7a29f0b59d8cf213be4e7ca0b857b80"}, + {file = "charset_normalizer-3.2.0-py3-none-any.whl", hash = "sha256:8e098148dd37b4ce3baca71fb394c81dc5d9c7728c95df695d2dca218edf40e6"}, +] + +[[package]] +name = "colorama" +version = "0.4.6" +description = "Cross-platform colored terminal text." +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7" +files = [ + {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"}, + {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"}, +] + +[[package]] +name = "colorlog" +version = "6.7.0" +description = "Add colours to the output of Python's logging module." +optional = false +python-versions = ">=3.6" +files = [ + {file = "colorlog-6.7.0-py2.py3-none-any.whl", hash = "sha256:0d33ca236784a1ba3ff9c532d4964126d8a2c44f1f0cb1d2b0728196f512f662"}, + {file = "colorlog-6.7.0.tar.gz", hash = "sha256:bd94bd21c1e13fac7bd3153f4bc3a7dc0eb0974b8bc2fdf1a989e474f6e582e5"}, +] + +[package.dependencies] +colorama = {version = "*", markers = "sys_platform == \"win32\""} + +[package.extras] +development = ["black", "flake8", "mypy", "pytest", "types-colorama"] + +[[package]] +name = "comm" +version = "0.1.4" +description = "Jupyter Python Comm implementation, for usage in ipykernel, xeus-python etc." 
+optional = true +python-versions = ">=3.6" +files = [ + {file = "comm-0.1.4-py3-none-any.whl", hash = "sha256:6d52794cba11b36ed9860999cd10fd02d6b2eac177068fdd585e1e2f8a96e67a"}, + {file = "comm-0.1.4.tar.gz", hash = "sha256:354e40a59c9dd6db50c5cc6b4acc887d82e9603787f83b68c01a80a923984d15"}, +] + +[package.dependencies] +traitlets = ">=4" + +[package.extras] +lint = ["black (>=22.6.0)", "mdformat (>0.7)", "mdformat-gfm (>=0.3.5)", "ruff (>=0.0.156)"] +test = ["pytest"] +typing = ["mypy (>=0.990)"] + +[[package]] +name = "coverage" +version = "7.3.1" +description = "Code coverage measurement for Python" +optional = false +python-versions = ">=3.8" +files = [ + {file = "coverage-7.3.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cd0f7429ecfd1ff597389907045ff209c8fdb5b013d38cfa7c60728cb484b6e3"}, + {file = "coverage-7.3.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:966f10df9b2b2115da87f50f6a248e313c72a668248be1b9060ce935c871f276"}, + {file = "coverage-7.3.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0575c37e207bb9b98b6cf72fdaaa18ac909fb3d153083400c2d48e2e6d28bd8e"}, + {file = "coverage-7.3.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:245c5a99254e83875c7fed8b8b2536f040997a9b76ac4c1da5bff398c06e860f"}, + {file = "coverage-7.3.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4c96dd7798d83b960afc6c1feb9e5af537fc4908852ef025600374ff1a017392"}, + {file = "coverage-7.3.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:de30c1aa80f30af0f6b2058a91505ea6e36d6535d437520067f525f7df123887"}, + {file = "coverage-7.3.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:50dd1e2dd13dbbd856ffef69196781edff26c800a74f070d3b3e3389cab2600d"}, + {file = "coverage-7.3.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:b9c0c19f70d30219113b18fe07e372b244fb2a773d4afde29d5a2f7930765136"}, + {file = 
"coverage-7.3.1-cp310-cp310-win32.whl", hash = "sha256:770f143980cc16eb601ccfd571846e89a5fe4c03b4193f2e485268f224ab602f"}, + {file = "coverage-7.3.1-cp310-cp310-win_amd64.whl", hash = "sha256:cdd088c00c39a27cfa5329349cc763a48761fdc785879220d54eb785c8a38520"}, + {file = "coverage-7.3.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:74bb470399dc1989b535cb41f5ca7ab2af561e40def22d7e188e0a445e7639e3"}, + {file = "coverage-7.3.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:025ded371f1ca280c035d91b43252adbb04d2aea4c7105252d3cbc227f03b375"}, + {file = "coverage-7.3.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a6191b3a6ad3e09b6cfd75b45c6aeeffe7e3b0ad46b268345d159b8df8d835f9"}, + {file = "coverage-7.3.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7eb0b188f30e41ddd659a529e385470aa6782f3b412f860ce22b2491c89b8593"}, + {file = "coverage-7.3.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:75c8f0df9dfd8ff745bccff75867d63ef336e57cc22b2908ee725cc552689ec8"}, + {file = "coverage-7.3.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:7eb3cd48d54b9bd0e73026dedce44773214064be93611deab0b6a43158c3d5a0"}, + {file = "coverage-7.3.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:ac3c5b7e75acac31e490b7851595212ed951889918d398b7afa12736c85e13ce"}, + {file = "coverage-7.3.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5b4ee7080878077af0afa7238df1b967f00dc10763f6e1b66f5cced4abebb0a3"}, + {file = "coverage-7.3.1-cp311-cp311-win32.whl", hash = "sha256:229c0dd2ccf956bf5aeede7e3131ca48b65beacde2029f0361b54bf93d36f45a"}, + {file = "coverage-7.3.1-cp311-cp311-win_amd64.whl", hash = "sha256:c6f55d38818ca9596dc9019eae19a47410d5322408140d9a0076001a3dcb938c"}, + {file = "coverage-7.3.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:5289490dd1c3bb86de4730a92261ae66ea8d44b79ed3cc26464f4c2cde581fbc"}, + {file 
= "coverage-7.3.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ca833941ec701fda15414be400c3259479bfde7ae6d806b69e63b3dc423b1832"}, + {file = "coverage-7.3.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cd694e19c031733e446c8024dedd12a00cda87e1c10bd7b8539a87963685e969"}, + {file = "coverage-7.3.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aab8e9464c00da5cb9c536150b7fbcd8850d376d1151741dd0d16dfe1ba4fd26"}, + {file = "coverage-7.3.1-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:87d38444efffd5b056fcc026c1e8d862191881143c3aa80bb11fcf9dca9ae204"}, + {file = "coverage-7.3.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:8a07b692129b8a14ad7a37941a3029c291254feb7a4237f245cfae2de78de037"}, + {file = "coverage-7.3.1-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:2829c65c8faaf55b868ed7af3c7477b76b1c6ebeee99a28f59a2cb5907a45760"}, + {file = "coverage-7.3.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:1f111a7d85658ea52ffad7084088277135ec5f368457275fc57f11cebb15607f"}, + {file = "coverage-7.3.1-cp312-cp312-win32.whl", hash = "sha256:c397c70cd20f6df7d2a52283857af622d5f23300c4ca8e5bd8c7a543825baa5a"}, + {file = "coverage-7.3.1-cp312-cp312-win_amd64.whl", hash = "sha256:5ae4c6da8b3d123500f9525b50bf0168023313963e0e2e814badf9000dd6ef92"}, + {file = "coverage-7.3.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ca70466ca3a17460e8fc9cea7123c8cbef5ada4be3140a1ef8f7b63f2f37108f"}, + {file = "coverage-7.3.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f2781fd3cabc28278dc982a352f50c81c09a1a500cc2086dc4249853ea96b981"}, + {file = "coverage-7.3.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6407424621f40205bbe6325686417e5e552f6b2dba3535dd1f90afc88a61d465"}, + {file = 
"coverage-7.3.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:04312b036580ec505f2b77cbbdfb15137d5efdfade09156961f5277149f5e344"}, + {file = "coverage-7.3.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac9ad38204887349853d7c313f53a7b1c210ce138c73859e925bc4e5d8fc18e7"}, + {file = "coverage-7.3.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:53669b79f3d599da95a0afbef039ac0fadbb236532feb042c534fbb81b1a4e40"}, + {file = "coverage-7.3.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:614f1f98b84eb256e4f35e726bfe5ca82349f8dfa576faabf8a49ca09e630086"}, + {file = "coverage-7.3.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:f1a317fdf5c122ad642db8a97964733ab7c3cf6009e1a8ae8821089993f175ff"}, + {file = "coverage-7.3.1-cp38-cp38-win32.whl", hash = "sha256:defbbb51121189722420a208957e26e49809feafca6afeef325df66c39c4fdb3"}, + {file = "coverage-7.3.1-cp38-cp38-win_amd64.whl", hash = "sha256:f4f456590eefb6e1b3c9ea6328c1e9fa0f1006e7481179d749b3376fc793478e"}, + {file = "coverage-7.3.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f12d8b11a54f32688b165fd1a788c408f927b0960984b899be7e4c190ae758f1"}, + {file = "coverage-7.3.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f09195dda68d94a53123883de75bb97b0e35f5f6f9f3aa5bf6e496da718f0cb6"}, + {file = "coverage-7.3.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c6601a60318f9c3945be6ea0f2a80571f4299b6801716f8a6e4846892737ebe4"}, + {file = "coverage-7.3.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:07d156269718670d00a3b06db2288b48527fc5f36859425ff7cec07c6b367745"}, + {file = "coverage-7.3.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:636a8ac0b044cfeccae76a36f3b18264edcc810a76a49884b96dd744613ec0b7"}, + {file = 
"coverage-7.3.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:5d991e13ad2ed3aced177f524e4d670f304c8233edad3210e02c465351f785a0"}, + {file = "coverage-7.3.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:586649ada7cf139445da386ab6f8ef00e6172f11a939fc3b2b7e7c9082052fa0"}, + {file = "coverage-7.3.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:4aba512a15a3e1e4fdbfed2f5392ec221434a614cc68100ca99dcad7af29f3f8"}, + {file = "coverage-7.3.1-cp39-cp39-win32.whl", hash = "sha256:6bc6f3f4692d806831c136c5acad5ccedd0262aa44c087c46b7101c77e139140"}, + {file = "coverage-7.3.1-cp39-cp39-win_amd64.whl", hash = "sha256:553d7094cb27db58ea91332e8b5681bac107e7242c23f7629ab1316ee73c4981"}, + {file = "coverage-7.3.1-pp38.pp39.pp310-none-any.whl", hash = "sha256:220eb51f5fb38dfdb7e5d54284ca4d0cd70ddac047d750111a68ab1798945194"}, + {file = "coverage-7.3.1.tar.gz", hash = "sha256:6cb7fe1581deb67b782c153136541e20901aa312ceedaf1467dcb35255787952"}, +] + +[package.dependencies] +tomli = {version = "*", optional = true, markers = "python_full_version <= \"3.11.0a6\" and extra == \"toml\""} + +[package.extras] +toml = ["tomli"] + +[[package]] +name = "cycler" +version = "0.11.0" +description = "Composable style cycles" +optional = true +python-versions = ">=3.6" +files = [ + {file = "cycler-0.11.0-py3-none-any.whl", hash = "sha256:3a27e95f763a428a739d2add979fa7494c912a32c17c4c38c4d5f082cad165a3"}, + {file = "cycler-0.11.0.tar.gz", hash = "sha256:9c87405839a19696e837b3b818fed3f5f69f16f1eec1a1ad77e043dcea9c772f"}, +] + +[[package]] +name = "debugpy" +version = "1.8.0" +description = "An implementation of the Debug Adapter Protocol for Python" +optional = true +python-versions = ">=3.8" +files = [ + {file = "debugpy-1.8.0-cp310-cp310-macosx_11_0_x86_64.whl", hash = "sha256:7fb95ca78f7ac43393cd0e0f2b6deda438ec7c5e47fa5d38553340897d2fbdfb"}, + {file = "debugpy-1.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:ef9ab7df0b9a42ed9c878afd3eaaff471fce3fa73df96022e1f5c9f8f8c87ada"}, + {file = "debugpy-1.8.0-cp310-cp310-win32.whl", hash = "sha256:a8b7a2fd27cd9f3553ac112f356ad4ca93338feadd8910277aff71ab24d8775f"}, + {file = "debugpy-1.8.0-cp310-cp310-win_amd64.whl", hash = "sha256:5d9de202f5d42e62f932507ee8b21e30d49aae7e46d5b1dd5c908db1d7068637"}, + {file = "debugpy-1.8.0-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:ef54404365fae8d45cf450d0544ee40cefbcb9cb85ea7afe89a963c27028261e"}, + {file = "debugpy-1.8.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:60009b132c91951354f54363f8ebdf7457aeb150e84abba5ae251b8e9f29a8a6"}, + {file = "debugpy-1.8.0-cp311-cp311-win32.whl", hash = "sha256:8cd0197141eb9e8a4566794550cfdcdb8b3db0818bdf8c49a8e8f8053e56e38b"}, + {file = "debugpy-1.8.0-cp311-cp311-win_amd64.whl", hash = "sha256:a64093656c4c64dc6a438e11d59369875d200bd5abb8f9b26c1f5f723622e153"}, + {file = "debugpy-1.8.0-cp38-cp38-macosx_11_0_x86_64.whl", hash = "sha256:b05a6b503ed520ad58c8dc682749113d2fd9f41ffd45daec16e558ca884008cd"}, + {file = "debugpy-1.8.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3c6fb41c98ec51dd010d7ed650accfd07a87fe5e93eca9d5f584d0578f28f35f"}, + {file = "debugpy-1.8.0-cp38-cp38-win32.whl", hash = "sha256:46ab6780159eeabb43c1495d9c84cf85d62975e48b6ec21ee10c95767c0590aa"}, + {file = "debugpy-1.8.0-cp38-cp38-win_amd64.whl", hash = "sha256:bdc5ef99d14b9c0fcb35351b4fbfc06ac0ee576aeab6b2511702e5a648a2e595"}, + {file = "debugpy-1.8.0-cp39-cp39-macosx_11_0_x86_64.whl", hash = "sha256:61eab4a4c8b6125d41a34bad4e5fe3d2cc145caecd63c3fe953be4cc53e65bf8"}, + {file = "debugpy-1.8.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:125b9a637e013f9faac0a3d6a82bd17c8b5d2c875fb6b7e2772c5aba6d082332"}, + {file = "debugpy-1.8.0-cp39-cp39-win32.whl", hash = "sha256:57161629133113c97b387382045649a2b985a348f0c9366e22217c87b68b73c6"}, + {file = "debugpy-1.8.0-cp39-cp39-win_amd64.whl", hash 
= "sha256:e3412f9faa9ade82aa64a50b602544efcba848c91384e9f93497a458767e6926"}, + {file = "debugpy-1.8.0-py2.py3-none-any.whl", hash = "sha256:9c9b0ac1ce2a42888199df1a1906e45e6f3c9555497643a85e0bf2406e3ffbc4"}, + {file = "debugpy-1.8.0.zip", hash = "sha256:12af2c55b419521e33d5fb21bd022df0b5eb267c3e178f1d374a63a2a6bdccd0"}, +] + +[[package]] +name = "decorator" +version = "5.1.1" +description = "Decorators for Humans" +optional = false +python-versions = ">=3.5" +files = [ + {file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"}, + {file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"}, +] + +[[package]] +name = "defusedxml" +version = "0.7.1" +description = "XML bomb protection for Python stdlib modules" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" +files = [ + {file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"}, + {file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"}, +] + +[[package]] +name = "distlib" +version = "0.3.7" +description = "Distribution utilities" +optional = false +python-versions = "*" +files = [ + {file = "distlib-0.3.7-py2.py3-none-any.whl", hash = "sha256:2e24928bc811348f0feb63014e97aaae3037f2cf48712d51ae61df7fd6075057"}, + {file = "distlib-0.3.7.tar.gz", hash = "sha256:9dafe54b34a028eafd95039d5e5d4851a13734540f1331060d31c9916e7147a8"}, +] + +[[package]] +name = "docutils" +version = "0.18.1" +description = "Docutils -- Python Documentation Utilities" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" +files = [ + {file = "docutils-0.18.1-py2.py3-none-any.whl", hash = "sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c"}, + {file = "docutils-0.18.1.tar.gz", hash = 
"sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d06"}, +] + +[[package]] +name = "dom-toml" +version = "0.6.1" +description = "Dom's tools for Tom's Obvious, Minimal Language." +optional = false +python-versions = ">=3.6.1" +files = [ + {file = "dom_toml-0.6.1-py3-none-any.whl", hash = "sha256:ebdd69c571268dfa5a56b5085b5311583d8a8d2dc1811349e796160c9f36d501"}, + {file = "dom_toml-0.6.1.tar.gz", hash = "sha256:a0bfc204ae32c72ed36e526dce56108a3b20741ac3c055207206ce3b2f302868"}, +] + +[package.dependencies] +domdf-python-tools = ">=2.8.0" +toml = ">=0.10.2" + +[[package]] +name = "domdf-python-tools" +version = "3.6.1" +description = "Helpful functions for Python 🐍 🛠️" +optional = false +python-versions = ">=3.6" +files = [ + {file = "domdf_python_tools-3.6.1-py3-none-any.whl", hash = "sha256:e18158460850957f18e740eb94ede56f580ddb0cb162ab9d9834ed8bbb1b6431"}, + {file = "domdf_python_tools-3.6.1.tar.gz", hash = "sha256:acc04563d23bce4d437dd08af6b9bea788328c412772a044d8ca428a7ad861be"}, +] + +[package.dependencies] +importlib-metadata = {version = ">=3.6.0", markers = "python_version < \"3.9\""} +natsort = ">=7.0.1" +typing-extensions = ">=3.7.4.1" + +[package.extras] +all = ["pytz (>=2019.1)"] +dates = ["pytz (>=2019.1)"] + +[[package]] +name = "exceptiongroup" +version = "1.1.3" +description = "Backport of PEP 654 (exception groups)" +optional = false +python-versions = ">=3.7" +files = [ + {file = "exceptiongroup-1.1.3-py3-none-any.whl", hash = "sha256:343280667a4585d195ca1cf9cef84a4e178c4b6cf2274caef9859782b567d5e3"}, + {file = "exceptiongroup-1.1.3.tar.gz", hash = "sha256:097acd85d473d75af5bb98e41b61ff7fe35efe6675e4f9370ec6ec5126d160e9"}, +] + +[package.extras] +test = ["pytest (>=6)"] + +[[package]] +name = "executing" +version = "1.2.0" +description = "Get the currently executing AST node of a frame, and other information" +optional = false +python-versions = "*" +files = [ + {file = "executing-1.2.0-py2.py3-none-any.whl", hash = 
"sha256:0314a69e37426e3608aada02473b4161d4caf5a4b244d1d0c48072b8fee7bacc"}, + {file = "executing-1.2.0.tar.gz", hash = "sha256:19da64c18d2d851112f09c287f8d3dbbdf725ab0e569077efb6cdcbd3497c107"}, +] + +[package.extras] +tests = ["asttokens", "littleutils", "pytest", "rich"] + +[[package]] +name = "fastjsonschema" +version = "2.18.0" +description = "Fastest Python implementation of JSON schema" +optional = false +python-versions = "*" +files = [ + {file = "fastjsonschema-2.18.0-py3-none-any.whl", hash = "sha256:128039912a11a807068a7c87d0da36660afbfd7202780db26c4aa7153cfdc799"}, + {file = "fastjsonschema-2.18.0.tar.gz", hash = "sha256:e820349dd16f806e4bd1467a138dced9def4bc7d6213a34295272a6cac95b5bd"}, +] + +[package.extras] +devel = ["colorama", "json-spec", "jsonschema", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"] + +[[package]] +name = "filelock" +version = "3.12.4" +description = "A platform independent file lock." +optional = false +python-versions = ">=3.8" +files = [ + {file = "filelock-3.12.4-py3-none-any.whl", hash = "sha256:08c21d87ded6e2b9da6728c3dff51baf1dcecf973b768ef35bcbc3447edb9ad4"}, + {file = "filelock-3.12.4.tar.gz", hash = "sha256:2e6f249f1f3654291606e046b09f1fd5eac39b360664c27f5aad072012f8bcbd"}, +] + +[package.extras] +docs = ["furo (>=2023.7.26)", "sphinx (>=7.1.2)", "sphinx-autodoc-typehints (>=1.24)"] +testing = ["covdefaults (>=2.3)", "coverage (>=7.3)", "diff-cover (>=7.7)", "pytest (>=7.4)", "pytest-cov (>=4.1)", "pytest-mock (>=3.11.1)", "pytest-timeout (>=2.1)"] +typing = ["typing-extensions (>=4.7.1)"] + +[[package]] +name = "fqdn" +version = "1.5.1" +description = "Validates fully-qualified domain names against RFC 1123, so that they are acceptable to modern bowsers" +optional = true +python-versions = ">=2.7, !=3.0, !=3.1, !=3.2, !=3.3, !=3.4, <4" +files = [ + {file = "fqdn-1.5.1-py3-none-any.whl", hash = "sha256:3a179af3761e4df6eb2e026ff9e1a3033d3587bf980a0b1b2e1e5d08d7358014"}, + {file = "fqdn-1.5.1.tar.gz", 
hash = "sha256:105ed3677e767fb5ca086a0c1f4bb66ebc3c100be518f0e0d755d9eae164d89f"}, +] + +[[package]] +name = "idna" +version = "3.4" +description = "Internationalized Domain Names in Applications (IDNA)" +optional = false +python-versions = ">=3.5" +files = [ + {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"}, + {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"}, +] + +[[package]] +name = "imagesize" +version = "1.4.1" +description = "Getting image size from png/jpeg/jpeg2000/gif file" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +files = [ + {file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"}, + {file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"}, +] + +[[package]] +name = "importlib-metadata" +version = "6.8.0" +description = "Read metadata from Python packages" +optional = false +python-versions = ">=3.8" +files = [ + {file = "importlib_metadata-6.8.0-py3-none-any.whl", hash = "sha256:3ebb78df84a805d7698245025b975d9d67053cd94c79245ba4b3eb694abe68bb"}, + {file = "importlib_metadata-6.8.0.tar.gz", hash = "sha256:dbace7892d8c0c4ac1ad096662232f831d4e64f4c4545bd53016a3e9d4654743"}, +] + +[package.dependencies] +zipp = ">=0.5" + +[package.extras] +docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"] +perf = ["ipython"] +testing = ["flufl.flake8", "importlib-resources (>=1.3)", "packaging", "pyfakefs", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-perf (>=0.9.2)", "pytest-ruff"] + +[[package]] +name = "importlib-resources" +version = "6.1.0" +description = "Read resources from Python packages" +optional 
= false +python-versions = ">=3.8" +files = [ + {file = "importlib_resources-6.1.0-py3-none-any.whl", hash = "sha256:aa50258bbfa56d4e33fbd8aa3ef48ded10d1735f11532b8df95388cc6bdb7e83"}, + {file = "importlib_resources-6.1.0.tar.gz", hash = "sha256:9d48dcccc213325e810fd723e7fbb45ccb39f6cf5c31f00cf2b965f5f10f3cb9"}, +] + +[package.dependencies] +zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""} + +[package.extras] +docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (<7.2.5)", "sphinx (>=3.5)", "sphinx-lint"] +testing = ["pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-ruff", "zipp (>=3.17)"] + +[[package]] +name = "iniconfig" +version = "2.0.0" +description = "brain-dead simple config-ini parsing" +optional = false +python-versions = ">=3.7" +files = [ + {file = "iniconfig-2.0.0-py3-none-any.whl", hash = "sha256:b6a85871a79d2e3b22d2d1b94ac2824226a63c6b741c88f7ae975f18b6778374"}, + {file = "iniconfig-2.0.0.tar.gz", hash = "sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3"}, +] + +[[package]] +name = "ipykernel" +version = "6.25.2" +description = "IPython Kernel for Jupyter" +optional = true +python-versions = ">=3.8" +files = [ + {file = "ipykernel-6.25.2-py3-none-any.whl", hash = "sha256:2e2ee359baba19f10251b99415bb39de1e97d04e1fab385646f24f0596510b77"}, + {file = "ipykernel-6.25.2.tar.gz", hash = "sha256:f468ddd1f17acb48c8ce67fcfa49ba6d46d4f9ac0438c1f441be7c3d1372230b"}, +] + +[package.dependencies] +appnope = {version = "*", markers = "platform_system == \"Darwin\""} +comm = ">=0.1.1" +debugpy = ">=1.6.5" +ipython = ">=7.23.1" +jupyter-client = ">=6.1.12" +jupyter-core = ">=4.12,<5.0.dev0 || >=5.1.dev0" +matplotlib-inline = ">=0.1" +nest-asyncio = "*" +packaging = "*" +psutil = "*" +pyzmq = ">=20" +tornado = ">=6.1" +traitlets = ">=5.4.0" + +[package.extras] +cov = ["coverage[toml]", 
"curio", "matplotlib", "pytest-cov", "trio"] +docs = ["myst-parser", "pydata-sphinx-theme", "sphinx", "sphinx-autodoc-typehints", "sphinxcontrib-github-alt", "sphinxcontrib-spelling", "trio"] +pyqt5 = ["pyqt5"] +pyside6 = ["pyside6"] +test = ["flaky", "ipyparallel", "pre-commit", "pytest (>=7.0)", "pytest-asyncio", "pytest-cov", "pytest-timeout"] + +[[package]] +name = "ipython" +version = "8.12.2" +description = "IPython: Productive Interactive Computing" +optional = false +python-versions = ">=3.8" +files = [ + {file = "ipython-8.12.2-py3-none-any.whl", hash = "sha256:ea8801f15dfe4ffb76dea1b09b847430ffd70d827b41735c64a0638a04103bfc"}, + {file = "ipython-8.12.2.tar.gz", hash = "sha256:c7b80eb7f5a855a88efc971fda506ff7a91c280b42cdae26643e0f601ea281ea"}, +] + +[package.dependencies] +appnope = {version = "*", markers = "sys_platform == \"darwin\""} +backcall = "*" +colorama = {version = "*", markers = "sys_platform == \"win32\""} +decorator = "*" +jedi = ">=0.16" +matplotlib-inline = "*" +pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""} +pickleshare = "*" +prompt-toolkit = ">=3.0.30,<3.0.37 || >3.0.37,<3.1.0" +pygments = ">=2.4.0" +stack-data = "*" +traitlets = ">=5" +typing-extensions = {version = "*", markers = "python_version < \"3.10\""} + +[package.extras] +all = ["black", "curio", "docrepr", "ipykernel", "ipyparallel", "ipywidgets", "matplotlib", "matplotlib (!=3.2.0)", "nbconvert", "nbformat", "notebook", "numpy (>=1.21)", "pandas", "pytest (<7)", "pytest (<7.1)", "pytest-asyncio", "qtconsole", "setuptools (>=18.5)", "sphinx (>=1.3)", "sphinx-rtd-theme", "stack-data", "testpath", "trio", "typing-extensions"] +black = ["black"] +doc = ["docrepr", "ipykernel", "matplotlib", "pytest (<7)", "pytest (<7.1)", "pytest-asyncio", "setuptools (>=18.5)", "sphinx (>=1.3)", "sphinx-rtd-theme", "stack-data", "testpath", "typing-extensions"] +kernel = ["ipykernel"] +nbconvert = ["nbconvert"] +nbformat = ["nbformat"] +notebook = ["ipywidgets", "notebook"] 
+parallel = ["ipyparallel"] +qtconsole = ["qtconsole"] +test = ["pytest (<7.1)", "pytest-asyncio", "testpath"] +test-extra = ["curio", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.21)", "pandas", "pytest (<7.1)", "pytest-asyncio", "testpath", "trio"] + +[[package]] +name = "ipython-genutils" +version = "0.2.0" +description = "Vestigial utilities from IPython" +optional = true +python-versions = "*" +files = [ + {file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"}, + {file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"}, +] + +[[package]] +name = "ipywidgets" +version = "8.1.1" +description = "Jupyter interactive widgets" +optional = true +python-versions = ">=3.7" +files = [ + {file = "ipywidgets-8.1.1-py3-none-any.whl", hash = "sha256:2b88d728656aea3bbfd05d32c747cfd0078f9d7e159cf982433b58ad717eed7f"}, + {file = "ipywidgets-8.1.1.tar.gz", hash = "sha256:40211efb556adec6fa450ccc2a77d59ca44a060f4f9f136833df59c9f538e6e8"}, +] + +[package.dependencies] +comm = ">=0.1.3" +ipython = ">=6.1.0" +jupyterlab-widgets = ">=3.0.9,<3.1.0" +traitlets = ">=4.3.1" +widgetsnbextension = ">=4.0.9,<4.1.0" + +[package.extras] +test = ["ipykernel", "jsonschema", "pytest (>=3.6.0)", "pytest-cov", "pytz"] + +[[package]] +name = "isoduration" +version = "20.11.0" +description = "Operations with ISO 8601 durations" +optional = true +python-versions = ">=3.7" +files = [ + {file = "isoduration-20.11.0-py3-none-any.whl", hash = "sha256:b2904c2a4228c3d44f409c8ae8e2370eb21a26f7ac2ec5446df141dde3452042"}, + {file = "isoduration-20.11.0.tar.gz", hash = "sha256:ac2f9015137935279eac671f94f89eb00584f940f5dc49462a0c4ee692ba1bd9"}, +] + +[package.dependencies] +arrow = ">=0.15.0" + +[[package]] +name = "jedi" +version = "0.19.0" +description = "An autocompletion tool for Python that can be used for text editors." 
+optional = false +python-versions = ">=3.6" +files = [ + {file = "jedi-0.19.0-py2.py3-none-any.whl", hash = "sha256:cb8ce23fbccff0025e9386b5cf85e892f94c9b822378f8da49970471335ac64e"}, + {file = "jedi-0.19.0.tar.gz", hash = "sha256:bcf9894f1753969cbac8022a8c2eaee06bfa3724e4192470aaffe7eb6272b0c4"}, +] + +[package.dependencies] +parso = ">=0.8.3,<0.9.0" + +[package.extras] +docs = ["Jinja2 (==2.11.3)", "MarkupSafe (==1.1.1)", "Pygments (==2.8.1)", "alabaster (==0.7.12)", "babel (==2.9.1)", "chardet (==4.0.0)", "commonmark (==0.8.1)", "docutils (==0.17.1)", "future (==0.18.2)", "idna (==2.10)", "imagesize (==1.2.0)", "mock (==1.0.1)", "packaging (==20.9)", "pyparsing (==2.4.7)", "pytz (==2021.1)", "readthedocs-sphinx-ext (==2.1.4)", "recommonmark (==0.5.0)", "requests (==2.25.1)", "six (==1.15.0)", "snowballstemmer (==2.1.0)", "sphinx (==1.8.5)", "sphinx-rtd-theme (==0.4.3)", "sphinxcontrib-serializinghtml (==1.1.4)", "sphinxcontrib-websupport (==1.2.4)", "urllib3 (==1.26.4)"] +qa = ["flake8 (==5.0.4)", "mypy (==0.971)", "types-setuptools (==67.2.0.1)"] +testing = ["Django (<3.1)", "attrs", "colorama", "docopt", "pytest (<7.0.0)"] + +[[package]] +name = "jinja2" +version = "3.1.2" +description = "A very fast and expressive template engine." 
+optional = false +python-versions = ">=3.7" +files = [ + {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"}, + {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"}, +] + +[package.dependencies] +MarkupSafe = ">=2.0" + +[package.extras] +i18n = ["Babel (>=2.7)"] + +[[package]] +name = "joblib" +version = "1.3.2" +description = "Lightweight pipelining with Python functions" +optional = false +python-versions = ">=3.7" +files = [ + {file = "joblib-1.3.2-py3-none-any.whl", hash = "sha256:ef4331c65f239985f3f2220ecc87db222f08fd22097a3dd5698f693875f8cbb9"}, + {file = "joblib-1.3.2.tar.gz", hash = "sha256:92f865e621e17784e7955080b6d042489e3b8e294949cc44c6eac304f59772b1"}, +] + +[[package]] +name = "json5" +version = "0.9.14" +description = "A Python implementation of the JSON5 data format." +optional = true +python-versions = "*" +files = [ + {file = "json5-0.9.14-py2.py3-none-any.whl", hash = "sha256:740c7f1b9e584a468dbb2939d8d458db3427f2c93ae2139d05f47e453eae964f"}, + {file = "json5-0.9.14.tar.gz", hash = "sha256:9ed66c3a6ca3510a976a9ef9b8c0787de24802724ab1860bc0153c7fdd589b02"}, +] + +[package.extras] +dev = ["hypothesis"] + +[[package]] +name = "jsonpointer" +version = "2.4" +description = "Identify specific nodes in a JSON document (RFC 6901)" +optional = true +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*" +files = [ + {file = "jsonpointer-2.4-py2.py3-none-any.whl", hash = "sha256:15d51bba20eea3165644553647711d150376234112651b4f1811022aecad7d7a"}, + {file = "jsonpointer-2.4.tar.gz", hash = "sha256:585cee82b70211fa9e6043b7bb89db6e1aa49524340dde8ad6b63206ea689d88"}, +] + +[[package]] +name = "jsonschema" +version = "4.19.1" +description = "An implementation of JSON Schema validation for Python" +optional = false +python-versions = ">=3.8" +files = [ + {file = "jsonschema-4.19.1-py3-none-any.whl", 
hash = "sha256:cd5f1f9ed9444e554b38ba003af06c0a8c2868131e56bfbef0550fb450c0330e"}, + {file = "jsonschema-4.19.1.tar.gz", hash = "sha256:ec84cc37cfa703ef7cd4928db24f9cb31428a5d0fa77747b8b51a847458e0bbf"}, +] + +[package.dependencies] +attrs = ">=22.2.0" +fqdn = {version = "*", optional = true, markers = "extra == \"format-nongpl\""} +idna = {version = "*", optional = true, markers = "extra == \"format-nongpl\""} +importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""} +isoduration = {version = "*", optional = true, markers = "extra == \"format-nongpl\""} +jsonpointer = {version = ">1.13", optional = true, markers = "extra == \"format-nongpl\""} +jsonschema-specifications = ">=2023.03.6" +pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""} +referencing = ">=0.28.4" +rfc3339-validator = {version = "*", optional = true, markers = "extra == \"format-nongpl\""} +rfc3986-validator = {version = ">0.1.0", optional = true, markers = "extra == \"format-nongpl\""} +rpds-py = ">=0.7.1" +uri-template = {version = "*", optional = true, markers = "extra == \"format-nongpl\""} +webcolors = {version = ">=1.11", optional = true, markers = "extra == \"format-nongpl\""} + +[package.extras] +format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"] +format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=1.11)"] + +[[package]] +name = "jsonschema-specifications" +version = "2023.7.1" +description = "The JSON Schema meta-schemas and vocabularies, exposed as a Registry" +optional = false +python-versions = ">=3.8" +files = [ + {file = "jsonschema_specifications-2023.7.1-py3-none-any.whl", hash = "sha256:05adf340b659828a004220a9613be00fa3f223f2b82002e273dee62fd50524b1"}, + {file = "jsonschema_specifications-2023.7.1.tar.gz", hash = 
"sha256:c91a50404e88a1f6ba40636778e2ee08f6e24c5613fe4c53ac24578a5a7f72bb"}, +] + +[package.dependencies] +importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""} +referencing = ">=0.28.0" + +[[package]] +name = "jupyter" +version = "1.0.0" +description = "Jupyter metapackage. Install all the Jupyter components in one go." +optional = true +python-versions = "*" +files = [ + {file = "jupyter-1.0.0-py2.py3-none-any.whl", hash = "sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78"}, + {file = "jupyter-1.0.0.tar.gz", hash = "sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"}, + {file = "jupyter-1.0.0.zip", hash = "sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7"}, +] + +[package.dependencies] +ipykernel = "*" +ipywidgets = "*" +jupyter-console = "*" +nbconvert = "*" +notebook = "*" +qtconsole = "*" + +[[package]] +name = "jupyter-client" +version = "8.3.1" +description = "Jupyter protocol implementation and client libraries" +optional = false +python-versions = ">=3.8" +files = [ + {file = "jupyter_client-8.3.1-py3-none-any.whl", hash = "sha256:5eb9f55eb0650e81de6b7e34308d8b92d04fe4ec41cd8193a913979e33d8e1a5"}, + {file = "jupyter_client-8.3.1.tar.gz", hash = "sha256:60294b2d5b869356c893f57b1a877ea6510d60d45cf4b38057f1672d85699ac9"}, +] + +[package.dependencies] +importlib-metadata = {version = ">=4.8.3", markers = "python_version < \"3.10\""} +jupyter-core = ">=4.12,<5.0.dev0 || >=5.1.dev0" +python-dateutil = ">=2.8.2" +pyzmq = ">=23.0" +tornado = ">=6.2" +traitlets = ">=5.3" + +[package.extras] +docs = ["ipykernel", "myst-parser", "pydata-sphinx-theme", "sphinx (>=4)", "sphinx-autodoc-typehints", "sphinxcontrib-github-alt", "sphinxcontrib-spelling"] +test = ["coverage", "ipykernel (>=6.14)", "mypy", "paramiko", "pre-commit", "pytest", "pytest-cov", "pytest-jupyter[client] (>=0.4.1)", "pytest-timeout"] + +[[package]] +name = "jupyter-console" +version = "6.6.3" +description = 
"Jupyter terminal console" +optional = true +python-versions = ">=3.7" +files = [ + {file = "jupyter_console-6.6.3-py3-none-any.whl", hash = "sha256:309d33409fcc92ffdad25f0bcdf9a4a9daa61b6f341177570fdac03de5352485"}, + {file = "jupyter_console-6.6.3.tar.gz", hash = "sha256:566a4bf31c87adbfadf22cdf846e3069b59a71ed5da71d6ba4d8aaad14a53539"}, +] + +[package.dependencies] +ipykernel = ">=6.14" +ipython = "*" +jupyter-client = ">=7.0.0" +jupyter-core = ">=4.12,<5.0.dev0 || >=5.1.dev0" +prompt-toolkit = ">=3.0.30" +pygments = "*" +pyzmq = ">=17" +traitlets = ">=5.4" + +[package.extras] +test = ["flaky", "pexpect", "pytest"] + +[[package]] +name = "jupyter-core" +version = "5.3.1" +description = "Jupyter core package. A base package on which Jupyter projects rely." +optional = false +python-versions = ">=3.8" +files = [ + {file = "jupyter_core-5.3.1-py3-none-any.whl", hash = "sha256:ae9036db959a71ec1cac33081eeb040a79e681f08ab68b0883e9a676c7a90dce"}, + {file = "jupyter_core-5.3.1.tar.gz", hash = "sha256:5ba5c7938a7f97a6b0481463f7ff0dbac7c15ba48cf46fa4035ca6e838aa1aba"}, +] + +[package.dependencies] +platformdirs = ">=2.5" +pywin32 = {version = ">=300", markers = "sys_platform == \"win32\" and platform_python_implementation != \"PyPy\""} +traitlets = ">=5.3" + +[package.extras] +docs = ["myst-parser", "sphinx-autodoc-typehints", "sphinxcontrib-github-alt", "sphinxcontrib-spelling", "traitlets"] +test = ["ipykernel", "pre-commit", "pytest", "pytest-cov", "pytest-timeout"] + +[[package]] +name = "jupyter-events" +version = "0.7.0" +description = "Jupyter Event System library" +optional = true +python-versions = ">=3.8" +files = [ + {file = "jupyter_events-0.7.0-py3-none-any.whl", hash = "sha256:4753da434c13a37c3f3c89b500afa0c0a6241633441421f6adafe2fb2e2b924e"}, + {file = "jupyter_events-0.7.0.tar.gz", hash = "sha256:7be27f54b8388c03eefea123a4f79247c5b9381c49fb1cd48615ee191eb12615"}, +] + +[package.dependencies] +jsonschema = {version = ">=4.18.0", extras = ["format-nongpl"]} 
+python-json-logger = ">=2.0.4" +pyyaml = ">=5.3" +referencing = "*" +rfc3339-validator = "*" +rfc3986-validator = ">=0.1.1" +traitlets = ">=5.3" + +[package.extras] +cli = ["click", "rich"] +docs = ["jupyterlite-sphinx", "myst-parser", "pydata-sphinx-theme", "sphinxcontrib-spelling"] +test = ["click", "pre-commit", "pytest (>=7.0)", "pytest-asyncio (>=0.19.0)", "pytest-console-scripts", "rich"] + +[[package]] +name = "jupyter-lsp" +version = "2.2.0" +description = "Multi-Language Server WebSocket proxy for Jupyter Notebook/Lab server" +optional = true +python-versions = ">=3.8" +files = [ + {file = "jupyter-lsp-2.2.0.tar.gz", hash = "sha256:8ebbcb533adb41e5d635eb8fe82956b0aafbf0fd443b6c4bfa906edeeb8635a1"}, + {file = "jupyter_lsp-2.2.0-py3-none-any.whl", hash = "sha256:9e06b8b4f7dd50300b70dd1a78c0c3b0c3d8fa68e0f2d8a5d1fbab62072aca3f"}, +] + +[package.dependencies] +importlib-metadata = {version = ">=4.8.3", markers = "python_version < \"3.10\""} +jupyter-server = ">=1.1.2" + +[[package]] +name = "jupyter-server" +version = "2.7.3" +description = "The backend—i.e. core services, APIs, and REST endpoints—to Jupyter web applications." 
+optional = true +python-versions = ">=3.8" +files = [ + {file = "jupyter_server-2.7.3-py3-none-any.whl", hash = "sha256:8e4b90380b59d7a1e31086c4692231f2a2ea4cb269f5516e60aba72ce8317fc9"}, + {file = "jupyter_server-2.7.3.tar.gz", hash = "sha256:d4916c8581c4ebbc534cebdaa8eca2478d9f3bfdd88eae29fcab0120eac57649"}, +] + +[package.dependencies] +anyio = ">=3.1.0" +argon2-cffi = "*" +jinja2 = "*" +jupyter-client = ">=7.4.4" +jupyter-core = ">=4.12,<5.0.dev0 || >=5.1.dev0" +jupyter-events = ">=0.6.0" +jupyter-server-terminals = "*" +nbconvert = ">=6.4.4" +nbformat = ">=5.3.0" +overrides = "*" +packaging = "*" +prometheus-client = "*" +pywinpty = {version = "*", markers = "os_name == \"nt\""} +pyzmq = ">=24" +send2trash = ">=1.8.2" +terminado = ">=0.8.3" +tornado = ">=6.2.0" +traitlets = ">=5.6.0" +websocket-client = "*" + +[package.extras] +docs = ["ipykernel", "jinja2", "jupyter-client", "jupyter-server", "myst-parser", "nbformat", "prometheus-client", "pydata-sphinx-theme", "send2trash", "sphinx-autodoc-typehints", "sphinxcontrib-github-alt", "sphinxcontrib-openapi (>=0.8.0)", "sphinxcontrib-spelling", "sphinxemoji", "tornado", "typing-extensions"] +test = ["flaky", "ipykernel", "pre-commit", "pytest (>=7.0)", "pytest-console-scripts", "pytest-jupyter[server] (>=0.4)", "pytest-timeout", "requests"] + +[[package]] +name = "jupyter-server-terminals" +version = "0.4.4" +description = "A Jupyter Server Extension Providing Terminals." 
+optional = true +python-versions = ">=3.8" +files = [ + {file = "jupyter_server_terminals-0.4.4-py3-none-any.whl", hash = "sha256:75779164661cec02a8758a5311e18bb8eb70c4e86c6b699403100f1585a12a36"}, + {file = "jupyter_server_terminals-0.4.4.tar.gz", hash = "sha256:57ab779797c25a7ba68e97bcfb5d7740f2b5e8a83b5e8102b10438041a7eac5d"}, +] + +[package.dependencies] +pywinpty = {version = ">=2.0.3", markers = "os_name == \"nt\""} +terminado = ">=0.8.3" + +[package.extras] +docs = ["jinja2", "jupyter-server", "mistune (<3.0)", "myst-parser", "nbformat", "packaging", "pydata-sphinx-theme", "sphinxcontrib-github-alt", "sphinxcontrib-openapi", "sphinxcontrib-spelling", "sphinxemoji", "tornado"] +test = ["coverage", "jupyter-server (>=2.0.0)", "pytest (>=7.0)", "pytest-cov", "pytest-jupyter[server] (>=0.5.3)", "pytest-timeout"] + +[[package]] +name = "jupyterlab" +version = "4.0.6" +description = "JupyterLab computational environment" +optional = true +python-versions = ">=3.8" +files = [ + {file = "jupyterlab-4.0.6-py3-none-any.whl", hash = "sha256:7d9dacad1e3f30fe4d6d4efc97fda25fbb5012012b8f27cc03a2283abcdee708"}, + {file = "jupyterlab-4.0.6.tar.gz", hash = "sha256:6c43ae5a6a1fd2fdfafcb3454004958bde6da76331abb44cffc6f9e436b19ba1"}, +] + +[package.dependencies] +async-lru = ">=1.0.0" +importlib-metadata = {version = ">=4.8.3", markers = "python_version < \"3.10\""} +importlib-resources = {version = ">=1.4", markers = "python_version < \"3.9\""} +ipykernel = "*" +jinja2 = ">=3.0.3" +jupyter-core = "*" +jupyter-lsp = ">=2.0.0" +jupyter-server = ">=2.4.0,<3" +jupyterlab-server = ">=2.19.0,<3" +notebook-shim = ">=0.2" +packaging = "*" +tomli = {version = "*", markers = "python_version < \"3.11\""} +tornado = ">=6.2.0" +traitlets = "*" + +[package.extras] +dev = ["black[jupyter] (==23.7.0)", "build", "bump2version", "coverage", "hatch", "pre-commit", "pytest-cov", "ruff (==0.0.286)"] +docs = ["jsx-lexer", "myst-parser", "pydata-sphinx-theme (>=0.13.0)", "pytest", 
"pytest-check-links", "pytest-tornasync", "sphinx (>=1.8,<7.2.0)", "sphinx-copybutton"] +docs-screenshots = ["altair (==5.0.1)", "ipython (==8.14.0)", "ipywidgets (==8.0.6)", "jupyterlab-geojson (==3.4.0)", "jupyterlab-language-pack-zh-cn (==4.0.post0)", "matplotlib (==3.7.1)", "nbconvert (>=7.0.0)", "pandas (==2.0.2)", "scipy (==1.10.1)", "vega-datasets (==0.9.0)"] +test = ["coverage", "pytest (>=7.0)", "pytest-check-links (>=0.7)", "pytest-console-scripts", "pytest-cov", "pytest-jupyter (>=0.5.3)", "pytest-timeout", "pytest-tornasync", "requests", "requests-cache", "virtualenv"] + +[[package]] +name = "jupyterlab-pygments" +version = "0.2.2" +description = "Pygments theme using JupyterLab CSS variables" +optional = false +python-versions = ">=3.7" +files = [ + {file = "jupyterlab_pygments-0.2.2-py2.py3-none-any.whl", hash = "sha256:2405800db07c9f770863bcf8049a529c3dd4d3e28536638bd7c1c01d2748309f"}, + {file = "jupyterlab_pygments-0.2.2.tar.gz", hash = "sha256:7405d7fde60819d905a9fa8ce89e4cd830e318cdad22a0030f7a901da705585d"}, +] + +[[package]] +name = "jupyterlab-server" +version = "2.25.0" +description = "A set of server components for JupyterLab and JupyterLab like applications." 
+optional = true +python-versions = ">=3.8" +files = [ + {file = "jupyterlab_server-2.25.0-py3-none-any.whl", hash = "sha256:c9f67a98b295c5dee87f41551b0558374e45d449f3edca153dd722140630dcb2"}, + {file = "jupyterlab_server-2.25.0.tar.gz", hash = "sha256:77c2f1f282d610f95e496e20d5bf1d2a7706826dfb7b18f3378ae2870d272fb7"}, +] + +[package.dependencies] +babel = ">=2.10" +importlib-metadata = {version = ">=4.8.3", markers = "python_version < \"3.10\""} +jinja2 = ">=3.0.3" +json5 = ">=0.9.0" +jsonschema = ">=4.18.0" +jupyter-server = ">=1.21,<3" +packaging = ">=21.3" +requests = ">=2.31" + +[package.extras] +docs = ["autodoc-traits", "jinja2 (<3.2.0)", "mistune (<4)", "myst-parser", "pydata-sphinx-theme", "sphinx", "sphinx-copybutton", "sphinxcontrib-openapi (>0.8)"] +openapi = ["openapi-core (>=0.18.0,<0.19.0)", "ruamel-yaml"] +test = ["hatch", "ipykernel", "openapi-core (>=0.18.0,<0.19.0)", "openapi-spec-validator (>=0.6.0,<0.7.0)", "pytest (>=7.0)", "pytest-console-scripts", "pytest-cov", "pytest-jupyter[server] (>=0.6.2)", "pytest-timeout", "requests-mock", "ruamel-yaml", "sphinxcontrib-spelling", "strict-rfc3339", "werkzeug"] + +[[package]] +name = "jupyterlab-widgets" +version = "3.0.9" +description = "Jupyter interactive widgets for JupyterLab" +optional = true +python-versions = ">=3.7" +files = [ + {file = "jupyterlab_widgets-3.0.9-py3-none-any.whl", hash = "sha256:3cf5bdf5b897bf3bccf1c11873aa4afd776d7430200f765e0686bd352487b58d"}, + {file = "jupyterlab_widgets-3.0.9.tar.gz", hash = "sha256:6005a4e974c7beee84060fdfba341a3218495046de8ae3ec64888e5fe19fdb4c"}, +] + +[[package]] +name = "mako" +version = "1.2.4" +description = "A super-fast templating language that borrows the best ideas from the existing templating languages." 
+optional = true +python-versions = ">=3.7" +files = [ + {file = "Mako-1.2.4-py3-none-any.whl", hash = "sha256:c97c79c018b9165ac9922ae4f32da095ffd3c4e6872b45eded42926deea46818"}, + {file = "Mako-1.2.4.tar.gz", hash = "sha256:d60a3903dc3bb01a18ad6a89cdbe2e4eadc69c0bc8ef1e3773ba53d44c3f7a34"}, +] + +[package.dependencies] +MarkupSafe = ">=0.9.2" + +[package.extras] +babel = ["Babel"] +lingua = ["lingua"] +testing = ["pytest"] + +[[package]] +name = "markupsafe" +version = "2.1.3" +description = "Safely add untrusted strings to HTML/XML markup." +optional = false +python-versions = ">=3.7" +files = [ + {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:cd0f502fe016460680cd20aaa5a76d241d6f35a1c3350c474bac1273803893fa"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e09031c87a1e51556fdcb46e5bd4f59dfb743061cf93c4d6831bf894f125eb57"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68e78619a61ecf91e76aa3e6e8e33fc4894a2bebe93410754bd28fce0a8a4f9f"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:65c1a9bcdadc6c28eecee2c119465aebff8f7a584dd719facdd9e825ec61ab52"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:525808b8019e36eb524b8c68acdd63a37e75714eac50e988180b169d64480a00"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:962f82a3086483f5e5f64dbad880d31038b698494799b097bc59c2edf392fce6"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:aa7bd130efab1c280bed0f45501b7c8795f9fdbeb02e965371bbef3523627779"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c9c804664ebe8f83a211cace637506669e7890fec1b4195b505c214e50dd4eb7"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-win32.whl", hash = 
"sha256:10bbfe99883db80bdbaff2dcf681dfc6533a614f700da1287707e8a5d78a8431"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl", hash = "sha256:1577735524cdad32f9f694208aa75e422adba74f1baee7551620e43a3141f559"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ad9e82fb8f09ade1c3e1b996a6337afac2b8b9e365f926f5a61aacc71adc5b3c"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c0fae6c3be832a0a0473ac912810b2877c8cb9d76ca48de1ed31e1c68386575"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b076b6226fb84157e3f7c971a47ff3a679d837cf338547532ab866c57930dbee"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bfce63a9e7834b12b87c64d6b155fdd9b3b96191b6bd334bf37db7ff1fe457f2"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:338ae27d6b8745585f87218a3f23f1512dbf52c26c28e322dbe54bcede54ccb9"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e4dd52d80b8c83fdce44e12478ad2e85c64ea965e75d66dbeafb0a3e77308fcc"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:df0be2b576a7abbf737b1575f048c23fb1d769f267ec4358296f31c2479db8f9"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5bbe06f8eeafd38e5d0a4894ffec89378b6c6a625ff57e3028921f8ff59318ac"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-win32.whl", hash = "sha256:dd15ff04ffd7e05ffcb7fe79f1b98041b8ea30ae9234aed2a9168b5797c3effb"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:134da1eca9ec0ae528110ccc9e48041e0828d79f24121a1a146161103c76e686"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8e254ae696c88d98da6555f5ace2279cf7cd5b3f52be2b5cf97feafe883b58d2"}, + {file = 
"MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb0932dc158471523c9637e807d9bfb93e06a95cbf010f1a38b98623b929ef2b"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9402b03f1a1b4dc4c19845e5c749e3ab82d5078d16a2a4c2cd2df62d57bb0707"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca379055a47383d02a5400cb0d110cef0a776fc644cda797db0c5696cfd7e18e"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b7ff0f54cb4ff66dd38bebd335a38e2c22c41a8ee45aa608efc890ac3e3931bc"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c011a4149cfbcf9f03994ec2edffcb8b1dc2d2aede7ca243746df97a5d41ce48"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:56d9f2ecac662ca1611d183feb03a3fa4406469dafe241673d521dd5ae92a155"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-win32.whl", hash = "sha256:8758846a7e80910096950b67071243da3e5a20ed2546e6392603c096778d48e0"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-win_amd64.whl", hash = "sha256:787003c0ddb00500e49a10f2844fac87aa6ce977b90b0feaaf9de23c22508b24"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:2ef12179d3a291be237280175b542c07a36e7f60718296278d8593d21ca937d4"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2c1b19b3aaacc6e57b7e25710ff571c24d6c3613a45e905b1fde04d691b98ee0"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8afafd99945ead6e075b973fefa56379c5b5c53fd8937dad92c662da5d8fd5ee"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c41976a29d078bb235fea9b2ecd3da465df42a562910f9022f1a03107bd02be"}, + {file = 
"MarkupSafe-2.1.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d080e0a5eb2529460b30190fcfcc4199bd7f827663f858a226a81bc27beaa97e"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:69c0f17e9f5a7afdf2cc9fb2d1ce6aabdb3bafb7f38017c0b77862bcec2bbad8"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:504b320cd4b7eff6f968eddf81127112db685e81f7e36e75f9f84f0df46041c3"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:42de32b22b6b804f42c5d98be4f7e5e977ecdd9ee9b660fda1a3edf03b11792d"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-win32.whl", hash = "sha256:ceb01949af7121f9fc39f7d27f91be8546f3fb112c608bc4029aef0bab86a2a5"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-win_amd64.whl", hash = "sha256:1b40069d487e7edb2676d3fbdb2b0829ffa2cd63a2ec26c4938b2d34391b4ecc"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8023faf4e01efadfa183e863fefde0046de576c6f14659e8782065bcece22198"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6b2b56950d93e41f33b4223ead100ea0fe11f8e6ee5f641eb753ce4b77a7042b"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9dcdfd0eaf283af041973bff14a2e143b8bd64e069f4c383416ecd79a81aab58"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:05fb21170423db021895e1ea1e1f3ab3adb85d1c2333cbc2310f2a26bc77272e"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:282c2cb35b5b673bbcadb33a585408104df04f14b2d9b01d4c345a3b92861c2c"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab4a0df41e7c16a1392727727e7998a467472d0ad65f3ad5e6e765015df08636"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_i686.whl", hash = 
"sha256:7ef3cb2ebbf91e330e3bb937efada0edd9003683db6b57bb108c4001f37a02ea"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0a4e4a1aff6c7ac4cd55792abf96c915634c2b97e3cc1c7129578aa68ebd754e"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-win32.whl", hash = "sha256:fec21693218efe39aa7f8599346e90c705afa52c5b31ae019b2e57e8f6542bb2"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-win_amd64.whl", hash = "sha256:3fd4abcb888d15a94f32b75d8fd18ee162ca0c064f35b11134be77050296d6ba"}, + {file = "MarkupSafe-2.1.3.tar.gz", hash = "sha256:af598ed32d6ae86f1b747b82783958b1a4ab8f617b06fe68795c7f026abbdcad"}, +] + +[[package]] +name = "matplotlib" +version = "1.5.3" +description = "Python plotting package" +optional = true +python-versions = "*" +files = [ + {file = "matplotlib-1.5.3-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl", hash = "sha256:29f6b1351560af1ea34986b327328ccc382050748a4540ac11541419e1922b53"}, + {file = "matplotlib-1.5.3-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:86156ce9ce01977b6a02bd8e5680ad5a8811d06375ac186ec69aae346136ffd8"}, + {file = "matplotlib-1.5.3-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:2fb6ec96e4e851f537e81a3586e9bd8f1c3fd3af6696fa24e191fb5dfec326fa"}, + {file = "matplotlib-1.5.3-cp27-cp27m-win32.whl", hash = "sha256:6c284651db271821158d7d1fb029945371eaec79dbfa29fd7e93de79d1f52d79"}, + {file = "matplotlib-1.5.3-cp27-cp27m-win_amd64.whl", hash = "sha256:743e6f7bc75bdc18b49fcafd00539c7bb18babb3abd9d0d8eda2038643f6dc86"}, + {file = "matplotlib-1.5.3-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:c1abc3521645d89d61ca7c7bc5aaf0cd4623b36890d4d7a7c1388f1f334db6af"}, + {file = "matplotlib-1.5.3-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:fab31bc9aa99b634471807d79f019cd12b8bab99f16b2575d8fc04229d66f168"}, + {file = "matplotlib-1.5.3-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl", hash = 
"sha256:cf42c86fc1243e8f7d0430225319adf31499d80d2fb4e225d451f85db3e23d6a"}, + {file = "matplotlib-1.5.3-cp34-cp34m-manylinux1_x86_64.whl", hash = "sha256:74200cb3fde281d772bbe0f7c9b8efb3d7b1faa77b1709d838e187b3b4fdf50d"}, + {file = "matplotlib-1.5.3-cp34-cp34m-win32.whl", hash = "sha256:576867bdf33bfa7f6e161f1b44184bc220afa24aa67f383a3996a1ca3eea5d9c"}, + {file = "matplotlib-1.5.3-cp34-cp34m-win_amd64.whl", hash = "sha256:23af332bea034f1f69086637d550d9a6717b09f2030a4c5094eb82d6fda9b0de"}, + {file = "matplotlib-1.5.3-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl", hash = "sha256:7fb9616444b876c2c964d0d90597e3fb717cb0ff0a4b478199fe39041ef78ccd"}, + {file = "matplotlib-1.5.3-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:713b0e6eb876070832f7102b27bb4ff867eba54d8aca5116b81af11e39f6c0b2"}, + {file = "matplotlib-1.5.3-cp35-cp35m-win32.whl", hash = "sha256:eff5bc7f02b7c1afc2a36f83af10c3d6bdac8fc07d1dfc01e48d04e46720f8b9"}, + {file = "matplotlib-1.5.3-cp35-cp35m-win_amd64.whl", hash = "sha256:ad635db9def26a514337aac719ed74f5064bd36d552cb8d547e18335da26af85"}, + {file = "matplotlib-1.5.3-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl", hash = "sha256:683b586474c0337c8303f41941417fa8c334626d42f19282c476065ce0a96b1c"}, + {file = "matplotlib-1.5.3-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:74daa74d6f4989c99c87070fbc8348309fef6ea43a4214e2d5dd92479f80825e"}, + {file = "matplotlib-1.5.3-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:f094e9e19973c371c56ab43a53ab1624edd96101ff407f9230401a503c877c44"}, + {file = "matplotlib-1.5.3-cp36-cp36m-win32.whl", hash = "sha256:1fbcbac5c3d90ce233fbb371bb49a8c2da43f54e20d196406cf37843e24582c3"}, + {file = "matplotlib-1.5.3-cp36-cp36m-win_amd64.whl", hash = "sha256:98e7a6cc386e4c04b3340a2641376fe779621303c2802ef385fca3289dcd0e16"}, + {file = "matplotlib-1.5.3.tar.gz", hash = 
"sha256:a0a5dc39f785014f2088fed2c6d2d129f0444f71afbb9c44f7bdf1b14d86ebbc"}, +] + +[package.dependencies] +cycler = "*" +numpy = ">=1.6" +pyparsing = ">=1.5.6,<2.0.0 || >2.0.0,<2.0.4 || >2.0.4,<2.1.2 || >2.1.2" +python-dateutil = "*" +pytz = "*" + +[[package]] +name = "matplotlib-inline" +version = "0.1.6" +description = "Inline Matplotlib backend for Jupyter" +optional = false +python-versions = ">=3.5" +files = [ + {file = "matplotlib-inline-0.1.6.tar.gz", hash = "sha256:f887e5f10ba98e8d2b150ddcf4702c1e5f8b3a20005eb0f74bfdbd360ee6f304"}, + {file = "matplotlib_inline-0.1.6-py3-none-any.whl", hash = "sha256:f1f41aab5328aa5aaea9b16d083b128102f8712542f819fe7e6a420ff581b311"}, +] + +[package.dependencies] +traitlets = "*" + +[[package]] +name = "mistune" +version = "3.0.1" +description = "A sane and fast Markdown parser with useful plugins and renderers" +optional = false +python-versions = ">=3.7" +files = [ + {file = "mistune-3.0.1-py3-none-any.whl", hash = "sha256:b9b3e438efbb57c62b5beb5e134dab664800bdf1284a7ee09e8b12b13eb1aac6"}, + {file = "mistune-3.0.1.tar.gz", hash = "sha256:e912116c13aa0944f9dc530db38eb88f6a77087ab128f49f84a48f4c05ea163c"}, +] + +[[package]] +name = "mock" +version = "2.0.0" +description = "Rolling backport of unittest.mock for all Pythons" +optional = false +python-versions = "*" +files = [ + {file = "mock-2.0.0-py2.py3-none-any.whl", hash = "sha256:5ce3c71c5545b472da17b72268978914d0252980348636840bd34a00b5cc96c1"}, + {file = "mock-2.0.0.tar.gz", hash = "sha256:b158b6df76edd239b8208d481dc46b6afd45a846b7812ff0ce58971cf5bc8bba"}, +] + +[package.dependencies] +pbr = ">=0.11" +six = ">=1.9" + +[package.extras] +docs = ["Pygments (<2)", "jinja2 (<2.7)", "sphinx", "sphinx (<1.3)"] +test = ["unittest2 (>=1.1.0)"] + +[[package]] +name = "natsort" +version = "8.4.0" +description = "Simple yet flexible natural sorting in Python." 
+optional = false +python-versions = ">=3.7" +files = [ + {file = "natsort-8.4.0-py3-none-any.whl", hash = "sha256:4732914fb471f56b5cce04d7bae6f164a592c7712e1c85f9ef585e197299521c"}, + {file = "natsort-8.4.0.tar.gz", hash = "sha256:45312c4a0e5507593da193dedd04abb1469253b601ecaf63445ad80f0a1ea581"}, +] + +[package.extras] +fast = ["fastnumbers (>=2.0.0)"] +icu = ["PyICU (>=1.0.0)"] + +[[package]] +name = "nbclient" +version = "0.8.0" +description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor." +optional = false +python-versions = ">=3.8.0" +files = [ + {file = "nbclient-0.8.0-py3-none-any.whl", hash = "sha256:25e861299e5303a0477568557c4045eccc7a34c17fc08e7959558707b9ebe548"}, + {file = "nbclient-0.8.0.tar.gz", hash = "sha256:f9b179cd4b2d7bca965f900a2ebf0db4a12ebff2f36a711cb66861e4ae158e55"}, +] + +[package.dependencies] +jupyter-client = ">=6.1.12" +jupyter-core = ">=4.12,<5.0.dev0 || >=5.1.dev0" +nbformat = ">=5.1" +traitlets = ">=5.4" + +[package.extras] +dev = ["pre-commit"] +docs = ["autodoc-traits", "mock", "moto", "myst-parser", "nbclient[test]", "sphinx (>=1.7)", "sphinx-book-theme", "sphinxcontrib-spelling"] +test = ["flaky", "ipykernel (>=6.19.3)", "ipython", "ipywidgets", "nbconvert (>=7.0.0)", "pytest (>=7.0)", "pytest-asyncio", "pytest-cov (>=4.0)", "testpath", "xmltodict"] + +[[package]] +name = "nbconvert" +version = "7.8.0" +description = "Converting Jupyter Notebooks" +optional = false +python-versions = ">=3.8" +files = [ + {file = "nbconvert-7.8.0-py3-none-any.whl", hash = "sha256:aec605e051fa682ccc7934ccc338ba1e8b626cfadbab0db592106b630f63f0f2"}, + {file = "nbconvert-7.8.0.tar.gz", hash = "sha256:f5bc15a1247e14dd41ceef0c0a3bc70020e016576eb0578da62f1c5b4f950479"}, +] + +[package.dependencies] +beautifulsoup4 = "*" +bleach = "!=5.0.0" +defusedxml = "*" +importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""} +jinja2 = ">=3.0" +jupyter-core = ">=4.7" +jupyterlab-pygments = "*" 
+markupsafe = ">=2.0" +mistune = ">=2.0.3,<4" +nbclient = ">=0.5.0" +nbformat = ">=5.7" +packaging = "*" +pandocfilters = ">=1.4.1" +pygments = ">=2.4.1" +tinycss2 = "*" +traitlets = ">=5.1" + +[package.extras] +all = ["nbconvert[docs,qtpdf,serve,test,webpdf]"] +docs = ["ipykernel", "ipython", "myst-parser", "nbsphinx (>=0.2.12)", "pydata-sphinx-theme", "sphinx (==5.0.2)", "sphinxcontrib-spelling"] +qtpdf = ["nbconvert[qtpng]"] +qtpng = ["pyqtwebengine (>=5.15)"] +serve = ["tornado (>=6.1)"] +test = ["flaky", "ipykernel", "ipywidgets (>=7)", "pre-commit", "pytest", "pytest-dependency"] +webpdf = ["playwright"] + +[[package]] +name = "nbformat" +version = "5.9.2" +description = "The Jupyter Notebook format" +optional = false +python-versions = ">=3.8" +files = [ + {file = "nbformat-5.9.2-py3-none-any.whl", hash = "sha256:1c5172d786a41b82bcfd0c23f9e6b6f072e8fb49c39250219e4acfff1efe89e9"}, + {file = "nbformat-5.9.2.tar.gz", hash = "sha256:5f98b5ba1997dff175e77e0c17d5c10a96eaed2cbd1de3533d1fc35d5e111192"}, +] + +[package.dependencies] +fastjsonschema = "*" +jsonschema = ">=2.6" +jupyter-core = "*" +traitlets = ">=5.1" + +[package.extras] +docs = ["myst-parser", "pydata-sphinx-theme", "sphinx", "sphinxcontrib-github-alt", "sphinxcontrib-spelling"] +test = ["pep440", "pre-commit", "pytest", "testpath"] + +[[package]] +name = "nbsphinx" +version = "0.9.3" +description = "Jupyter Notebook Tools for Sphinx" +optional = false +python-versions = ">=3.6" +files = [ + {file = "nbsphinx-0.9.3-py3-none-any.whl", hash = "sha256:6e805e9627f4a358bd5720d5cbf8bf48853989c79af557afd91a5f22e163029f"}, + {file = "nbsphinx-0.9.3.tar.gz", hash = "sha256:ec339c8691b688f8676104a367a4b8cf3ea01fd089dc28d24dec22d563b11562"}, +] + +[package.dependencies] +docutils = "*" +jinja2 = "*" +nbconvert = "!=5.4" +nbformat = "*" +sphinx = ">=1.8" +traitlets = ">=5" + +[[package]] +name = "nest-asyncio" +version = "1.5.8" +description = "Patch asyncio to allow nested event loops" +optional = true 
+python-versions = ">=3.5" +files = [ + {file = "nest_asyncio-1.5.8-py3-none-any.whl", hash = "sha256:accda7a339a70599cb08f9dd09a67e0c2ef8d8d6f4c07f96ab203f2ae254e48d"}, + {file = "nest_asyncio-1.5.8.tar.gz", hash = "sha256:25aa2ca0d2a5b5531956b9e273b45cf664cae2b145101d73b86b199978d48fdb"}, +] + +[[package]] +name = "notebook" +version = "7.0.4" +description = "Jupyter Notebook - A web-based notebook environment for interactive computing" +optional = true +python-versions = ">=3.8" +files = [ + {file = "notebook-7.0.4-py3-none-any.whl", hash = "sha256:ee738414ac01773c1ad6834cf76cc6f1ce140ac8197fd13b3e2d44d89e257f72"}, + {file = "notebook-7.0.4.tar.gz", hash = "sha256:0c1b458f72ce8774445c8ef9ed2492bd0b9ce9605ac996e2b066114f69795e71"}, +] + +[package.dependencies] +jupyter-server = ">=2.4.0,<3" +jupyterlab = ">=4.0.2,<5" +jupyterlab-server = ">=2.22.1,<3" +notebook-shim = ">=0.2,<0.3" +tornado = ">=6.2.0" + +[package.extras] +dev = ["hatch", "pre-commit"] +docs = ["myst-parser", "nbsphinx", "pydata-sphinx-theme", "sphinx (>=1.3.6)", "sphinxcontrib-github-alt", "sphinxcontrib-spelling"] +test = ["importlib-resources (>=5.0)", "ipykernel", "jupyter-server[test] (>=2.4.0,<3)", "jupyterlab-server[test] (>=2.22.1,<3)", "nbval", "pytest (>=7.0)", "pytest-console-scripts", "pytest-timeout", "pytest-tornasync", "requests"] + +[[package]] +name = "notebook-shim" +version = "0.2.3" +description = "A shim layer for notebook traits and config" +optional = true +python-versions = ">=3.7" +files = [ + {file = "notebook_shim-0.2.3-py3-none-any.whl", hash = "sha256:a83496a43341c1674b093bfcebf0fe8e74cbe7eda5fd2bbc56f8e39e1486c0c7"}, + {file = "notebook_shim-0.2.3.tar.gz", hash = "sha256:f69388ac283ae008cd506dda10d0288b09a017d822d5e8c7129a152cbd3ce7e9"}, +] + +[package.dependencies] +jupyter-server = ">=1.8,<3" + +[package.extras] +test = ["pytest", "pytest-console-scripts", "pytest-jupyter", "pytest-tornasync"] + +[[package]] +name = "nox" +version = "2023.4.22" +description = 
"Flexible test automation." +optional = false +python-versions = ">=3.7" +files = [ + {file = "nox-2023.4.22-py3-none-any.whl", hash = "sha256:0b1adc619c58ab4fa57d6ab2e7823fe47a32e70202f287d78474adcc7bda1891"}, + {file = "nox-2023.4.22.tar.gz", hash = "sha256:46c0560b0dc609d7d967dc99e22cb463d3c4caf54a5fda735d6c11b5177e3a9f"}, +] + +[package.dependencies] +argcomplete = ">=1.9.4,<4.0" +colorlog = ">=2.6.1,<7.0.0" +packaging = ">=20.9" +virtualenv = ">=14" + +[package.extras] +tox-to-nox = ["jinja2", "tox (<4)"] + +[[package]] +name = "nox-poetry" +version = "1.0.3" +description = "nox-poetry" +optional = false +python-versions = ">=3.7,<4.0" +files = [ + {file = "nox_poetry-1.0.3-py3-none-any.whl", hash = "sha256:a2fffeb70ae81840479e68287afe1c772bf376f70f1e92f99832a20b3c64d064"}, + {file = "nox_poetry-1.0.3.tar.gz", hash = "sha256:dc7ecbbd812a333a0c0b558f57e5b37f7c12926cddbcecaf2264957fd373824e"}, +] + +[package.dependencies] +nox = ">=2020.8.22" +packaging = ">=20.9" +tomlkit = ">=0.7" + +[[package]] +name = "numpy" +version = "1.24.4" +description = "Fundamental package for array computing in Python" +optional = false +python-versions = ">=3.8" +files = [ + {file = "numpy-1.24.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c0bfb52d2169d58c1cdb8cc1f16989101639b34c7d3ce60ed70b19c63eba0b64"}, + {file = "numpy-1.24.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ed094d4f0c177b1b8e7aa9cba7d6ceed51c0e569a5318ac0ca9a090680a6a1b1"}, + {file = "numpy-1.24.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:79fc682a374c4a8ed08b331bef9c5f582585d1048fa6d80bc6c35bc384eee9b4"}, + {file = "numpy-1.24.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7ffe43c74893dbf38c2b0a1f5428760a1a9c98285553c89e12d70a96a7f3a4d6"}, + {file = "numpy-1.24.4-cp310-cp310-win32.whl", hash = "sha256:4c21decb6ea94057331e111a5bed9a79d335658c27ce2adb580fb4d54f2ad9bc"}, + {file = "numpy-1.24.4-cp310-cp310-win_amd64.whl", hash = 
"sha256:b4bea75e47d9586d31e892a7401f76e909712a0fd510f58f5337bea9572c571e"}, + {file = "numpy-1.24.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f136bab9c2cfd8da131132c2cf6cc27331dd6fae65f95f69dcd4ae3c3639c810"}, + {file = "numpy-1.24.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e2926dac25b313635e4d6cf4dc4e51c8c0ebfed60b801c799ffc4c32bf3d1254"}, + {file = "numpy-1.24.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:222e40d0e2548690405b0b3c7b21d1169117391c2e82c378467ef9ab4c8f0da7"}, + {file = "numpy-1.24.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7215847ce88a85ce39baf9e89070cb860c98fdddacbaa6c0da3ffb31b3350bd5"}, + {file = "numpy-1.24.4-cp311-cp311-win32.whl", hash = "sha256:4979217d7de511a8d57f4b4b5b2b965f707768440c17cb70fbf254c4b225238d"}, + {file = "numpy-1.24.4-cp311-cp311-win_amd64.whl", hash = "sha256:b7b1fc9864d7d39e28f41d089bfd6353cb5f27ecd9905348c24187a768c79694"}, + {file = "numpy-1.24.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1452241c290f3e2a312c137a9999cdbf63f78864d63c79039bda65ee86943f61"}, + {file = "numpy-1.24.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:04640dab83f7c6c85abf9cd729c5b65f1ebd0ccf9de90b270cd61935eef0197f"}, + {file = "numpy-1.24.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5425b114831d1e77e4b5d812b69d11d962e104095a5b9c3b641a218abcc050e"}, + {file = "numpy-1.24.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd80e219fd4c71fc3699fc1dadac5dcf4fd882bfc6f7ec53d30fa197b8ee22dc"}, + {file = "numpy-1.24.4-cp38-cp38-win32.whl", hash = "sha256:4602244f345453db537be5314d3983dbf5834a9701b7723ec28923e2889e0bb2"}, + {file = "numpy-1.24.4-cp38-cp38-win_amd64.whl", hash = "sha256:692f2e0f55794943c5bfff12b3f56f99af76f902fc47487bdfe97856de51a706"}, + {file = "numpy-1.24.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:2541312fbf09977f3b3ad449c4e5f4bb55d0dbf79226d7724211acc905049400"}, + {file = 
"numpy-1.24.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9667575fb6d13c95f1b36aca12c5ee3356bf001b714fc354eb5465ce1609e62f"}, + {file = "numpy-1.24.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3a86ed21e4f87050382c7bc96571755193c4c1392490744ac73d660e8f564a9"}, + {file = "numpy-1.24.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d11efb4dbecbdf22508d55e48d9c8384db795e1b7b51ea735289ff96613ff74d"}, + {file = "numpy-1.24.4-cp39-cp39-win32.whl", hash = "sha256:6620c0acd41dbcb368610bb2f4d83145674040025e5536954782467100aa8835"}, + {file = "numpy-1.24.4-cp39-cp39-win_amd64.whl", hash = "sha256:befe2bf740fd8373cf56149a5c23a0f601e82869598d41f8e188a0e9869926f8"}, + {file = "numpy-1.24.4-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:31f13e25b4e304632a4619d0e0777662c2ffea99fcae2029556b17d8ff958aef"}, + {file = "numpy-1.24.4-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95f7ac6540e95bc440ad77f56e520da5bf877f87dca58bd095288dce8940532a"}, + {file = "numpy-1.24.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:e98f220aa76ca2a977fe435f5b04d7b3470c0a2e6312907b37ba6068f26787f2"}, + {file = "numpy-1.24.4.tar.gz", hash = "sha256:80f5e3a4e498641401868df4208b74581206afbee7cf7b8329daae82676d9463"}, +] + +[[package]] +name = "nvidia-ml-py" +version = "12.535.108" +description = "Python Bindings for the NVIDIA Management Library" +optional = true +python-versions = "*" +files = [ + {file = "nvidia-ml-py-12.535.108.tar.gz", hash = "sha256:141fe818771a165fb93f75dbe7f01f767c3bafa7c13f6876f53583511b078ee1"}, + {file = "nvidia_ml_py-12.535.108-py3-none-any.whl", hash = "sha256:f4e260ad0adb06d7ca1ea5574862ed4ef70f0a17720836854594fe188a3acaf4"}, +] + +[[package]] +name = "overrides" +version = "7.4.0" +description = "A decorator to automatically detect mismatch when overriding a method." 
+optional = true +python-versions = ">=3.6" +files = [ + {file = "overrides-7.4.0-py3-none-any.whl", hash = "sha256:3ad24583f86d6d7a49049695efe9933e67ba62f0c7625d53c59fa832ce4b8b7d"}, + {file = "overrides-7.4.0.tar.gz", hash = "sha256:9502a3cca51f4fac40b5feca985b6703a5c1f6ad815588a7ca9e285b9dca6757"}, +] + +[[package]] +name = "packaging" +version = "23.1" +description = "Core utilities for Python packages" +optional = false +python-versions = ">=3.7" +files = [ + {file = "packaging-23.1-py3-none-any.whl", hash = "sha256:994793af429502c4ea2ebf6bf664629d07c1a9fe974af92966e4b8d2df7edc61"}, + {file = "packaging-23.1.tar.gz", hash = "sha256:a392980d2b6cffa644431898be54b0045151319d1e7ec34f0cfed48767dd334f"}, +] + +[[package]] +name = "pandas" +version = "1.5.3" +description = "Powerful data structures for data analysis, time series, and statistics" +optional = false +python-versions = ">=3.8" +files = [ + {file = "pandas-1.5.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:3749077d86e3a2f0ed51367f30bf5b82e131cc0f14260c4d3e499186fccc4406"}, + {file = "pandas-1.5.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:972d8a45395f2a2d26733eb8d0f629b2f90bebe8e8eddbb8829b180c09639572"}, + {file = "pandas-1.5.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:50869a35cbb0f2e0cd5ec04b191e7b12ed688874bd05dd777c19b28cbea90996"}, + {file = "pandas-1.5.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c3ac844a0fe00bfaeb2c9b51ab1424e5c8744f89860b138434a363b1f620f354"}, + {file = "pandas-1.5.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7a0a56cef15fd1586726dace5616db75ebcfec9179a3a55e78f72c5639fa2a23"}, + {file = "pandas-1.5.3-cp310-cp310-win_amd64.whl", hash = "sha256:478ff646ca42b20376e4ed3fa2e8d7341e8a63105586efe54fa2508ee087f328"}, + {file = "pandas-1.5.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:6973549c01ca91ec96199e940495219c887ea815b2083722821f1d7abfa2b4dc"}, + {file = 
"pandas-1.5.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c39a8da13cede5adcd3be1182883aea1c925476f4e84b2807a46e2775306305d"}, + {file = "pandas-1.5.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f76d097d12c82a535fda9dfe5e8dd4127952b45fea9b0276cb30cca5ea313fbc"}, + {file = "pandas-1.5.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e474390e60ed609cec869b0da796ad94f420bb057d86784191eefc62b65819ae"}, + {file = "pandas-1.5.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5f2b952406a1588ad4cad5b3f55f520e82e902388a6d5a4a91baa8d38d23c7f6"}, + {file = "pandas-1.5.3-cp311-cp311-win_amd64.whl", hash = "sha256:bc4c368f42b551bf72fac35c5128963a171b40dce866fb066540eeaf46faa003"}, + {file = "pandas-1.5.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:14e45300521902689a81f3f41386dc86f19b8ba8dd5ac5a3c7010ef8d2932813"}, + {file = "pandas-1.5.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9842b6f4b8479e41968eced654487258ed81df7d1c9b7b870ceea24ed9459b31"}, + {file = "pandas-1.5.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:26d9c71772c7afb9d5046e6e9cf42d83dd147b5cf5bcb9d97252077118543792"}, + {file = "pandas-1.5.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5fbcb19d6fceb9e946b3e23258757c7b225ba450990d9ed63ccceeb8cae609f7"}, + {file = "pandas-1.5.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:565fa34a5434d38e9d250af3c12ff931abaf88050551d9fbcdfafca50d62babf"}, + {file = "pandas-1.5.3-cp38-cp38-win32.whl", hash = "sha256:87bd9c03da1ac870a6d2c8902a0e1fd4267ca00f13bc494c9e5a9020920e1d51"}, + {file = "pandas-1.5.3-cp38-cp38-win_amd64.whl", hash = "sha256:41179ce559943d83a9b4bbacb736b04c928b095b5f25dd2b7389eda08f46f373"}, + {file = "pandas-1.5.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:c74a62747864ed568f5a82a49a23a8d7fe171d0c69038b38cedf0976831296fa"}, + {file = "pandas-1.5.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = 
"sha256:c4c00e0b0597c8e4f59e8d461f797e5d70b4d025880516a8261b2817c47759ee"}, + {file = "pandas-1.5.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a50d9a4336a9621cab7b8eb3fb11adb82de58f9b91d84c2cd526576b881a0c5a"}, + {file = "pandas-1.5.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dd05f7783b3274aa206a1af06f0ceed3f9b412cf665b7247eacd83be41cf7bf0"}, + {file = "pandas-1.5.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9f69c4029613de47816b1bb30ff5ac778686688751a5e9c99ad8c7031f6508e5"}, + {file = "pandas-1.5.3-cp39-cp39-win32.whl", hash = "sha256:7cec0bee9f294e5de5bbfc14d0573f65526071029d036b753ee6507d2a21480a"}, + {file = "pandas-1.5.3-cp39-cp39-win_amd64.whl", hash = "sha256:dfd681c5dc216037e0b0a2c821f5ed99ba9f03ebcf119c7dac0e9a7b960b9ec9"}, + {file = "pandas-1.5.3.tar.gz", hash = "sha256:74a3fd7e5a7ec052f183273dc7b0acd3a863edf7520f5d3a1765c04ffdb3b0b1"}, +] + +[package.dependencies] +numpy = [ + {version = ">=1.20.3", markers = "python_version < \"3.10\""}, + {version = ">=1.23.2", markers = "python_version >= \"3.11\""}, + {version = ">=1.21.0", markers = "python_version >= \"3.10\" and python_version < \"3.11\""}, +] +python-dateutil = ">=2.8.1" +pytz = ">=2020.1" + +[package.extras] +test = ["hypothesis (>=5.5.3)", "pytest (>=6.0)", "pytest-xdist (>=1.31)"] + +[[package]] +name = "pandocfilters" +version = "1.5.0" +description = "Utilities for writing pandoc filters in python" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +files = [ + {file = "pandocfilters-1.5.0-py2.py3-none-any.whl", hash = "sha256:33aae3f25fd1a026079f5d27bdd52496f0e0803b3469282162bafdcbdf6ef14f"}, + {file = "pandocfilters-1.5.0.tar.gz", hash = "sha256:0b679503337d233b4339a817bfc8c50064e2eff681314376a47cb582305a7a38"}, +] + +[[package]] +name = "parso" +version = "0.8.3" +description = "A Python Parser" +optional = false +python-versions = ">=3.6" +files = [ + {file = 
"parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"}, + {file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"}, +] + +[package.extras] +qa = ["flake8 (==3.8.3)", "mypy (==0.782)"] +testing = ["docopt", "pytest (<6.0.0)"] + +[[package]] +name = "pbr" +version = "5.11.1" +description = "Python Build Reasonableness" +optional = false +python-versions = ">=2.6" +files = [ + {file = "pbr-5.11.1-py2.py3-none-any.whl", hash = "sha256:567f09558bae2b3ab53cb3c1e2e33e726ff3338e7bae3db5dc954b3a44eef12b"}, + {file = "pbr-5.11.1.tar.gz", hash = "sha256:aefc51675b0b533d56bb5fd1c8c6c0522fe31896679882e1c4c63d5e4a0fccb3"}, +] + +[[package]] +name = "pep440" +version = "0.1.2" +description = "A simple package with utils to check whether versions number match PEP 440." +optional = false +python-versions = ">=3.7" +files = [ + {file = "pep440-0.1.2-py3-none-any.whl", hash = "sha256:36d6ad73f2b5d07769294cafe183500ac89d848c922a3d3f521b968481880d51"}, + {file = "pep440-0.1.2.tar.gz", hash = "sha256:58b37246cc2b13fee1ca2a3c092cb3704d21ecf621a5bdbb168e44e697f6d04d"}, +] + +[package.extras] +lint = ["check-manifest", "mypy"] +test = ["pytest", "pytest-console-scripts", "pytest-cov"] + +[[package]] +name = "pexpect" +version = "4.8.0" +description = "Pexpect allows easy control of interactive console applications." 
+optional = false +python-versions = "*" +files = [ + {file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"}, + {file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"}, +] + +[package.dependencies] +ptyprocess = ">=0.5" + +[[package]] +name = "pickleshare" +version = "0.7.5" +description = "Tiny 'shelve'-like database with concurrency support" +optional = false +python-versions = "*" +files = [ + {file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"}, + {file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"}, +] + +[[package]] +name = "pkgutil-resolve-name" +version = "1.3.10" +description = "Resolve a name to an object." +optional = false +python-versions = ">=3.6" +files = [ + {file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"}, + {file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"}, +] + +[[package]] +name = "platformdirs" +version = "3.10.0" +description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"." 
+optional = false +python-versions = ">=3.7" +files = [ + {file = "platformdirs-3.10.0-py3-none-any.whl", hash = "sha256:d7c24979f292f916dc9cbf8648319032f551ea8c49a4c9bf2fb556a02070ec1d"}, + {file = "platformdirs-3.10.0.tar.gz", hash = "sha256:b45696dab2d7cc691a3226759c0d3b00c47c8b6e293d96f6436f733303f77f6d"}, +] + +[package.extras] +docs = ["furo (>=2023.7.26)", "proselint (>=0.13)", "sphinx (>=7.1.1)", "sphinx-autodoc-typehints (>=1.24)"] +test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=7.4)", "pytest-cov (>=4.1)", "pytest-mock (>=3.11.1)"] + +[[package]] +name = "pluggy" +version = "1.3.0" +description = "plugin and hook calling mechanisms for python" +optional = false +python-versions = ">=3.8" +files = [ + {file = "pluggy-1.3.0-py3-none-any.whl", hash = "sha256:d89c696a773f8bd377d18e5ecda92b7a3793cbe66c87060a6fb58c7b6e1061f7"}, + {file = "pluggy-1.3.0.tar.gz", hash = "sha256:cf61ae8f126ac6f7c451172cf30e3e43d3ca77615509771b3a984a0730651e12"}, +] + +[package.extras] +dev = ["pre-commit", "tox"] +testing = ["pytest", "pytest-benchmark"] + +[[package]] +name = "prometheus-client" +version = "0.17.1" +description = "Python client for the Prometheus monitoring system." 
+optional = true +python-versions = ">=3.6" +files = [ + {file = "prometheus_client-0.17.1-py3-none-any.whl", hash = "sha256:e537f37160f6807b8202a6fc4764cdd19bac5480ddd3e0d463c3002b34462101"}, + {file = "prometheus_client-0.17.1.tar.gz", hash = "sha256:21e674f39831ae3f8acde238afd9a27a37d0d2fb5a28ea094f0ce25d2cbf2091"}, +] + +[package.extras] +twisted = ["twisted"] + +[[package]] +name = "prompt-toolkit" +version = "3.0.39" +description = "Library for building powerful interactive command lines in Python" +optional = false +python-versions = ">=3.7.0" +files = [ + {file = "prompt_toolkit-3.0.39-py3-none-any.whl", hash = "sha256:9dffbe1d8acf91e3de75f3b544e4842382fc06c6babe903ac9acb74dc6e08d88"}, + {file = "prompt_toolkit-3.0.39.tar.gz", hash = "sha256:04505ade687dc26dc4284b1ad19a83be2f2afe83e7a828ace0c72f3a1df72aac"}, +] + +[package.dependencies] +wcwidth = "*" + +[[package]] +name = "psutil" +version = "5.9.5" +description = "Cross-platform lib for process and system monitoring in Python." 
+optional = true +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +files = [ + {file = "psutil-5.9.5-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:be8929ce4313f9f8146caad4272f6abb8bf99fc6cf59344a3167ecd74f4f203f"}, + {file = "psutil-5.9.5-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ab8ed1a1d77c95453db1ae00a3f9c50227ebd955437bcf2a574ba8adbf6a74d5"}, + {file = "psutil-5.9.5-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:4aef137f3345082a3d3232187aeb4ac4ef959ba3d7c10c33dd73763fbc063da4"}, + {file = "psutil-5.9.5-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ea8518d152174e1249c4f2a1c89e3e6065941df2fa13a1ab45327716a23c2b48"}, + {file = "psutil-5.9.5-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:acf2aef9391710afded549ff602b5887d7a2349831ae4c26be7c807c0a39fac4"}, + {file = "psutil-5.9.5-cp27-none-win32.whl", hash = "sha256:5b9b8cb93f507e8dbaf22af6a2fd0ccbe8244bf30b1baad6b3954e935157ae3f"}, + {file = "psutil-5.9.5-cp27-none-win_amd64.whl", hash = "sha256:8c5f7c5a052d1d567db4ddd231a9d27a74e8e4a9c3f44b1032762bd7b9fdcd42"}, + {file = "psutil-5.9.5-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:3c6f686f4225553615612f6d9bc21f1c0e305f75d7d8454f9b46e901778e7217"}, + {file = "psutil-5.9.5-cp36-abi3-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7a7dd9997128a0d928ed4fb2c2d57e5102bb6089027939f3b722f3a210f9a8da"}, + {file = "psutil-5.9.5-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:89518112647f1276b03ca97b65cc7f64ca587b1eb0278383017c2a0dcc26cbe4"}, + {file = "psutil-5.9.5-cp36-abi3-win32.whl", hash = "sha256:104a5cc0e31baa2bcf67900be36acde157756b9c44017b86b2c049f11957887d"}, + {file = "psutil-5.9.5-cp36-abi3-win_amd64.whl", hash = "sha256:b258c0c1c9d145a1d5ceffab1134441c4c5113b2417fafff7315a917a026c3c9"}, + {file = "psutil-5.9.5-cp38-abi3-macosx_11_0_arm64.whl", hash = 
"sha256:c607bb3b57dc779d55e1554846352b4e358c10fff3abf3514a7a6601beebdb30"}, + {file = "psutil-5.9.5.tar.gz", hash = "sha256:5410638e4df39c54d957fc51ce03048acd8e6d60abc0f5107af51e5fb566eb3c"}, +] + +[package.extras] +test = ["enum34", "ipaddress", "mock", "pywin32", "wmi"] + +[[package]] +name = "ptyprocess" +version = "0.7.0" +description = "Run a subprocess in a pseudo terminal" +optional = false +python-versions = "*" +files = [ + {file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"}, + {file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"}, +] + +[[package]] +name = "pure-eval" +version = "0.2.2" +description = "Safely evaluate AST nodes without side effects" +optional = false +python-versions = "*" +files = [ + {file = "pure_eval-0.2.2-py3-none-any.whl", hash = "sha256:01eaab343580944bc56080ebe0a674b39ec44a945e6d09ba7db3cb8cec289350"}, + {file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"}, +] + +[package.extras] +tests = ["pytest"] + +[[package]] +name = "pycparser" +version = "2.21" +description = "C parser in Python" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +files = [ + {file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"}, + {file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"}, +] + +[[package]] +name = "pycuda" +version = "2022.2.2" +description = "Python wrapper for Nvidia CUDA" +optional = true +python-versions = "~=3.8" +files = [ + {file = "pycuda-2022.2.2.tar.gz", hash = "sha256:cd92e7246bb45ac3452955a110714112674cdf3b4a9e2f4ff25a4159c684e6bb"}, +] + +[package.dependencies] +appdirs = ">=1.4.0" +mako = "*" +pytools = ">=2011.2" + +[[package]] +name = "pygments" +version = 
"2.16.1" +description = "Pygments is a syntax highlighting package written in Python." +optional = false +python-versions = ">=3.7" +files = [ + {file = "Pygments-2.16.1-py3-none-any.whl", hash = "sha256:13fc09fa63bc8d8671a6d247e1eb303c4b343eaee81d861f3404db2935653692"}, + {file = "Pygments-2.16.1.tar.gz", hash = "sha256:1daff0494820c69bc8941e407aa20f577374ee88364ee10a98fdbe0aece96e29"}, +] + +[package.extras] +plugins = ["importlib-metadata"] + +[[package]] +name = "pyhip-interface" +version = "0.1.2" +description = "Python Interface to HIP and hiprtc Library" +optional = true +python-versions = "*" +files = [ + {file = "pyhip-interface-0.1.2.tar.gz", hash = "sha256:0a19f4c2a6ae1ece88d537b8890523d149a12d676591b2ba073ff3ec9b11dfbb"}, +] + +[[package]] +name = "pynvml" +version = "11.5.0" +description = "Python Bindings for the NVIDIA Management Library" +optional = true +python-versions = ">=3.6" +files = [ + {file = "pynvml-11.5.0-py3-none-any.whl", hash = "sha256:5cce014ac01b098d08f06178f86c37be409b80b2e903a5a03ce15eed60f55e25"}, + {file = "pynvml-11.5.0.tar.gz", hash = "sha256:d027b21b95b1088b9fc278117f9f61b7c67f8e33a787e9f83f735f0f71ac32d0"}, +] + +[[package]] +name = "pyopencl" +version = "2023.1.2" +description = "Python wrapper for OpenCL" +optional = true +python-versions = "~=3.8" +files = [ + {file = "pyopencl-2023.1.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:8c69c7595e1bab949e4702dfdaad9ae97005cc071dc42eb67c2b0c3aed1cbaac"}, + {file = "pyopencl-2023.1.2-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9cd95634f772c45b8cf48c2c3b1c5018e7e5aee35f7f3a3d3c514d92b19c13ae"}, + {file = "pyopencl-2023.1.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c34f4a912525819a2427bf0dca3b1b540b69321bd48c578c0fdfd0ff4ac98c94"}, + {file = "pyopencl-2023.1.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:be7fbeee10b6f5de54043efda430d6b940efcd3976c14beea7740c0b6d2679ea"}, + {file = 
"pyopencl-2023.1.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:ded99985c2169142f3ad6fd97ff2b173ab0755f8c83796d8182d89953a47622b"}, + {file = "pyopencl-2023.1.2-cp310-cp310-win_amd64.whl", hash = "sha256:f36c9e4ca6f82f3d4d514ab256405595ded7a9d3a9615002ca270dd0e9690a04"}, + {file = "pyopencl-2023.1.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:9d5633e9081d4a08f7b93a0e79eae003c5deb48d30504ce33304090050ba2d54"}, + {file = "pyopencl-2023.1.2-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:deee455f432a95b8c649e1bc8f5c8228aaa94d359aa31416830521c6d7a0a264"}, + {file = "pyopencl-2023.1.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8834cedaf9b0fbef1bb0bdbe24d24262c61c4ee23f4363bb1aef5dd0753152af"}, + {file = "pyopencl-2023.1.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:77196615fc3b0c69ee9468e5e8132b1e0e8b727d709e3a193fd1c7f8944f2d34"}, + {file = "pyopencl-2023.1.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:6917dc9cf7420df5d52280e2c6d5d95f3008c5c2f1e13842b825bbf4bf2a462e"}, + {file = "pyopencl-2023.1.2-cp311-cp311-win_amd64.whl", hash = "sha256:056e5753ce48e8f5e3ac421c845b068afa8f11877098beab5093ad48fe7f7e27"}, + {file = "pyopencl-2023.1.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d6e822f2f91b9c6876ed672139ba531c7d8e530d10dd19961b0c7335890bbd74"}, + {file = "pyopencl-2023.1.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:367fa10516ed80c308f02efd75ec5f60604543710ea06cb1c0e3553574513602"}, + {file = "pyopencl-2023.1.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9756e06ce74919679b7c4c4b9e58a9e57928d47d77112a5bd2598b840dd49ccb"}, + {file = "pyopencl-2023.1.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a09ad6fddef2cb8179ddfdee3a041141912c0eef9677440ac84dff8a06d0b1ea"}, + {file = "pyopencl-2023.1.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:b96c23353214cd12761963d5c866f2ff617ff746653ea36060f3a911bf7a020c"}, + 
{file = "pyopencl-2023.1.2-cp38-cp38-win_amd64.whl", hash = "sha256:503e5d4a3c6bf7c258e0619eb30835e0e7e42565237e2f6e21a68d34d9d350ac"}, + {file = "pyopencl-2023.1.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:458272e79c74c9d12bbb3ae2b9ab97f5109f60ab97f2e99c6e492007ce01b9ed"}, + {file = "pyopencl-2023.1.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bbd7a49ad050807ebbc1fa64815c9cf22fbd73b5f7638852fe68d9b9dde075be"}, + {file = "pyopencl-2023.1.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ea121d1307419c89ecb766e416228fc831b4dc1c936329d870e6e55cfd26f13"}, + {file = "pyopencl-2023.1.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a168d96b2e0c5ddb414a74d4d045b82b2f3d0630527a7922c03ac7739eb3632"}, + {file = "pyopencl-2023.1.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:3631eefd30966ad35b07b6ceeb2c59ac8cee2e823534b1d3fa97acbb91a41399"}, + {file = "pyopencl-2023.1.2-cp39-cp39-win_amd64.whl", hash = "sha256:9f11ede4c039c8a472f23be6f8e62704986cba019e8a13a8a5af4d8b606a783c"}, + {file = "pyopencl-2023.1.2-pp310-pypy310_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e01b88b873ae644e6edc05d4a6d935e9da371ae46b1a78c4022c23046401a4bb"}, + {file = "pyopencl-2023.1.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:311b14ac9def1538fe7d4b01dd00d6e6e8b296fdfe6a7bacef739552bd477653"}, + {file = "pyopencl-2023.1.2-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ffdb4cbcb795fca5a27be101f02e85aca24637471e9b53303f232e293646d72b"}, + {file = "pyopencl-2023.1.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0b02ac162218cbabe5287244fb7cea906ca4d3d6716d8fc952526e1b55ad63d9"}, + {file = "pyopencl-2023.1.2-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d67b55c92c769da5b994eb2f56e712d48cfdf4b534f8df20c6ab74346129357d"}, + {file = 
"pyopencl-2023.1.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:068cd66ea53c98c745614697ccafd147e10bffaa761feccd3b1e90fa7cdc483c"}, + {file = "pyopencl-2023.1.2.tar.gz", hash = "sha256:eb00cd574049d592b679dcf8bfe7ab4a36c94a39fd1acb1a6b45d6c0d7be9a68"}, +] + +[package.dependencies] +numpy = "*" +platformdirs = ">=2.2.0" +pytools = ">=2021.2.7" + +[package.extras] +oclgrind = ["oclgrind-binary-distribution (>=18.3)"] +pocl = ["pocl-binary-distribution (>=1.2)"] +test = ["Mako", "pytest (>=7.0.0)"] + +[[package]] +name = "pyparsing" +version = "3.1.1" +description = "pyparsing module - Classes and methods to define and execute parsing grammars" +optional = true +python-versions = ">=3.6.8" +files = [ + {file = "pyparsing-3.1.1-py3-none-any.whl", hash = "sha256:32c7c0b711493c72ff18a981d24f28aaf9c1fb7ed5e9667c9e84e3db623bdbfb"}, + {file = "pyparsing-3.1.1.tar.gz", hash = "sha256:ede28a1a32462f5a9705e07aea48001a08f7cf81a021585011deba701581a0db"}, +] + +[package.extras] +diagrams = ["jinja2", "railroad-diagrams"] + +[[package]] +name = "pytest" +version = "7.4.2" +description = "pytest: simple powerful testing with Python" +optional = false +python-versions = ">=3.7" +files = [ + {file = "pytest-7.4.2-py3-none-any.whl", hash = "sha256:1d881c6124e08ff0a1bb75ba3ec0bfd8b5354a01c194ddd5a0a870a48d99b002"}, + {file = "pytest-7.4.2.tar.gz", hash = "sha256:a766259cfab564a2ad52cb1aae1b881a75c3eb7e34ca3779697c23ed47c47069"}, +] + +[package.dependencies] +colorama = {version = "*", markers = "sys_platform == \"win32\""} +exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""} +iniconfig = "*" +packaging = "*" +pluggy = ">=0.12,<2.0" +tomli = {version = ">=1.0.0", markers = "python_version < \"3.11\""} + +[package.extras] +testing = ["argcomplete", "attrs (>=19.2.0)", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"] + +[[package]] +name = "pytest-cov" +version = "4.1.0" 
+description = "Pytest plugin for measuring coverage." +optional = false +python-versions = ">=3.7" +files = [ + {file = "pytest-cov-4.1.0.tar.gz", hash = "sha256:3904b13dfbfec47f003b8e77fd5b589cd11904a21ddf1ab38a64f204d6a10ef6"}, + {file = "pytest_cov-4.1.0-py3-none-any.whl", hash = "sha256:6ba70b9e97e69fcc3fb45bfeab2d0a138fb65c4d0d6a41ef33983ad114be8c3a"}, +] + +[package.dependencies] +coverage = {version = ">=5.2.1", extras = ["toml"]} +pytest = ">=4.6" + +[package.extras] +testing = ["fields", "hunter", "process-tests", "pytest-xdist", "six", "virtualenv"] + +[[package]] +name = "python-constraint2" +version = "2.0.0b3" +description = "python-constraint is a module for efficiently solving CSPs (Constraint Solving Problems) over finite domains." +optional = false +python-versions = ">=3.8,<3.12" +files = [ + {file = "python_constraint2-2.0.0b3-cp310-cp310-macosx_12_0_x86_64.whl", hash = "sha256:9098abc3cc5216e1b9b893811542d2775ddec6b12981c2af95c2b729c872440b"}, + {file = "python_constraint2-2.0.0b3-cp310-cp310-manylinux_2_35_x86_64.whl", hash = "sha256:9b21181000f6fc0bcdf459ef1571332170c4fab8911887df50328b72eecf0152"}, + {file = "python_constraint2-2.0.0b3-cp310-cp310-win_amd64.whl", hash = "sha256:f4a6f1a4521af1853345702be33ae04fac8539571cafd6ba984da775d0d997ed"}, + {file = "python_constraint2-2.0.0b3-cp311-cp311-macosx_12_0_x86_64.whl", hash = "sha256:345f9217dced0412d523a573900a21d0d0c3d70fa2a179a8442f464bebf61347"}, + {file = "python_constraint2-2.0.0b3-cp311-cp311-manylinux_2_35_x86_64.whl", hash = "sha256:01fe54bafe7d1ef8db2408d1c74f8ca7f30feb38df14b98a5003e00379ff1741"}, + {file = "python_constraint2-2.0.0b3-cp311-cp311-win_amd64.whl", hash = "sha256:2933cfdf4f6ad8e9de99dd196df40f8d6a1740d49104d379195f71ab07ecde45"}, + {file = "python_constraint2-2.0.0b3-cp38-cp38-macosx_12_0_x86_64.whl", hash = "sha256:459e57d12bfd436af551099b37ffb90fc6f40e71f7d915fec551eaf46808a491"}, + {file = "python_constraint2-2.0.0b3-cp38-cp38-manylinux_2_35_x86_64.whl", hash = 
"sha256:0d829bc47d0c7921791293e18aec35f5d4021ad6c4127f79901890f9263b8ea2"}, + {file = "python_constraint2-2.0.0b3-cp38-cp38-win_amd64.whl", hash = "sha256:d4db079c52b4307c35a58681a74fa58a27728c5adbcf5ce0cefafeade85a09a3"}, + {file = "python_constraint2-2.0.0b3-cp39-cp39-macosx_12_0_x86_64.whl", hash = "sha256:5b8542de5420282690c94965b49f016981a72b12bec508557a927839af9007e4"}, + {file = "python_constraint2-2.0.0b3-cp39-cp39-manylinux_2_35_x86_64.whl", hash = "sha256:129841a58bed0f20be48c158a7063a455984916dcd43581d214388781179dca0"}, + {file = "python_constraint2-2.0.0b3-cp39-cp39-win_amd64.whl", hash = "sha256:864705d2896a5051ffee6185750f8530957d839e263dba71766cdb5df0d5a337"}, + {file = "python_constraint2-2.0.0b3.tar.gz", hash = "sha256:ab80ef97b96ff76ee71d965f130a427a89e0f80a27c09c0b76686a028fffb4e9"}, +] + +[[package]] +name = "python-dateutil" +version = "2.8.2" +description = "Extensions to the standard Python datetime module" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7" +files = [ + {file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"}, + {file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"}, +] + +[package.dependencies] +six = ">=1.5" + +[[package]] +name = "python-json-logger" +version = "2.0.7" +description = "A python library adding a json log formatter" +optional = true +python-versions = ">=3.6" +files = [ + {file = "python-json-logger-2.0.7.tar.gz", hash = "sha256:23e7ec02d34237c5aa1e29a070193a4ea87583bb4e7f8fd06d3de8264c4b2e1c"}, + {file = "python_json_logger-2.0.7-py3-none-any.whl", hash = "sha256:f380b826a991ebbe3de4d897aeec42760035ac760345e57b812938dc8b35e2bd"}, +] + +[[package]] +name = "pytools" +version = "2023.1.1" +description = "A collection of tools for Python" +optional = true +python-versions = "~=3.8" +files = [ + {file = "pytools-2023.1.1-py2.py3-none-any.whl", hash = 
"sha256:53b98e5d6c01a90e343f8be2f5271e94204a210ef3e74fbefa3d47ec7480f150"}, + {file = "pytools-2023.1.1.tar.gz", hash = "sha256:80637873d206f6bcedf7cdb46ad93e868acb4ea2256db052dfcca872bdd0321f"}, +] + +[package.dependencies] +platformdirs = ">=2.2.0" +typing-extensions = {version = ">=4.0", markers = "python_version < \"3.11\""} + +[package.extras] +numpy = ["numpy (>=1.6.0)"] + +[[package]] +name = "pytz" +version = "2023.3.post1" +description = "World timezone definitions, modern and historical" +optional = false +python-versions = "*" +files = [ + {file = "pytz-2023.3.post1-py2.py3-none-any.whl", hash = "sha256:ce42d816b81b68506614c11e8937d3aa9e41007ceb50bfdcb0749b921bf646c7"}, + {file = "pytz-2023.3.post1.tar.gz", hash = "sha256:7b4fddbeb94a1eba4b557da24f19fdf9db575192544270a9101d8509f9f43d7b"}, +] + +[[package]] +name = "pywin32" +version = "306" +description = "Python for Window Extensions" +optional = false +python-versions = "*" +files = [ + {file = "pywin32-306-cp310-cp310-win32.whl", hash = "sha256:06d3420a5155ba65f0b72f2699b5bacf3109f36acbe8923765c22938a69dfc8d"}, + {file = "pywin32-306-cp310-cp310-win_amd64.whl", hash = "sha256:84f4471dbca1887ea3803d8848a1616429ac94a4a8d05f4bc9c5dcfd42ca99c8"}, + {file = "pywin32-306-cp311-cp311-win32.whl", hash = "sha256:e65028133d15b64d2ed8f06dd9fbc268352478d4f9289e69c190ecd6818b6407"}, + {file = "pywin32-306-cp311-cp311-win_amd64.whl", hash = "sha256:a7639f51c184c0272e93f244eb24dafca9b1855707d94c192d4a0b4c01e1100e"}, + {file = "pywin32-306-cp311-cp311-win_arm64.whl", hash = "sha256:70dba0c913d19f942a2db25217d9a1b726c278f483a919f1abfed79c9cf64d3a"}, + {file = "pywin32-306-cp312-cp312-win32.whl", hash = "sha256:383229d515657f4e3ed1343da8be101000562bf514591ff383ae940cad65458b"}, + {file = "pywin32-306-cp312-cp312-win_amd64.whl", hash = "sha256:37257794c1ad39ee9be652da0462dc2e394c8159dfd913a8a4e8eb6fd346da0e"}, + {file = "pywin32-306-cp312-cp312-win_arm64.whl", hash = 
"sha256:5821ec52f6d321aa59e2db7e0a35b997de60c201943557d108af9d4ae1ec7040"}, + {file = "pywin32-306-cp37-cp37m-win32.whl", hash = "sha256:1c73ea9a0d2283d889001998059f5eaaba3b6238f767c9cf2833b13e6a685f65"}, + {file = "pywin32-306-cp37-cp37m-win_amd64.whl", hash = "sha256:72c5f621542d7bdd4fdb716227be0dd3f8565c11b280be6315b06ace35487d36"}, + {file = "pywin32-306-cp38-cp38-win32.whl", hash = "sha256:e4c092e2589b5cf0d365849e73e02c391c1349958c5ac3e9d5ccb9a28e017b3a"}, + {file = "pywin32-306-cp38-cp38-win_amd64.whl", hash = "sha256:e8ac1ae3601bee6ca9f7cb4b5363bf1c0badb935ef243c4733ff9a393b1690c0"}, + {file = "pywin32-306-cp39-cp39-win32.whl", hash = "sha256:e25fd5b485b55ac9c057f67d94bc203f3f6595078d1fb3b458c9c28b7153a802"}, + {file = "pywin32-306-cp39-cp39-win_amd64.whl", hash = "sha256:39b61c15272833b5c329a2989999dcae836b1eed650252ab1b7bfbe1d59f30f4"}, +] + +[[package]] +name = "pywinpty" +version = "2.0.11" +description = "Pseudo terminal support for Windows from Python." +optional = true +python-versions = ">=3.8" +files = [ + {file = "pywinpty-2.0.11-cp310-none-win_amd64.whl", hash = "sha256:452f10ac9ff8ab9151aa8cea9e491a9612a12250b1899278c6a56bc184afb47f"}, + {file = "pywinpty-2.0.11-cp311-none-win_amd64.whl", hash = "sha256:6701867d42aec1239bc0fedf49a336570eb60eb886e81763db77ea2b6c533cc3"}, + {file = "pywinpty-2.0.11-cp38-none-win_amd64.whl", hash = "sha256:0ffd287751ad871141dc9724de70ea21f7fc2ff1af50861e0d232cf70739d8c4"}, + {file = "pywinpty-2.0.11-cp39-none-win_amd64.whl", hash = "sha256:e4e7f023c28ca7aa8e1313e53ba80a4d10171fe27857b7e02f99882dfe3e8638"}, + {file = "pywinpty-2.0.11.tar.gz", hash = "sha256:e244cffe29a894876e2cd251306efd0d8d64abd5ada0a46150a4a71c0b9ad5c5"}, +] + +[[package]] +name = "pyyaml" +version = "6.0.1" +description = "YAML parser and emitter for Python" +optional = true +python-versions = ">=3.6" +files = [ + {file = "PyYAML-6.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = 
"sha256:d858aa552c999bc8a8d57426ed01e40bef403cd8ccdd0fc5f6f04a00414cac2a"}, + {file = "PyYAML-6.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fd66fc5d0da6d9815ba2cebeb4205f95818ff4b79c3ebe268e75d961704af52f"}, + {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"}, + {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"}, + {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"}, + {file = "PyYAML-6.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:326c013efe8048858a6d312ddd31d56e468118ad4cdeda36c719bf5bb6192290"}, + {file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"}, + {file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"}, + {file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"}, + {file = "PyYAML-6.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f003ed9ad21d6a4713f0a9b5a7a0a79e08dd0f221aff4525a2be4c346ee60aab"}, + {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"}, + {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"}, + {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"}, + {file = "PyYAML-6.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = 
"sha256:e7d73685e87afe9f3b36c799222440d6cf362062f78be1013661b00c5c6f678b"}, + {file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"}, + {file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"}, + {file = "PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28"}, + {file = "PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9"}, + {file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0"}, + {file = "PyYAML-6.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4"}, + {file = "PyYAML-6.0.1-cp312-cp312-win32.whl", hash = "sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54"}, + {file = "PyYAML-6.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:0d3304d8c0adc42be59c5f8a4d9e3d7379e6955ad754aa9d6ab7a398b59dd1df"}, + {file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"}, + {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"}, + {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"}, + {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:afd7e57eddb1a54f0f1a974bc4391af8bcce0b444685d936840f125cf046d5bd"}, + {file = "PyYAML-6.0.1-cp36-cp36m-win32.whl", hash = "sha256:fca0e3a251908a499833aa292323f32437106001d436eca0e6e7833256674585"}, + {file = 
"PyYAML-6.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:f22ac1c3cac4dbc50079e965eba2c1058622631e526bd9afd45fedd49ba781fa"}, + {file = "PyYAML-6.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b1275ad35a5d18c62a7220633c913e1b42d44b46ee12554e5fd39c70a243d6a3"}, + {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:18aeb1bf9a78867dc38b259769503436b7c72f7a1f1f4c93ff9a17de54319b27"}, + {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:596106435fa6ad000c2991a98fa58eeb8656ef2325d7e158344fb33864ed87e3"}, + {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:baa90d3f661d43131ca170712d903e6295d1f7a0f595074f151c0aed377c9b9c"}, + {file = "PyYAML-6.0.1-cp37-cp37m-win32.whl", hash = "sha256:9046c58c4395dff28dd494285c82ba00b546adfc7ef001486fbf0324bc174fba"}, + {file = "PyYAML-6.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:4fb147e7a67ef577a588a0e2c17b6db51dda102c71de36f8549b6816a96e1867"}, + {file = "PyYAML-6.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1d4c7e777c441b20e32f52bd377e0c409713e8bb1386e1099c2415f26e479595"}, + {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"}, + {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"}, + {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"}, + {file = "PyYAML-6.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:49a183be227561de579b4a36efbb21b3eab9651dd81b1858589f796549873dd6"}, + {file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"}, + {file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash 
= "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"}, + {file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"}, + {file = "PyYAML-6.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c8098ddcc2a85b61647b2590f825f3db38891662cfc2fc776415143f599bb859"}, + {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"}, + {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"}, + {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"}, + {file = "PyYAML-6.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04ac92ad1925b2cff1db0cfebffb6ffc43457495c9b3c39d3fcae417d7125dc5"}, + {file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"}, + {file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"}, + {file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"}, +] + +[[package]] +name = "pyzmq" +version = "25.1.1" +description = "Python bindings for 0MQ" +optional = false +python-versions = ">=3.6" +files = [ + {file = "pyzmq-25.1.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:381469297409c5adf9a0e884c5eb5186ed33137badcbbb0560b86e910a2f1e76"}, + {file = "pyzmq-25.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:955215ed0604dac5b01907424dfa28b40f2b2292d6493445dd34d0dfa72586a8"}, + {file = "pyzmq-25.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:985bbb1316192b98f32e25e7b9958088431d853ac63aca1d2c236f40afb17c83"}, + 
{file = "pyzmq-25.1.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:afea96f64efa98df4da6958bae37f1cbea7932c35878b185e5982821bc883369"}, + {file = "pyzmq-25.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:76705c9325d72a81155bb6ab48d4312e0032bf045fb0754889133200f7a0d849"}, + {file = "pyzmq-25.1.1-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:77a41c26205d2353a4c94d02be51d6cbdf63c06fbc1295ea57dad7e2d3381b71"}, + {file = "pyzmq-25.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:12720a53e61c3b99d87262294e2b375c915fea93c31fc2336898c26d7aed34cd"}, + {file = "pyzmq-25.1.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:57459b68e5cd85b0be8184382cefd91959cafe79ae019e6b1ae6e2ba8a12cda7"}, + {file = "pyzmq-25.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:292fe3fc5ad4a75bc8df0dfaee7d0babe8b1f4ceb596437213821f761b4589f9"}, + {file = "pyzmq-25.1.1-cp310-cp310-win32.whl", hash = "sha256:35b5ab8c28978fbbb86ea54958cd89f5176ce747c1fb3d87356cf698048a7790"}, + {file = "pyzmq-25.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:11baebdd5fc5b475d484195e49bae2dc64b94a5208f7c89954e9e354fc609d8f"}, + {file = "pyzmq-25.1.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:d20a0ddb3e989e8807d83225a27e5c2eb2260eaa851532086e9e0fa0d5287d83"}, + {file = "pyzmq-25.1.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:e1c1be77bc5fb77d923850f82e55a928f8638f64a61f00ff18a67c7404faf008"}, + {file = "pyzmq-25.1.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d89528b4943d27029a2818f847c10c2cecc79fa9590f3cb1860459a5be7933eb"}, + {file = "pyzmq-25.1.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:90f26dc6d5f241ba358bef79be9ce06de58d477ca8485e3291675436d3827cf8"}, + {file = "pyzmq-25.1.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c2b92812bd214018e50b6380ea3ac0c8bb01ac07fcc14c5f86a5bb25e74026e9"}, + {file = 
"pyzmq-25.1.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:2f957ce63d13c28730f7fd6b72333814221c84ca2421298f66e5143f81c9f91f"}, + {file = "pyzmq-25.1.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:047a640f5c9c6ade7b1cc6680a0e28c9dd5a0825135acbd3569cc96ea00b2505"}, + {file = "pyzmq-25.1.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:7f7e58effd14b641c5e4dec8c7dab02fb67a13df90329e61c869b9cc607ef752"}, + {file = "pyzmq-25.1.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c2910967e6ab16bf6fbeb1f771c89a7050947221ae12a5b0b60f3bca2ee19bca"}, + {file = "pyzmq-25.1.1-cp311-cp311-win32.whl", hash = "sha256:76c1c8efb3ca3a1818b837aea423ff8a07bbf7aafe9f2f6582b61a0458b1a329"}, + {file = "pyzmq-25.1.1-cp311-cp311-win_amd64.whl", hash = "sha256:44e58a0554b21fc662f2712814a746635ed668d0fbc98b7cb9d74cb798d202e6"}, + {file = "pyzmq-25.1.1-cp312-cp312-macosx_10_15_universal2.whl", hash = "sha256:e1ffa1c924e8c72778b9ccd386a7067cddf626884fd8277f503c48bb5f51c762"}, + {file = "pyzmq-25.1.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:1af379b33ef33757224da93e9da62e6471cf4a66d10078cf32bae8127d3d0d4a"}, + {file = "pyzmq-25.1.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cff084c6933680d1f8b2f3b4ff5bbb88538a4aac00d199ac13f49d0698727ecb"}, + {file = "pyzmq-25.1.1-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e2400a94f7dd9cb20cd012951a0cbf8249e3d554c63a9c0cdfd5cbb6c01d2dec"}, + {file = "pyzmq-25.1.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2d81f1ddae3858b8299d1da72dd7d19dd36aab654c19671aa8a7e7fb02f6638a"}, + {file = "pyzmq-25.1.1-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:255ca2b219f9e5a3a9ef3081512e1358bd4760ce77828e1028b818ff5610b87b"}, + {file = "pyzmq-25.1.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:a882ac0a351288dd18ecae3326b8a49d10c61a68b01419f3a0b9a306190baf69"}, + {file = "pyzmq-25.1.1-cp312-cp312-musllinux_1_1_i686.whl", hash = 
"sha256:724c292bb26365659fc434e9567b3f1adbdb5e8d640c936ed901f49e03e5d32e"}, + {file = "pyzmq-25.1.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ca1ed0bb2d850aa8471387882247c68f1e62a4af0ce9c8a1dbe0d2bf69e41fb"}, + {file = "pyzmq-25.1.1-cp312-cp312-win32.whl", hash = "sha256:b3451108ab861040754fa5208bca4a5496c65875710f76789a9ad27c801a0075"}, + {file = "pyzmq-25.1.1-cp312-cp312-win_amd64.whl", hash = "sha256:eadbefd5e92ef8a345f0525b5cfd01cf4e4cc651a2cffb8f23c0dd184975d787"}, + {file = "pyzmq-25.1.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:db0b2af416ba735c6304c47f75d348f498b92952f5e3e8bff449336d2728795d"}, + {file = "pyzmq-25.1.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c7c133e93b405eb0d36fa430c94185bdd13c36204a8635470cccc200723c13bb"}, + {file = "pyzmq-25.1.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:273bc3959bcbff3f48606b28229b4721716598d76b5aaea2b4a9d0ab454ec062"}, + {file = "pyzmq-25.1.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:cbc8df5c6a88ba5ae385d8930da02201165408dde8d8322072e3e5ddd4f68e22"}, + {file = "pyzmq-25.1.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:18d43df3f2302d836f2a56f17e5663e398416e9dd74b205b179065e61f1a6edf"}, + {file = "pyzmq-25.1.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:73461eed88a88c866656e08f89299720a38cb4e9d34ae6bf5df6f71102570f2e"}, + {file = "pyzmq-25.1.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:34c850ce7976d19ebe7b9d4b9bb8c9dfc7aac336c0958e2651b88cbd46682123"}, + {file = "pyzmq-25.1.1-cp36-cp36m-win32.whl", hash = "sha256:d2045d6d9439a0078f2a34b57c7b18c4a6aef0bee37f22e4ec9f32456c852c71"}, + {file = "pyzmq-25.1.1-cp36-cp36m-win_amd64.whl", hash = "sha256:458dea649f2f02a0b244ae6aef8dc29325a2810aa26b07af8374dc2a9faf57e3"}, + {file = "pyzmq-25.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7cff25c5b315e63b07a36f0c2bab32c58eafbe57d0dce61b614ef4c76058c115"}, + {file = 
"pyzmq-25.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b1579413ae492b05de5a6174574f8c44c2b9b122a42015c5292afa4be2507f28"}, + {file = "pyzmq-25.1.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3d0a409d3b28607cc427aa5c30a6f1e4452cc44e311f843e05edb28ab5e36da0"}, + {file = "pyzmq-25.1.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:21eb4e609a154a57c520e3d5bfa0d97e49b6872ea057b7c85257b11e78068222"}, + {file = "pyzmq-25.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:034239843541ef7a1aee0c7b2cb7f6aafffb005ede965ae9cbd49d5ff4ff73cf"}, + {file = "pyzmq-25.1.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:f8115e303280ba09f3898194791a153862cbf9eef722ad8f7f741987ee2a97c7"}, + {file = "pyzmq-25.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:1a5d26fe8f32f137e784f768143728438877d69a586ddeaad898558dc971a5ae"}, + {file = "pyzmq-25.1.1-cp37-cp37m-win32.whl", hash = "sha256:f32260e556a983bc5c7ed588d04c942c9a8f9c2e99213fec11a031e316874c7e"}, + {file = "pyzmq-25.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:abf34e43c531bbb510ae7e8f5b2b1f2a8ab93219510e2b287a944432fad135f3"}, + {file = "pyzmq-25.1.1-cp38-cp38-macosx_10_15_universal2.whl", hash = "sha256:87e34f31ca8f168c56d6fbf99692cc8d3b445abb5bfd08c229ae992d7547a92a"}, + {file = "pyzmq-25.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c9c6c9b2c2f80747a98f34ef491c4d7b1a8d4853937bb1492774992a120f475d"}, + {file = "pyzmq-25.1.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5619f3f5a4db5dbb572b095ea3cb5cc035335159d9da950830c9c4db2fbb6995"}, + {file = "pyzmq-25.1.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5a34d2395073ef862b4032343cf0c32a712f3ab49d7ec4f42c9661e0294d106f"}, + {file = "pyzmq-25.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25f0e6b78220aba09815cd1f3a32b9c7cb3e02cb846d1cfc526b6595f6046618"}, + {file = 
"pyzmq-25.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:3669cf8ee3520c2f13b2e0351c41fea919852b220988d2049249db10046a7afb"}, + {file = "pyzmq-25.1.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:2d163a18819277e49911f7461567bda923461c50b19d169a062536fffe7cd9d2"}, + {file = "pyzmq-25.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:df27ffddff4190667d40de7beba4a950b5ce78fe28a7dcc41d6f8a700a80a3c0"}, + {file = "pyzmq-25.1.1-cp38-cp38-win32.whl", hash = "sha256:a382372898a07479bd34bda781008e4a954ed8750f17891e794521c3e21c2e1c"}, + {file = "pyzmq-25.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:52533489f28d62eb1258a965f2aba28a82aa747202c8fa5a1c7a43b5db0e85c1"}, + {file = "pyzmq-25.1.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:03b3f49b57264909aacd0741892f2aecf2f51fb053e7d8ac6767f6c700832f45"}, + {file = "pyzmq-25.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:330f9e188d0d89080cde66dc7470f57d1926ff2fb5576227f14d5be7ab30b9fa"}, + {file = "pyzmq-25.1.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:2ca57a5be0389f2a65e6d3bb2962a971688cbdd30b4c0bd188c99e39c234f414"}, + {file = "pyzmq-25.1.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d457aed310f2670f59cc5b57dcfced452aeeed77f9da2b9763616bd57e4dbaae"}, + {file = "pyzmq-25.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c56d748ea50215abef7030c72b60dd723ed5b5c7e65e7bc2504e77843631c1a6"}, + {file = "pyzmq-25.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:8f03d3f0d01cb5a018debeb412441996a517b11c5c17ab2001aa0597c6d6882c"}, + {file = "pyzmq-25.1.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:820c4a08195a681252f46926de10e29b6bbf3e17b30037bd4250d72dd3ddaab8"}, + {file = "pyzmq-25.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:17ef5f01d25b67ca8f98120d5fa1d21efe9611604e8eb03a5147360f517dd1e2"}, + {file = "pyzmq-25.1.1-cp39-cp39-win32.whl", hash = 
"sha256:04ccbed567171579ec2cebb9c8a3e30801723c575601f9a990ab25bcac6b51e2"}, + {file = "pyzmq-25.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:e61f091c3ba0c3578411ef505992d356a812fb200643eab27f4f70eed34a29ef"}, + {file = "pyzmq-25.1.1-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:ade6d25bb29c4555d718ac6d1443a7386595528c33d6b133b258f65f963bb0f6"}, + {file = "pyzmq-25.1.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e0c95ddd4f6e9fca4e9e3afaa4f9df8552f0ba5d1004e89ef0a68e1f1f9807c7"}, + {file = "pyzmq-25.1.1-pp310-pypy310_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:48e466162a24daf86f6b5ca72444d2bf39a5e58da5f96370078be67c67adc978"}, + {file = "pyzmq-25.1.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:abc719161780932c4e11aaebb203be3d6acc6b38d2f26c0f523b5b59d2fc1996"}, + {file = "pyzmq-25.1.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:1ccf825981640b8c34ae54231b7ed00271822ea1c6d8ba1090ebd4943759abf5"}, + {file = "pyzmq-25.1.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:c2f20ce161ebdb0091a10c9ca0372e023ce24980d0e1f810f519da6f79c60800"}, + {file = "pyzmq-25.1.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:deee9ca4727f53464daf089536e68b13e6104e84a37820a88b0a057b97bba2d2"}, + {file = "pyzmq-25.1.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:aa8d6cdc8b8aa19ceb319aaa2b660cdaccc533ec477eeb1309e2a291eaacc43a"}, + {file = "pyzmq-25.1.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:019e59ef5c5256a2c7378f2fb8560fc2a9ff1d315755204295b2eab96b254d0a"}, + {file = "pyzmq-25.1.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:b9af3757495c1ee3b5c4e945c1df7be95562277c6e5bccc20a39aec50f826cd0"}, + {file = "pyzmq-25.1.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:548d6482dc8aadbe7e79d1b5806585c8120bafa1ef841167bc9090522b610fa6"}, + {file = 
"pyzmq-25.1.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:057e824b2aae50accc0f9a0570998adc021b372478a921506fddd6c02e60308e"}, + {file = "pyzmq-25.1.1-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2243700cc5548cff20963f0ca92d3e5e436394375ab8a354bbea2b12911b20b0"}, + {file = "pyzmq-25.1.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:79986f3b4af059777111409ee517da24a529bdbd46da578b33f25580adcff728"}, + {file = "pyzmq-25.1.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:11d58723d44d6ed4dd677c5615b2ffb19d5c426636345567d6af82be4dff8a55"}, + {file = "pyzmq-25.1.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:49d238cf4b69652257db66d0c623cd3e09b5d2e9576b56bc067a396133a00d4a"}, + {file = "pyzmq-25.1.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fedbdc753827cf014c01dbbee9c3be17e5a208dcd1bf8641ce2cd29580d1f0d4"}, + {file = "pyzmq-25.1.1-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bc16ac425cc927d0a57d242589f87ee093884ea4804c05a13834d07c20db203c"}, + {file = "pyzmq-25.1.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:11c1d2aed9079c6b0c9550a7257a836b4a637feb334904610f06d70eb44c56d2"}, + {file = "pyzmq-25.1.1-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e8a701123029cc240cea61dd2d16ad57cab4691804143ce80ecd9286b464d180"}, + {file = "pyzmq-25.1.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:61706a6b6c24bdece85ff177fec393545a3191eeda35b07aaa1458a027ad1304"}, + {file = "pyzmq-25.1.1.tar.gz", hash = "sha256:259c22485b71abacdfa8bf79720cd7bcf4b9d128b30ea554f01ae71fdbfdaa23"}, +] + +[package.dependencies] +cffi = {version = "*", markers = "implementation_name == \"pypy\""} + +[[package]] +name = "qtconsole" +version = "5.4.4" +description = "Jupyter Qt console" +optional = true +python-versions = ">= 3.7" +files = [ + {file = 
"qtconsole-5.4.4-py3-none-any.whl", hash = "sha256:a3b69b868e041c2c698bdc75b0602f42e130ffb256d6efa48f9aa756c97672aa"}, + {file = "qtconsole-5.4.4.tar.gz", hash = "sha256:b7ffb53d74f23cee29f4cdb55dd6fabc8ec312d94f3c46ba38e1dde458693dfb"}, +] + +[package.dependencies] +ipykernel = ">=4.1" +ipython-genutils = "*" +jupyter-client = ">=4.1" +jupyter-core = "*" +packaging = "*" +pygments = "*" +pyzmq = ">=17.1" +qtpy = ">=2.4.0" +traitlets = "<5.2.1 || >5.2.1,<5.2.2 || >5.2.2" + +[package.extras] +doc = ["Sphinx (>=1.3)"] +test = ["flaky", "pytest", "pytest-qt"] + +[[package]] +name = "qtpy" +version = "2.4.0" +description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)." +optional = true +python-versions = ">=3.7" +files = [ + {file = "QtPy-2.4.0-py3-none-any.whl", hash = "sha256:4d4f045a41e09ac9fa57fcb47ef05781aa5af294a0a646acc1b729d14225e741"}, + {file = "QtPy-2.4.0.tar.gz", hash = "sha256:db2d508167aa6106781565c8da5c6f1487debacba33519cedc35fa8997d424d4"}, +] + +[package.dependencies] +packaging = "*" + +[package.extras] +test = ["pytest (>=6,!=7.0.0,!=7.0.1)", "pytest-cov (>=3.0.0)", "pytest-qt"] + +[[package]] +name = "referencing" +version = "0.30.2" +description = "JSON Referencing + Python" +optional = false +python-versions = ">=3.8" +files = [ + {file = "referencing-0.30.2-py3-none-any.whl", hash = "sha256:449b6669b6121a9e96a7f9e410b245d471e8d48964c67113ce9afe50c8dd7bdf"}, + {file = "referencing-0.30.2.tar.gz", hash = "sha256:794ad8003c65938edcdbc027f1933215e0d0ccc0291e3ce20a4d87432b59efc0"}, +] + +[package.dependencies] +attrs = ">=22.2.0" +rpds-py = ">=0.7.0" + +[[package]] +name = "requests" +version = "2.31.0" +description = "Python HTTP for Humans." 
+optional = false +python-versions = ">=3.7" +files = [ + {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"}, + {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"}, +] + +[package.dependencies] +certifi = ">=2017.4.17" +charset-normalizer = ">=2,<4" +idna = ">=2.5,<4" +urllib3 = ">=1.21.1,<3" + +[package.extras] +socks = ["PySocks (>=1.5.6,!=1.5.7)"] +use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"] + +[[package]] +name = "rfc3339-validator" +version = "0.1.4" +description = "A pure python RFC3339 validator" +optional = true +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" +files = [ + {file = "rfc3339_validator-0.1.4-py2.py3-none-any.whl", hash = "sha256:24f6ec1eda14ef823da9e36ec7113124b39c04d50a4d3d3a3c2859577e7791fa"}, + {file = "rfc3339_validator-0.1.4.tar.gz", hash = "sha256:138a2abdf93304ad60530167e51d2dfb9549521a836871b88d7f4695d0022f6b"}, +] + +[package.dependencies] +six = "*" + +[[package]] +name = "rfc3986-validator" +version = "0.1.1" +description = "Pure python rfc3986 validator" +optional = true +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" +files = [ + {file = "rfc3986_validator-0.1.1-py2.py3-none-any.whl", hash = "sha256:2f235c432ef459970b4306369336b9d5dbdda31b510ca1e327636e01f528bfa9"}, + {file = "rfc3986_validator-0.1.1.tar.gz", hash = "sha256:3d44bde7921b3b9ec3ae4e3adca370438eccebc676456449b145d533b240d055"}, +] + +[[package]] +name = "rpds-py" +version = "0.10.3" +description = "Python bindings to Rust's persistent data structures (rpds)" +optional = false +python-versions = ">=3.8" +files = [ + {file = "rpds_py-0.10.3-cp310-cp310-macosx_10_7_x86_64.whl", hash = "sha256:485747ee62da83366a44fbba963c5fe017860ad408ccd6cd99aa66ea80d32b2e"}, + {file = "rpds_py-0.10.3-cp310-cp310-macosx_11_0_arm64.whl", hash = 
"sha256:c55f9821f88e8bee4b7a72c82cfb5ecd22b6aad04033334f33c329b29bfa4da0"}, + {file = "rpds_py-0.10.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3b52a67ac66a3a64a7e710ba629f62d1e26ca0504c29ee8cbd99b97df7079a8"}, + {file = "rpds_py-0.10.3-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:3aed39db2f0ace76faa94f465d4234aac72e2f32b009f15da6492a561b3bbebd"}, + {file = "rpds_py-0.10.3-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:271c360fdc464fe6a75f13ea0c08ddf71a321f4c55fc20a3fe62ea3ef09df7d9"}, + {file = "rpds_py-0.10.3-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ef5fddfb264e89c435be4adb3953cef5d2936fdeb4463b4161a6ba2f22e7b740"}, + {file = "rpds_py-0.10.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a771417c9c06c56c9d53d11a5b084d1de75de82978e23c544270ab25e7c066ff"}, + {file = "rpds_py-0.10.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:52b5cbc0469328e58180021138207e6ec91d7ca2e037d3549cc9e34e2187330a"}, + {file = "rpds_py-0.10.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:6ac3fefb0d168c7c6cab24fdfc80ec62cd2b4dfd9e65b84bdceb1cb01d385c33"}, + {file = "rpds_py-0.10.3-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:8d54bbdf5d56e2c8cf81a1857250f3ea132de77af543d0ba5dce667183b61fec"}, + {file = "rpds_py-0.10.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:cd2163f42868865597d89399a01aa33b7594ce8e2c4a28503127c81a2f17784e"}, + {file = "rpds_py-0.10.3-cp310-none-win32.whl", hash = "sha256:ea93163472db26ac6043e8f7f93a05d9b59e0505c760da2a3cd22c7dd7111391"}, + {file = "rpds_py-0.10.3-cp310-none-win_amd64.whl", hash = "sha256:7cd020b1fb41e3ab7716d4d2c3972d4588fdfbab9bfbbb64acc7078eccef8860"}, + {file = "rpds_py-0.10.3-cp311-cp311-macosx_10_7_x86_64.whl", hash = "sha256:1d9b5ee46dcb498fa3e46d4dfabcb531e1f2e76b477e0d99ef114f17bbd38453"}, + {file = 
"rpds_py-0.10.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:563646d74a4b4456d0cf3b714ca522e725243c603e8254ad85c3b59b7c0c4bf0"}, + {file = "rpds_py-0.10.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e626b864725680cd3904414d72e7b0bd81c0e5b2b53a5b30b4273034253bb41f"}, + {file = "rpds_py-0.10.3-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:485301ee56ce87a51ccb182a4b180d852c5cb2b3cb3a82f7d4714b4141119d8c"}, + {file = "rpds_py-0.10.3-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:42f712b4668831c0cd85e0a5b5a308700fe068e37dcd24c0062904c4e372b093"}, + {file = "rpds_py-0.10.3-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6c9141af27a4e5819d74d67d227d5047a20fa3c7d4d9df43037a955b4c748ec5"}, + {file = "rpds_py-0.10.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef750a20de1b65657a1425f77c525b0183eac63fe7b8f5ac0dd16f3668d3e64f"}, + {file = "rpds_py-0.10.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e1a0ffc39f51aa5f5c22114a8f1906b3c17eba68c5babb86c5f77d8b1bba14d1"}, + {file = "rpds_py-0.10.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:f4c179a7aeae10ddf44c6bac87938134c1379c49c884529f090f9bf05566c836"}, + {file = "rpds_py-0.10.3-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:176287bb998fd1e9846a9b666e240e58f8d3373e3bf87e7642f15af5405187b8"}, + {file = "rpds_py-0.10.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:6446002739ca29249f0beaaf067fcbc2b5aab4bc7ee8fb941bd194947ce19aff"}, + {file = "rpds_py-0.10.3-cp311-none-win32.whl", hash = "sha256:c7aed97f2e676561416c927b063802c8a6285e9b55e1b83213dfd99a8f4f9e48"}, + {file = "rpds_py-0.10.3-cp311-none-win_amd64.whl", hash = "sha256:8bd01ff4032abaed03f2db702fa9a61078bee37add0bd884a6190b05e63b028c"}, + {file = "rpds_py-0.10.3-cp312-cp312-macosx_10_7_x86_64.whl", hash = 
"sha256:4cf0855a842c5b5c391dd32ca273b09e86abf8367572073bd1edfc52bc44446b"}, + {file = "rpds_py-0.10.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:69b857a7d8bd4f5d6e0db4086da8c46309a26e8cefdfc778c0c5cc17d4b11e08"}, + {file = "rpds_py-0.10.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:975382d9aa90dc59253d6a83a5ca72e07f4ada3ae3d6c0575ced513db322b8ec"}, + {file = "rpds_py-0.10.3-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:35fbd23c1c8732cde7a94abe7fb071ec173c2f58c0bd0d7e5b669fdfc80a2c7b"}, + {file = "rpds_py-0.10.3-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:106af1653007cc569d5fbb5f08c6648a49fe4de74c2df814e234e282ebc06957"}, + {file = "rpds_py-0.10.3-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ce5e7504db95b76fc89055c7f41e367eaadef5b1d059e27e1d6eabf2b55ca314"}, + {file = "rpds_py-0.10.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5aca759ada6b1967fcfd4336dcf460d02a8a23e6abe06e90ea7881e5c22c4de6"}, + {file = "rpds_py-0.10.3-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b5d4bdd697195f3876d134101c40c7d06d46c6ab25159ed5cbd44105c715278a"}, + {file = "rpds_py-0.10.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a657250807b6efd19b28f5922520ae002a54cb43c2401e6f3d0230c352564d25"}, + {file = "rpds_py-0.10.3-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:177c9dd834cdf4dc39c27436ade6fdf9fe81484758885f2d616d5d03c0a83bd2"}, + {file = "rpds_py-0.10.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e22491d25f97199fc3581ad8dd8ce198d8c8fdb8dae80dea3512e1ce6d5fa99f"}, + {file = "rpds_py-0.10.3-cp38-cp38-macosx_10_7_x86_64.whl", hash = "sha256:2f3e1867dd574014253b4b8f01ba443b9c914e61d45f3674e452a915d6e929a3"}, + {file = "rpds_py-0.10.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:c22211c165166de6683de8136229721f3d5c8606cc2c3d1562da9a3a5058049c"}, + {file = 
"rpds_py-0.10.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40bc802a696887b14c002edd43c18082cb7b6f9ee8b838239b03b56574d97f71"}, + {file = "rpds_py-0.10.3-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5e271dd97c7bb8eefda5cca38cd0b0373a1fea50f71e8071376b46968582af9b"}, + {file = "rpds_py-0.10.3-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:95cde244e7195b2c07ec9b73fa4c5026d4a27233451485caa1cd0c1b55f26dbd"}, + {file = "rpds_py-0.10.3-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:08a80cf4884920863623a9ee9a285ee04cef57ebedc1cc87b3e3e0f24c8acfe5"}, + {file = "rpds_py-0.10.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:763ad59e105fca09705d9f9b29ecffb95ecdc3b0363be3bb56081b2c6de7977a"}, + {file = "rpds_py-0.10.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:187700668c018a7e76e89424b7c1042f317c8df9161f00c0c903c82b0a8cac5c"}, + {file = "rpds_py-0.10.3-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:5267cfda873ad62591b9332fd9472d2409f7cf02a34a9c9cb367e2c0255994bf"}, + {file = "rpds_py-0.10.3-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:2ed83d53a8c5902ec48b90b2ac045e28e1698c0bea9441af9409fc844dc79496"}, + {file = "rpds_py-0.10.3-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:255f1a10ae39b52122cce26ce0781f7a616f502feecce9e616976f6a87992d6b"}, + {file = "rpds_py-0.10.3-cp38-none-win32.whl", hash = "sha256:a019a344312d0b1f429c00d49c3be62fa273d4a1094e1b224f403716b6d03be1"}, + {file = "rpds_py-0.10.3-cp38-none-win_amd64.whl", hash = "sha256:efb9ece97e696bb56e31166a9dd7919f8f0c6b31967b454718c6509f29ef6fee"}, + {file = "rpds_py-0.10.3-cp39-cp39-macosx_10_7_x86_64.whl", hash = "sha256:570cc326e78ff23dec7f41487aa9c3dffd02e5ee9ab43a8f6ccc3df8f9327623"}, + {file = "rpds_py-0.10.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:cff7351c251c7546407827b6a37bcef6416304fc54d12d44dbfecbb717064717"}, + {file = 
"rpds_py-0.10.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:177914f81f66c86c012311f8c7f46887ec375cfcfd2a2f28233a3053ac93a569"}, + {file = "rpds_py-0.10.3-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:448a66b8266de0b581246ca7cd6a73b8d98d15100fb7165974535fa3b577340e"}, + {file = "rpds_py-0.10.3-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bbac1953c17252f9cc675bb19372444aadf0179b5df575ac4b56faaec9f6294"}, + {file = "rpds_py-0.10.3-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9dd9d9d9e898b9d30683bdd2b6c1849449158647d1049a125879cb397ee9cd12"}, + {file = "rpds_py-0.10.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e8c71ea77536149e36c4c784f6d420ffd20bea041e3ba21ed021cb40ce58e2c9"}, + {file = "rpds_py-0.10.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:16a472300bc6c83fe4c2072cc22b3972f90d718d56f241adabc7ae509f53f154"}, + {file = "rpds_py-0.10.3-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:b9255e7165083de7c1d605e818025e8860636348f34a79d84ec533546064f07e"}, + {file = "rpds_py-0.10.3-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:53d7a3cd46cdc1689296348cb05ffd4f4280035770aee0c8ead3bbd4d6529acc"}, + {file = "rpds_py-0.10.3-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:22da15b902f9f8e267020d1c8bcfc4831ca646fecb60254f7bc71763569f56b1"}, + {file = "rpds_py-0.10.3-cp39-none-win32.whl", hash = "sha256:850c272e0e0d1a5c5d73b1b7871b0a7c2446b304cec55ccdb3eaac0d792bb065"}, + {file = "rpds_py-0.10.3-cp39-none-win_amd64.whl", hash = "sha256:de61e424062173b4f70eec07e12469edde7e17fa180019a2a0d75c13a5c5dc57"}, + {file = "rpds_py-0.10.3-pp310-pypy310_pp73-macosx_10_7_x86_64.whl", hash = "sha256:af247fd4f12cca4129c1b82090244ea5a9d5bb089e9a82feb5a2f7c6a9fe181d"}, + {file = "rpds_py-0.10.3-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:3ad59efe24a4d54c2742929001f2d02803aafc15d6d781c21379e3f7f66ec842"}, 
+ {file = "rpds_py-0.10.3-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642ed0a209ced4be3a46f8cb094f2d76f1f479e2a1ceca6de6346a096cd3409d"}, + {file = "rpds_py-0.10.3-pp310-pypy310_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:37d0c59548ae56fae01c14998918d04ee0d5d3277363c10208eef8c4e2b68ed6"}, + {file = "rpds_py-0.10.3-pp310-pypy310_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:aad6ed9e70ddfb34d849b761fb243be58c735be6a9265b9060d6ddb77751e3e8"}, + {file = "rpds_py-0.10.3-pp310-pypy310_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8f94fdd756ba1f79f988855d948ae0bad9ddf44df296770d9a58c774cfbcca72"}, + {file = "rpds_py-0.10.3-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77076bdc8776a2b029e1e6ffbe6d7056e35f56f5e80d9dc0bad26ad4a024a762"}, + {file = "rpds_py-0.10.3-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:87d9b206b1bd7a0523375dc2020a6ce88bca5330682ae2fe25e86fd5d45cea9c"}, + {file = "rpds_py-0.10.3-pp310-pypy310_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:8efaeb08ede95066da3a3e3c420fcc0a21693fcd0c4396d0585b019613d28515"}, + {file = "rpds_py-0.10.3-pp310-pypy310_pp73-musllinux_1_2_i686.whl", hash = "sha256:a4d9bfda3f84fc563868fe25ca160c8ff0e69bc4443c5647f960d59400ce6557"}, + {file = "rpds_py-0.10.3-pp310-pypy310_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:d27aa6bbc1f33be920bb7adbb95581452cdf23005d5611b29a12bb6a3468cc95"}, + {file = "rpds_py-0.10.3-pp38-pypy38_pp73-macosx_10_7_x86_64.whl", hash = "sha256:ed8313809571a5463fd7db43aaca68ecb43ca7a58f5b23b6e6c6c5d02bdc7882"}, + {file = "rpds_py-0.10.3-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:e10e6a1ed2b8661201e79dff5531f8ad4cdd83548a0f81c95cf79b3184b20c33"}, + {file = "rpds_py-0.10.3-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:015de2ce2af1586ff5dc873e804434185199a15f7d96920ce67e50604592cae9"}, + 
{file = "rpds_py-0.10.3-pp38-pypy38_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ae87137951bb3dc08c7d8bfb8988d8c119f3230731b08a71146e84aaa919a7a9"}, + {file = "rpds_py-0.10.3-pp38-pypy38_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0bb4f48bd0dd18eebe826395e6a48b7331291078a879295bae4e5d053be50d4c"}, + {file = "rpds_py-0.10.3-pp38-pypy38_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:09362f86ec201288d5687d1dc476b07bf39c08478cde837cb710b302864e7ec9"}, + {file = "rpds_py-0.10.3-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:821392559d37759caa67d622d0d2994c7a3f2fb29274948ac799d496d92bca73"}, + {file = "rpds_py-0.10.3-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:7170cbde4070dc3c77dec82abf86f3b210633d4f89550fa0ad2d4b549a05572a"}, + {file = "rpds_py-0.10.3-pp38-pypy38_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:5de11c041486681ce854c814844f4ce3282b6ea1656faae19208ebe09d31c5b8"}, + {file = "rpds_py-0.10.3-pp38-pypy38_pp73-musllinux_1_2_i686.whl", hash = "sha256:4ed172d0c79f156c1b954e99c03bc2e3033c17efce8dd1a7c781bc4d5793dfac"}, + {file = "rpds_py-0.10.3-pp38-pypy38_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:11fdd1192240dda8d6c5d18a06146e9045cb7e3ba7c06de6973000ff035df7c6"}, + {file = "rpds_py-0.10.3-pp39-pypy39_pp73-macosx_10_7_x86_64.whl", hash = "sha256:f602881d80ee4228a2355c68da6b296a296cd22bbb91e5418d54577bbf17fa7c"}, + {file = "rpds_py-0.10.3-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:691d50c99a937709ac4c4cd570d959a006bd6a6d970a484c84cc99543d4a5bbb"}, + {file = "rpds_py-0.10.3-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:24cd91a03543a0f8d09cb18d1cb27df80a84b5553d2bd94cba5979ef6af5c6e7"}, + {file = "rpds_py-0.10.3-pp39-pypy39_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fc2200e79d75b5238c8d69f6a30f8284290c777039d331e7340b6c17cad24a5a"}, + {file = 
"rpds_py-0.10.3-pp39-pypy39_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ea65b59882d5fa8c74a23f8960db579e5e341534934f43f3b18ec1839b893e41"}, + {file = "rpds_py-0.10.3-pp39-pypy39_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:829e91f3a8574888b73e7a3feb3b1af698e717513597e23136ff4eba0bc8387a"}, + {file = "rpds_py-0.10.3-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eab75a8569a095f2ad470b342f2751d9902f7944704f0571c8af46bede438475"}, + {file = "rpds_py-0.10.3-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:061c3ff1f51ecec256e916cf71cc01f9975af8fb3af9b94d3c0cc8702cfea637"}, + {file = "rpds_py-0.10.3-pp39-pypy39_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:39d05e65f23a0fe897b6ac395f2a8d48c56ac0f583f5d663e0afec1da89b95da"}, + {file = "rpds_py-0.10.3-pp39-pypy39_pp73-musllinux_1_2_i686.whl", hash = "sha256:4eca20917a06d2fca7628ef3c8b94a8c358f6b43f1a621c9815243462dcccf97"}, + {file = "rpds_py-0.10.3-pp39-pypy39_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:e8d0f0eca087630d58b8c662085529781fd5dc80f0a54eda42d5c9029f812599"}, + {file = "rpds_py-0.10.3.tar.gz", hash = "sha256:fcc1ebb7561a3e24a6588f7c6ded15d80aec22c66a070c757559b57b17ffd1cb"}, +] + +[[package]] +name = "ruff" +version = "0.0.286" +description = "An extremely fast Python linter, written in Rust." 
+optional = false +python-versions = ">=3.7" +files = [ + {file = "ruff-0.0.286-py3-none-macosx_10_7_x86_64.whl", hash = "sha256:8e22cb557e7395893490e7f9cfea1073d19a5b1dd337f44fd81359b2767da4e9"}, + {file = "ruff-0.0.286-py3-none-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl", hash = "sha256:68ed8c99c883ae79a9133cb1a86d7130feee0397fdf5ba385abf2d53e178d3fa"}, + {file = "ruff-0.0.286-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8301f0bb4ec1a5b29cfaf15b83565136c47abefb771603241af9d6038f8981e8"}, + {file = "ruff-0.0.286-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:acc4598f810bbc465ce0ed84417ac687e392c993a84c7eaf3abf97638701c1ec"}, + {file = "ruff-0.0.286-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:88c8e358b445eb66d47164fa38541cfcc267847d1e7a92dd186dddb1a0a9a17f"}, + {file = "ruff-0.0.286-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:0433683d0c5dbcf6162a4beb2356e820a593243f1fa714072fec15e2e4f4c939"}, + {file = "ruff-0.0.286-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ddb61a0c4454cbe4623f4a07fef03c5ae921fe04fede8d15c6e36703c0a73b07"}, + {file = "ruff-0.0.286-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:47549c7c0be24c8ae9f2bce6f1c49fbafea83bca80142d118306f08ec7414041"}, + {file = "ruff-0.0.286-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:559aa793149ac23dc4310f94f2c83209eedb16908a0343663be19bec42233d25"}, + {file = "ruff-0.0.286-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:d73cfb1c3352e7aa0ce6fb2321f36fa1d4a2c48d2ceac694cb03611ddf0e4db6"}, + {file = "ruff-0.0.286-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:3dad93b1f973c6d1db4b6a5da8690c5625a3fa32bdf38e543a6936e634b83dc3"}, + {file = "ruff-0.0.286-py3-none-musllinux_1_2_i686.whl", hash = "sha256:26afc0851f4fc3738afcf30f5f8b8612a31ac3455cb76e611deea80f5c0bf3ce"}, + {file = 
"ruff-0.0.286-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:9b6b116d1c4000de1b9bf027131dbc3b8a70507788f794c6b09509d28952c512"}, + {file = "ruff-0.0.286-py3-none-win32.whl", hash = "sha256:556e965ac07c1e8c1c2d759ac512e526ecff62c00fde1a046acb088d3cbc1a6c"}, + {file = "ruff-0.0.286-py3-none-win_amd64.whl", hash = "sha256:5d295c758961376c84aaa92d16e643d110be32add7465e197bfdaec5a431a107"}, + {file = "ruff-0.0.286-py3-none-win_arm64.whl", hash = "sha256:1d6142d53ab7f164204b3133d053c4958d4d11ec3a39abf23a40b13b0784e3f0"}, + {file = "ruff-0.0.286.tar.gz", hash = "sha256:f1e9d169cce81a384a26ee5bb8c919fe9ae88255f39a1a69fd1ebab233a85ed2"}, +] + +[[package]] +name = "scikit-learn" +version = "1.3.1" +description = "A set of python modules for machine learning and data mining" +optional = false +python-versions = ">=3.8" +files = [ + {file = "scikit-learn-1.3.1.tar.gz", hash = "sha256:1a231cced3ee3fa04756b4a7ab532dc9417acd581a330adff5f2c01ac2831fcf"}, + {file = "scikit_learn-1.3.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:3153612ff8d36fa4e35ef8b897167119213698ea78f3fd130b4068e6f8d2da5a"}, + {file = "scikit_learn-1.3.1-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:6bb9490fdb8e7e00f1354621689187bef3cab289c9b869688f805bf724434755"}, + {file = "scikit_learn-1.3.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a7135a03af71138669f19bc96e7d0cc8081aed4b3565cc3b131135d65fc642ba"}, + {file = "scikit_learn-1.3.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7d8dee8c1f40eeba49a85fe378bdf70a07bb64aba1a08fda1e0f48d27edfc3e6"}, + {file = "scikit_learn-1.3.1-cp310-cp310-win_amd64.whl", hash = "sha256:4d379f2b34096105a96bd857b88601dffe7389bd55750f6f29aaa37bc6272eb5"}, + {file = "scikit_learn-1.3.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:14e8775eba072ab10866a7e0596bc9906873e22c4c370a651223372eb62de180"}, + {file = "scikit_learn-1.3.1-cp311-cp311-macosx_12_0_arm64.whl", hash = 
"sha256:58b0c2490eff8355dc26e884487bf8edaccf2ba48d09b194fb2f3a026dd64f9d"}, + {file = "scikit_learn-1.3.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f66eddfda9d45dd6cadcd706b65669ce1df84b8549875691b1f403730bdef217"}, + {file = "scikit_learn-1.3.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c6448c37741145b241eeac617028ba6ec2119e1339b1385c9720dae31367f2be"}, + {file = "scikit_learn-1.3.1-cp311-cp311-win_amd64.whl", hash = "sha256:c413c2c850241998168bbb3bd1bb59ff03b1195a53864f0b80ab092071af6028"}, + {file = "scikit_learn-1.3.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:52b77cc08bd555969ec5150788ed50276f5ef83abb72e6f469c5b91a0009bbca"}, + {file = "scikit_learn-1.3.1-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:a683394bc3f80b7c312c27f9b14ebea7766b1f0a34faf1a2e9158d80e860ec26"}, + {file = "scikit_learn-1.3.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a15d964d9eb181c79c190d3dbc2fff7338786bf017e9039571418a1d53dab236"}, + {file = "scikit_learn-1.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ce9233cdf0cdcf0858a5849d306490bf6de71fa7603a3835124e386e62f2311"}, + {file = "scikit_learn-1.3.1-cp38-cp38-win_amd64.whl", hash = "sha256:1ec668ce003a5b3d12d020d2cde0abd64b262ac5f098b5c84cf9657deb9996a8"}, + {file = "scikit_learn-1.3.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ccbbedae99325628c1d1cbe3916b7ef58a1ce949672d8d39c8b190e10219fd32"}, + {file = "scikit_learn-1.3.1-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:845f81c7ceb4ea6bac64ab1c9f2ce8bef0a84d0f21f3bece2126adcc213dfecd"}, + {file = "scikit_learn-1.3.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8454d57a22d856f1fbf3091bd86f9ebd4bff89088819886dc0c72f47a6c30652"}, + {file = "scikit_learn-1.3.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d993fb70a1d78c9798b8f2f28705bfbfcd546b661f9e2e67aa85f81052b9c53"}, + {file = 
"scikit_learn-1.3.1-cp39-cp39-win_amd64.whl", hash = "sha256:66f7bb1fec37d65f4ef85953e1df5d3c98a0f0141d394dcdaead5a6de9170347"}, +] + +[package.dependencies] +joblib = ">=1.1.1" +numpy = ">=1.17.3,<2.0" +scipy = ">=1.5.0" +threadpoolctl = ">=2.0.0" + +[package.extras] +benchmark = ["matplotlib (>=3.1.3)", "memory-profiler (>=0.57.0)", "pandas (>=1.0.5)"] +docs = ["Pillow (>=7.1.2)", "matplotlib (>=3.1.3)", "memory-profiler (>=0.57.0)", "numpydoc (>=1.2.0)", "pandas (>=1.0.5)", "plotly (>=5.14.0)", "pooch (>=1.6.0)", "scikit-image (>=0.16.2)", "seaborn (>=0.9.0)", "sphinx (>=6.0.0)", "sphinx-copybutton (>=0.5.2)", "sphinx-gallery (>=0.10.1)", "sphinx-prompt (>=1.3.0)", "sphinxext-opengraph (>=0.4.2)"] +examples = ["matplotlib (>=3.1.3)", "pandas (>=1.0.5)", "plotly (>=5.14.0)", "pooch (>=1.6.0)", "scikit-image (>=0.16.2)", "seaborn (>=0.9.0)"] +tests = ["black (>=23.3.0)", "matplotlib (>=3.1.3)", "mypy (>=1.3)", "numpydoc (>=1.2.0)", "pandas (>=1.0.5)", "pooch (>=1.6.0)", "pyamg (>=4.0.0)", "pytest (>=7.1.2)", "pytest-cov (>=2.9.0)", "ruff (>=0.0.272)", "scikit-image (>=0.16.2)"] + +[[package]] +name = "scipy" +version = "1.10.1" +description = "Fundamental algorithms for scientific computing in Python" +optional = false +python-versions = "<3.12,>=3.8" +files = [ + {file = "scipy-1.10.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e7354fd7527a4b0377ce55f286805b34e8c54b91be865bac273f527e1b839019"}, + {file = "scipy-1.10.1-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:4b3f429188c66603a1a5c549fb414e4d3bdc2a24792e061ffbd607d3d75fd84e"}, + {file = "scipy-1.10.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1553b5dcddd64ba9a0d95355e63fe6c3fc303a8fd77c7bc91e77d61363f7433f"}, + {file = "scipy-1.10.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4c0ff64b06b10e35215abce517252b375e580a6125fd5fdf6421b98efbefb2d2"}, + {file = "scipy-1.10.1-cp310-cp310-win_amd64.whl", hash = 
"sha256:fae8a7b898c42dffe3f7361c40d5952b6bf32d10c4569098d276b4c547905ee1"}, + {file = "scipy-1.10.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0f1564ea217e82c1bbe75ddf7285ba0709ecd503f048cb1236ae9995f64217bd"}, + {file = "scipy-1.10.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:d925fa1c81b772882aa55bcc10bf88324dadb66ff85d548c71515f6689c6dac5"}, + {file = "scipy-1.10.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aaea0a6be54462ec027de54fca511540980d1e9eea68b2d5c1dbfe084797be35"}, + {file = "scipy-1.10.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:15a35c4242ec5f292c3dd364a7c71a61be87a3d4ddcc693372813c0b73c9af1d"}, + {file = "scipy-1.10.1-cp311-cp311-win_amd64.whl", hash = "sha256:43b8e0bcb877faf0abfb613d51026cd5cc78918e9530e375727bf0625c82788f"}, + {file = "scipy-1.10.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5678f88c68ea866ed9ebe3a989091088553ba12c6090244fdae3e467b1139c35"}, + {file = "scipy-1.10.1-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:39becb03541f9e58243f4197584286e339029e8908c46f7221abeea4b749fa88"}, + {file = "scipy-1.10.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bce5869c8d68cf383ce240e44c1d9ae7c06078a9396df68ce88a1230f93a30c1"}, + {file = "scipy-1.10.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:07c3457ce0b3ad5124f98a86533106b643dd811dd61b548e78cf4c8786652f6f"}, + {file = "scipy-1.10.1-cp38-cp38-win_amd64.whl", hash = "sha256:049a8bbf0ad95277ffba9b3b7d23e5369cc39e66406d60422c8cfef40ccc8415"}, + {file = "scipy-1.10.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:cd9f1027ff30d90618914a64ca9b1a77a431159df0e2a195d8a9e8a04c78abf9"}, + {file = "scipy-1.10.1-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:79c8e5a6c6ffaf3a2262ef1be1e108a035cf4f05c14df56057b64acc5bebffb6"}, + {file = "scipy-1.10.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:51af417a000d2dbe1ec6c372dfe688e041a7084da4fdd350aeb139bd3fb55353"}, + {file = "scipy-1.10.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b4735d6c28aad3cdcf52117e0e91d6b39acd4272f3f5cd9907c24ee931ad601"}, + {file = "scipy-1.10.1-cp39-cp39-win_amd64.whl", hash = "sha256:7ff7f37b1bf4417baca958d254e8e2875d0cc23aaadbe65b3d5b3077b0eb23ea"}, + {file = "scipy-1.10.1.tar.gz", hash = "sha256:2cf9dfb80a7b4589ba4c40ce7588986d6d5cebc5457cad2c2880f6bc2d42f3a5"}, +] + +[package.dependencies] +numpy = ">=1.19.5,<1.27.0" + +[package.extras] +dev = ["click", "doit (>=0.36.0)", "flake8", "mypy", "pycodestyle", "pydevtool", "rich-click", "typing_extensions"] +doc = ["matplotlib (>2)", "numpydoc", "pydata-sphinx-theme (==0.9.0)", "sphinx (!=4.1.0)", "sphinx-design (>=0.2.0)"] +test = ["asv", "gmpy2", "mpmath", "pooch", "pytest", "pytest-cov", "pytest-timeout", "pytest-xdist", "scikit-umfpack", "threadpoolctl"] + +[[package]] +name = "send2trash" +version = "1.8.2" +description = "Send file to trash natively under Mac OS X, Windows and Linux" +optional = true +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" +files = [ + {file = "Send2Trash-1.8.2-py3-none-any.whl", hash = "sha256:a384719d99c07ce1eefd6905d2decb6f8b7ed054025bb0e618919f945de4f679"}, + {file = "Send2Trash-1.8.2.tar.gz", hash = "sha256:c132d59fa44b9ca2b1699af5c86f57ce9f4c5eb56629d5d55fbb7a35f84e2312"}, +] + +[package.extras] +nativelib = ["pyobjc-framework-Cocoa", "pywin32"] +objc = ["pyobjc-framework-Cocoa"] +win32 = ["pywin32"] + +[[package]] +name = "six" +version = "1.16.0" +description = "Python 2 and 3 compatibility utilities" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" +files = [ + {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"}, + {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"}, +] + +[[package]] 
+name = "sniffio" +version = "1.3.0" +description = "Sniff out which async library your code is running under" +optional = true +python-versions = ">=3.7" +files = [ + {file = "sniffio-1.3.0-py3-none-any.whl", hash = "sha256:eecefdce1e5bbfb7ad2eeaabf7c1eeb404d7757c379bd1f7e5cce9d8bf425384"}, + {file = "sniffio-1.3.0.tar.gz", hash = "sha256:e60305c5e5d314f5389259b7f22aaa33d8f7dee49763119234af3755c55b9101"}, +] + +[[package]] +name = "snowballstemmer" +version = "2.2.0" +description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms." +optional = false +python-versions = "*" +files = [ + {file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"}, + {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"}, +] + +[[package]] +name = "soupsieve" +version = "2.5" +description = "A modern CSS selector implementation for Beautiful Soup." 
+optional = false +python-versions = ">=3.8" +files = [ + {file = "soupsieve-2.5-py3-none-any.whl", hash = "sha256:eaa337ff55a1579b6549dc679565eac1e3d000563bcb1c8ab0d0fefbc0c2cdc7"}, + {file = "soupsieve-2.5.tar.gz", hash = "sha256:5663d5a7b3bfaeee0bc4372e7fc48f9cff4940b3eec54a6451cc5299f1097690"}, +] + +[[package]] +name = "sphinx" +version = "7.1.2" +description = "Python documentation generator" +optional = false +python-versions = ">=3.8" +files = [ + {file = "sphinx-7.1.2-py3-none-any.whl", hash = "sha256:d170a81825b2fcacb6dfd5a0d7f578a053e45d3f2b153fecc948c37344eb4cbe"}, + {file = "sphinx-7.1.2.tar.gz", hash = "sha256:780f4d32f1d7d1126576e0e5ecc19dc32ab76cd24e950228dcf7b1f6d3d9e22f"}, +] + +[package.dependencies] +alabaster = ">=0.7,<0.8" +babel = ">=2.9" +colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""} +docutils = ">=0.18.1,<0.21" +imagesize = ">=1.3" +importlib-metadata = {version = ">=4.8", markers = "python_version < \"3.10\""} +Jinja2 = ">=3.0" +packaging = ">=21.0" +Pygments = ">=2.13" +requests = ">=2.25.0" +snowballstemmer = ">=2.0" +sphinxcontrib-applehelp = "*" +sphinxcontrib-devhelp = "*" +sphinxcontrib-htmlhelp = ">=2.0.0" +sphinxcontrib-jsmath = "*" +sphinxcontrib-qthelp = "*" +sphinxcontrib-serializinghtml = ">=1.1.5" + +[package.extras] +docs = ["sphinxcontrib-websupport"] +lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"] +test = ["cython", "filelock", "html5lib", "pytest (>=4.6)"] + +[[package]] +name = "sphinx-pyproject" +version = "0.3.0" +description = "Move some of your Sphinx configuration into pyproject.toml" +optional = false +python-versions = ">=3.6" +files = [ + {file = "sphinx_pyproject-0.3.0-py3-none-any.whl", hash = "sha256:3aca968919f5ecd390f96874c3f64a43c9c7fcfdc2fd4191a781ad9228501b52"}, + {file = "sphinx_pyproject-0.3.0.tar.gz", hash = "sha256:efc4ee9d96f579c4e4ed1ac273868c64565e88c8e37fe6ec2dc59fbcd57684ab"}, +] + 
+[package.dependencies] +dom-toml = ">=0.3.0" +domdf-python-tools = ">=2.7.0" + +[[package]] +name = "sphinx-rtd-theme" +version = "1.3.0" +description = "Read the Docs theme for Sphinx" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7" +files = [ + {file = "sphinx_rtd_theme-1.3.0-py2.py3-none-any.whl", hash = "sha256:46ddef89cc2416a81ecfbeaceab1881948c014b1b6e4450b815311a89fb977b0"}, + {file = "sphinx_rtd_theme-1.3.0.tar.gz", hash = "sha256:590b030c7abb9cf038ec053b95e5380b5c70d61591eb0b552063fbe7c41f0931"}, +] + +[package.dependencies] +docutils = "<0.19" +sphinx = ">=1.6,<8" +sphinxcontrib-jquery = ">=4,<5" + +[package.extras] +dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"] + +[[package]] +name = "sphinxcontrib-applehelp" +version = "1.0.4" +description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books" +optional = false +python-versions = ">=3.8" +files = [ + {file = "sphinxcontrib-applehelp-1.0.4.tar.gz", hash = "sha256:828f867945bbe39817c210a1abfd1bc4895c8b73fcaade56d45357a348a07d7e"}, + {file = "sphinxcontrib_applehelp-1.0.4-py3-none-any.whl", hash = "sha256:29d341f67fb0f6f586b23ad80e072c8e6ad0b48417db2bde114a4c9746feb228"}, +] + +[package.extras] +lint = ["docutils-stubs", "flake8", "mypy"] +test = ["pytest"] + +[[package]] +name = "sphinxcontrib-devhelp" +version = "1.0.2" +description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document." 
+optional = false +python-versions = ">=3.5" +files = [ + {file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"}, + {file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"}, +] + +[package.extras] +lint = ["docutils-stubs", "flake8", "mypy"] +test = ["pytest"] + +[[package]] +name = "sphinxcontrib-htmlhelp" +version = "2.0.1" +description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files" +optional = false +python-versions = ">=3.8" +files = [ + {file = "sphinxcontrib-htmlhelp-2.0.1.tar.gz", hash = "sha256:0cbdd302815330058422b98a113195c9249825d681e18f11e8b1f78a2f11efff"}, + {file = "sphinxcontrib_htmlhelp-2.0.1-py3-none-any.whl", hash = "sha256:c38cb46dccf316c79de6e5515e1770414b797162b23cd3d06e67020e1d2a6903"}, +] + +[package.extras] +lint = ["docutils-stubs", "flake8", "mypy"] +test = ["html5lib", "pytest"] + +[[package]] +name = "sphinxcontrib-jquery" +version = "4.1" +description = "Extension to include jQuery on newer Sphinx releases" +optional = false +python-versions = ">=2.7" +files = [ + {file = "sphinxcontrib-jquery-4.1.tar.gz", hash = "sha256:1620739f04e36a2c779f1a131a2dfd49b2fd07351bf1968ced074365933abc7a"}, + {file = "sphinxcontrib_jquery-4.1-py2.py3-none-any.whl", hash = "sha256:f936030d7d0147dd026a4f2b5a57343d233f1fc7b363f68b3d4f1cb0993878ae"}, +] + +[package.dependencies] +Sphinx = ">=1.8" + +[[package]] +name = "sphinxcontrib-jsmath" +version = "1.0.1" +description = "A sphinx extension which renders display math in HTML via JavaScript" +optional = false +python-versions = ">=3.5" +files = [ + {file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"}, + {file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"}, +] + 
+[package.extras] +test = ["flake8", "mypy", "pytest"] + +[[package]] +name = "sphinxcontrib-qthelp" +version = "1.0.3" +description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document." +optional = false +python-versions = ">=3.5" +files = [ + {file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"}, + {file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"}, +] + +[package.extras] +lint = ["docutils-stubs", "flake8", "mypy"] +test = ["pytest"] + +[[package]] +name = "sphinxcontrib-serializinghtml" +version = "1.1.5" +description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)." +optional = false +python-versions = ">=3.5" +files = [ + {file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"}, + {file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"}, +] + +[package.extras] +lint = ["docutils-stubs", "flake8", "mypy"] +test = ["pytest"] + +[[package]] +name = "stack-data" +version = "0.6.2" +description = "Extract data from python stack frames and tracebacks for informative displays" +optional = false +python-versions = "*" +files = [ + {file = "stack_data-0.6.2-py3-none-any.whl", hash = "sha256:cbb2a53eb64e5785878201a97ed7c7b94883f48b87bfb0bbe8b623c74679e4a8"}, + {file = "stack_data-0.6.2.tar.gz", hash = "sha256:32d2dd0376772d01b6cb9fc996f3c8b57a357089dec328ed4b6553d037eaf815"}, +] + +[package.dependencies] +asttokens = ">=2.1.0" +executing = ">=1.2.0" +pure-eval = "*" + +[package.extras] +tests = ["cython", "littleutils", "pygments", "pytest", "typeguard"] + +[[package]] +name = "terminado" +version = "0.17.1" +description = "Tornado websocket backend for 
the Xterm.js Javascript terminal emulator library." +optional = true +python-versions = ">=3.7" +files = [ + {file = "terminado-0.17.1-py3-none-any.whl", hash = "sha256:8650d44334eba354dd591129ca3124a6ba42c3d5b70df5051b6921d506fdaeae"}, + {file = "terminado-0.17.1.tar.gz", hash = "sha256:6ccbbcd3a4f8a25a5ec04991f39a0b8db52dfcd487ea0e578d977e6752380333"}, +] + +[package.dependencies] +ptyprocess = {version = "*", markers = "os_name != \"nt\""} +pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""} +tornado = ">=6.1.0" + +[package.extras] +docs = ["myst-parser", "pydata-sphinx-theme", "sphinx"] +test = ["pre-commit", "pytest (>=7.0)", "pytest-timeout"] + +[[package]] +name = "threadpoolctl" +version = "3.2.0" +description = "threadpoolctl" +optional = false +python-versions = ">=3.8" +files = [ + {file = "threadpoolctl-3.2.0-py3-none-any.whl", hash = "sha256:2b7818516e423bdaebb97c723f86a7c6b0a83d3f3b0970328d66f4d9104dc032"}, + {file = "threadpoolctl-3.2.0.tar.gz", hash = "sha256:c96a0ba3bdddeaca37dc4cc7344aafad41cdb8c313f74fdfe387a867bba93355"}, +] + +[[package]] +name = "tinycss2" +version = "1.2.1" +description = "A tiny CSS parser" +optional = false +python-versions = ">=3.7" +files = [ + {file = "tinycss2-1.2.1-py3-none-any.whl", hash = "sha256:2b80a96d41e7c3914b8cda8bc7f705a4d9c49275616e886103dd839dfc847847"}, + {file = "tinycss2-1.2.1.tar.gz", hash = "sha256:8cff3a8f066c2ec677c06dbc7b45619804a6938478d9d73c284b29d14ecb0627"}, +] + +[package.dependencies] +webencodings = ">=0.4" + +[package.extras] +doc = ["sphinx", "sphinx_rtd_theme"] +test = ["flake8", "isort", "pytest"] + +[[package]] +name = "toml" +version = "0.10.2" +description = "Python Library for Tom's Obvious, Minimal Language" +optional = false +python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" +files = [ + {file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"}, + {file = "toml-0.10.2.tar.gz", hash = 
"sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"}, +] + +[[package]] +name = "tomli" +version = "2.0.1" +description = "A lil' TOML parser" +optional = false +python-versions = ">=3.7" +files = [ + {file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"}, + {file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"}, +] + +[[package]] +name = "tomlkit" +version = "0.12.1" +description = "Style preserving TOML library" +optional = false +python-versions = ">=3.7" +files = [ + {file = "tomlkit-0.12.1-py3-none-any.whl", hash = "sha256:712cbd236609acc6a3e2e97253dfc52d4c2082982a88f61b640ecf0817eab899"}, + {file = "tomlkit-0.12.1.tar.gz", hash = "sha256:38e1ff8edb991273ec9f6181244a6a391ac30e9f5098e7535640ea6be97a7c86"}, +] + +[[package]] +name = "tornado" +version = "6.3.3" +description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed." 
+optional = false +python-versions = ">= 3.8" +files = [ + {file = "tornado-6.3.3-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:502fba735c84450974fec147340016ad928d29f1e91f49be168c0a4c18181e1d"}, + {file = "tornado-6.3.3-cp38-abi3-macosx_10_9_x86_64.whl", hash = "sha256:805d507b1f588320c26f7f097108eb4023bbaa984d63176d1652e184ba24270a"}, + {file = "tornado-6.3.3-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1bd19ca6c16882e4d37368e0152f99c099bad93e0950ce55e71daed74045908f"}, + {file = "tornado-6.3.3-cp38-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7ac51f42808cca9b3613f51ffe2a965c8525cb1b00b7b2d56828b8045354f76a"}, + {file = "tornado-6.3.3-cp38-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:71a8db65160a3c55d61839b7302a9a400074c9c753040455494e2af74e2501f2"}, + {file = "tornado-6.3.3-cp38-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:ceb917a50cd35882b57600709dd5421a418c29ddc852da8bcdab1f0db33406b0"}, + {file = "tornado-6.3.3-cp38-abi3-musllinux_1_1_i686.whl", hash = "sha256:7d01abc57ea0dbb51ddfed477dfe22719d376119844e33c661d873bf9c0e4a16"}, + {file = "tornado-6.3.3-cp38-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:9dc4444c0defcd3929d5c1eb5706cbe1b116e762ff3e0deca8b715d14bf6ec17"}, + {file = "tornado-6.3.3-cp38-abi3-win32.whl", hash = "sha256:65ceca9500383fbdf33a98c0087cb975b2ef3bfb874cb35b8de8740cf7f41bd3"}, + {file = "tornado-6.3.3-cp38-abi3-win_amd64.whl", hash = "sha256:22d3c2fa10b5793da13c807e6fc38ff49a4f6e1e3868b0a6f4164768bb8e20f5"}, + {file = "tornado-6.3.3.tar.gz", hash = "sha256:e7d8db41c0181c80d76c982aacc442c0783a2c54d6400fe028954201a2e032fe"}, +] + +[[package]] +name = "traitlets" +version = "5.10.0" +description = "Traitlets Python configuration system" +optional = false +python-versions = ">=3.8" +files = [ + {file = "traitlets-5.10.0-py3-none-any.whl", hash = 
"sha256:417745a96681fbb358e723d5346a547521f36e9bd0d50ba7ab368fff5d67aa54"}, + {file = "traitlets-5.10.0.tar.gz", hash = "sha256:f584ea209240466e66e91f3c81aa7d004ba4cf794990b0c775938a1544217cd1"}, +] + +[package.extras] +docs = ["myst-parser", "pydata-sphinx-theme", "sphinx"] +test = ["argcomplete (>=3.0.3)", "mypy (>=1.5.1)", "pre-commit", "pytest (>=7.0,<7.5)", "pytest-mock", "pytest-mypy-testing"] + +[[package]] +name = "typing-extensions" +version = "4.8.0" +description = "Backported and Experimental Type Hints for Python 3.8+" +optional = false +python-versions = ">=3.8" +files = [ + {file = "typing_extensions-4.8.0-py3-none-any.whl", hash = "sha256:8f92fc8806f9a6b641eaa5318da32b44d401efaac0f6678c9bc448ba3605faa0"}, + {file = "typing_extensions-4.8.0.tar.gz", hash = "sha256:df8e4339e9cb77357558cbdbceca33c303714cf861d1eef15e1070055ae8b7ef"}, +] + +[[package]] +name = "uri-template" +version = "1.3.0" +description = "RFC 6570 URI Template Processor" +optional = true +python-versions = ">=3.7" +files = [ + {file = "uri-template-1.3.0.tar.gz", hash = "sha256:0e00f8eb65e18c7de20d595a14336e9f337ead580c70934141624b6d1ffdacc7"}, + {file = "uri_template-1.3.0-py3-none-any.whl", hash = "sha256:a44a133ea12d44a0c0f06d7d42a52d71282e77e2f937d8abd5655b8d56fc1363"}, +] + +[package.extras] +dev = ["flake8", "flake8-annotations", "flake8-bandit", "flake8-bugbear", "flake8-commas", "flake8-comprehensions", "flake8-continuation", "flake8-datetimez", "flake8-docstrings", "flake8-import-order", "flake8-literal", "flake8-modern-annotations", "flake8-noqa", "flake8-pyproject", "flake8-requirements", "flake8-typechecking-import", "flake8-use-fstring", "mypy", "pep8-naming", "types-PyYAML"] + +[[package]] +name = "urllib3" +version = "2.0.5" +description = "HTTP library with thread-safe connection pooling, file post, and more." 
+optional = false +python-versions = ">=3.7" +files = [ + {file = "urllib3-2.0.5-py3-none-any.whl", hash = "sha256:ef16afa8ba34a1f989db38e1dbbe0c302e4289a47856990d0682e374563ce35e"}, + {file = "urllib3-2.0.5.tar.gz", hash = "sha256:13abf37382ea2ce6fb744d4dad67838eec857c9f4f57009891805e0b5e123594"}, +] + +[package.extras] +brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"] +secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"] +socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"] +zstd = ["zstandard (>=0.18.0)"] + +[[package]] +name = "virtualenv" +version = "20.24.5" +description = "Virtual Python Environment builder" +optional = false +python-versions = ">=3.7" +files = [ + {file = "virtualenv-20.24.5-py3-none-any.whl", hash = "sha256:b80039f280f4919c77b30f1c23294ae357c4c8701042086e3fc005963e4e537b"}, + {file = "virtualenv-20.24.5.tar.gz", hash = "sha256:e8361967f6da6fbdf1426483bfe9fca8287c242ac0bc30429905721cefbff752"}, +] + +[package.dependencies] +distlib = ">=0.3.7,<1" +filelock = ">=3.12.2,<4" +platformdirs = ">=3.9.1,<4" + +[package.extras] +docs = ["furo (>=2023.7.26)", "proselint (>=0.13)", "sphinx (>=7.1.2)", "sphinx-argparse (>=0.4)", "sphinxcontrib-towncrier (>=0.2.1a0)", "towncrier (>=23.6)"] +test = ["covdefaults (>=2.3)", "coverage (>=7.2.7)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=23.1)", "pytest (>=7.4)", "pytest-env (>=0.8.2)", "pytest-freezer (>=0.4.8)", "pytest-mock (>=3.11.1)", "pytest-randomly (>=3.12)", "pytest-timeout (>=2.1)", "setuptools (>=68)", "time-machine (>=2.10)"] + +[[package]] +name = "wcwidth" +version = "0.2.6" +description = "Measures the displayed width of unicode strings in a terminal" +optional = false +python-versions = "*" +files = [ + {file = "wcwidth-0.2.6-py2.py3-none-any.whl", hash = "sha256:795b138f6875577cd91bba52baf9e445cd5118fd32723b460e30a0af30ea230e"}, + {file = "wcwidth-0.2.6.tar.gz", hash = 
"sha256:a5220780a404dbe3353789870978e472cfe477761f06ee55077256e509b156d0"}, +] + +[[package]] +name = "webcolors" +version = "1.13" +description = "A library for working with the color formats defined by HTML and CSS." +optional = true +python-versions = ">=3.7" +files = [ + {file = "webcolors-1.13-py3-none-any.whl", hash = "sha256:29bc7e8752c0a1bd4a1f03c14d6e6a72e93d82193738fa860cbff59d0fcc11bf"}, + {file = "webcolors-1.13.tar.gz", hash = "sha256:c225b674c83fa923be93d235330ce0300373d02885cef23238813b0d5668304a"}, +] + +[package.extras] +docs = ["furo", "sphinx", "sphinx-copybutton", "sphinx-inline-tabs", "sphinx-notfound-page", "sphinxext-opengraph"] +tests = ["pytest", "pytest-cov"] + +[[package]] +name = "webencodings" +version = "0.5.1" +description = "Character encoding aliases for legacy web content" +optional = false +python-versions = "*" +files = [ + {file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"}, + {file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"}, +] + +[[package]] +name = "websocket-client" +version = "1.6.3" +description = "WebSocket client for Python with low level API options" +optional = true +python-versions = ">=3.8" +files = [ + {file = "websocket-client-1.6.3.tar.gz", hash = "sha256:3aad25d31284266bcfcfd1fd8a743f63282305a364b8d0948a43bd606acc652f"}, + {file = "websocket_client-1.6.3-py3-none-any.whl", hash = "sha256:6cfc30d051ebabb73a5fa246efdcc14c8fbebbd0330f8984ac3bb6d9edd2ad03"}, +] + +[package.extras] +docs = ["Sphinx (>=6.0)", "sphinx-rtd-theme (>=1.1.0)"] +optional = ["python-socks", "wsaccel"] +test = ["websockets"] + +[[package]] +name = "widgetsnbextension" +version = "4.0.9" +description = "Jupyter interactive widgets for Jupyter Notebook" +optional = true +python-versions = ">=3.7" +files = [ + {file = "widgetsnbextension-4.0.9-py3-none-any.whl", hash = 
"sha256:91452ca8445beb805792f206e560c1769284267a30ceb1cec9f5bcc887d15175"}, + {file = "widgetsnbextension-4.0.9.tar.gz", hash = "sha256:3c1f5e46dc1166dfd40a42d685e6a51396fd34ff878742a3e47c6f0cc4a2a385"}, +] + +[[package]] +name = "xmltodict" +version = "0.13.0" +description = "Makes working with XML feel like you are working with JSON" +optional = false +python-versions = ">=3.4" +files = [ + {file = "xmltodict-0.13.0-py2.py3-none-any.whl", hash = "sha256:aa89e8fd76320154a40d19a0df04a4695fb9dc5ba977cbb68ab3e4eb225e7852"}, + {file = "xmltodict-0.13.0.tar.gz", hash = "sha256:341595a488e3e01a85a9d8911d8912fd922ede5fecc4dce437eb4b6c8d037e56"}, +] + +[[package]] +name = "zipp" +version = "3.17.0" +description = "Backport of pathlib-compatible object wrapper for zip files" +optional = false +python-versions = ">=3.8" +files = [ + {file = "zipp-3.17.0-py3-none-any.whl", hash = "sha256:0e923e726174922dce09c53c59ad483ff7bbb8e572e00c7f7c46b88556409f31"}, + {file = "zipp-3.17.0.tar.gz", hash = "sha256:84e64a1c28cf7e91ed2078bb8cc8c259cb19b76942096c8d7b84947690cabaf0"}, +] + +[package.extras] +docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (<7.2.5)", "sphinx (>=3.5)", "sphinx-lint"] +testing = ["big-O", "jaraco.functools", "jaraco.itertools", "more-itertools", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-ignore-flaky", "pytest-mypy (>=0.9.1)", "pytest-ruff"] + +[extras] +cuda = ["nvidia-ml-py", "pycuda", "pynvml"] +cuda-opencl = ["pycuda", "pyopencl"] +hip = ["pyhip-interface"] +opencl = ["pyopencl"] +tutorial = ["jupyter", "matplotlib"] + +[metadata] +lock-version = "2.0" +python-versions = ">=3.8,<3.12" +content-hash = "2fa5c38d1019fb4f288f75df9e84eeb411d4fae03fd3b314d8fc442f02ae5a27" diff --git a/pyproject.toml b/pyproject.toml new file mode 100644 index 000000000..33643eb0e --- /dev/null +++ b/pyproject.toml @@ -0,0 +1,140 @@ +[build-system] 
+requires = ["poetry-core>=1.7.0", "setuptools>=67.7.2"] +build-backend = "poetry.core.masonry.api" + +[tool.poetry] +name = "kernel_tuner" +packages = [{ include = "kernel_tuner", from = "." }] +description = "An easy to use CUDA/OpenCL kernel tuner in Python" +version = "1.0.0b1" # adhere to PEP440 versioning: https://packaging.python.org/en/latest/guides/distributing-packages-using-setuptools/#id55 +license = "Apache-2.0" +authors = [ + "Ben van Werkhoven ", + "Alessio Sclocco", + "Stijn Heldens", + "Floris-Jan Willemsen ", +] +readme = "README.rst" +keywords = [ + "auto-tuning", + "gpu", + "computing", + "pycuda", + "cuda", + "pyopencl", + "opencl", +] +classifiers = [ + "Environment :: Console", + "Environment :: GPU", + "Development Status :: 5 - Production/Stable", + "Intended Audience :: Developers", + "Intended Audience :: Education", + "Intended Audience :: Science/Research", + "License :: OSI Approved :: Apache Software License", + "Natural Language :: English", + "Operating System :: MacOS :: MacOS X", + "Operating System :: POSIX :: Linux", + "Topic :: Scientific/Engineering", + "Topic :: Software Development", + "Topic :: System :: Distributed Computing", +] +include = [ + { path = "doc/source/*.ipynb" }, +] # this ensures that people won't have to clone the whole repo to include notebooks, they can just do `pip install kernel_tuner[tutorial,cuda]` +homepage = "https://KernelTuner.github.io/kernel_tuner/" +documentation = "https://KernelTuner.github.io/kernel_tuner/" +repository = "https://github.com/KernelTuner/kernel_tuner" +[tool.poetry.urls] +"Tracker" = "https://github.com/KernelTuner/kernel_tuner/issues" +[tool.poetry.build] +generate-setup-file = false + +# ATTENTION: if anything is changed here, run `poetry update` +[tool.poetry.dependencies] +python = ">=3.8,<3.12" # TODO if we drop 3.8 support, remove "from __future__ import annotations" # NOTE when changing the supported Python versions, also change the test versions in the 
noxfile +numpy = "^1.22.2" # set to 1.22.2 instead of 1.22.4 to match oldest-supported-numpy required by pycuda +scipy = "^1.10.1" # held back by Python 3.8 support (dropped from ^1.11) +jsonschema = "*" +python-constraint2 = "^2.0.0b3" +xmltodict = "*" +pandas = "^1.4.3" +scikit-learn = "^1.0.2" +# TODO torch is used in some places, consider adding it as an (optional) dependency + +# List of optional dependencies for user installation, e.g. `pip install kernel_tuner[cuda]`, used in the below `extras`. +# Please note that this is different from the dependency groups below, e.g. `docs` and `test`, those are for development. +# CUDA +pycuda = { version = "^2022.1", optional = true } # Attention: if pycuda is changed here, also change `session.install("pycuda")` in the Noxfile +nvidia-ml-py = { version = "*", optional = true } +pynvml = { version = "^11.4.1", optional = true } +# cupy-cuda11x = { version = "*", optional = true } # Note: these are completely optional dependencies as described in CONTRIBUTING.rst +# cupy-cuda12x = { version = "*", optional = true } +# cuda-python = { version = "*", optional = true } +# OpenCL +pyopencl = { version = "*", optional = true } # Attention: if pyopencl is changed here, also change `session.install("pyopencl")` in the Noxfile +# HIP +pyhip-interface = { version = "*", optional = true } +# Tutorial +jupyter = { version = "^1.0.0", optional = true } +matplotlib = { version = "^1.5.3", optional = true } + +[tool.poetry.extras] +cuda = ["pycuda", "nvidia-ml-py", "pynvml"] +opencl = ["pyopencl"] +cuda_opencl = ["pycuda", "pyopencl"] +hip = ["pyhip-interface"] +tutorial = ["jupyter", "matplotlib"] + +# ATTENTION: if anything is changed here, run `poetry update` +# Please note that there is overlap with the `dev` group +[tool.poetry.group.docs] +optional = true +[tool.poetry.group.docs.dependencies] +sphinx = "^7.1.2" # held back by Python 3.8 support (dropped from ^7.2) +sphinx_rtd_theme = "^1.3.0" # updated from "^0.1.9" 
+sphinx-pyproject = "^0.3" +nbsphinx = "^0.9" +ipython = "*" +pytest = "^7.4.0" # TODO why do we need pytest here? +markupsafe = "^2.0.1" # TODO why do we need markupsafe here? +# sphinx-autodoc-typehints = "^1.24.0" + +# ATTENTION: if anything is changed here, run `poetry update` +[tool.poetry.group.test] +optional = true +[tool.poetry.group.test.dependencies] +pytest = "^7.4.0" +pytest-cov = "^4.1.0" +mock = "^2.0.0" +nox = "^2023.4.22" +nox-poetry = "^1.0.3" +ruff = "^0.0.286" +pep440 = "^0.1.2" +tomli = "^2.0.1" # can be replaced by built-in [tomllib](https://docs.python.org/3.11/library/tomllib.html) from Python 3.11 + +# development dependencies are unused for now, as this is already covered by test and docs +# # ATTENTION: if anything is changed here, run `poetry update` +# [tool.poetry.group.dev.dependencies] + +[tool.pytest.ini_options] +minversion = "7.4" +pythonpath = [ + "kernel_tuner", +] # necessary to get coverage reports without installing with `-e` +addopts = "--cov --cov-config=.coveragerc --cov-report html --cov-report term-missing --cov-fail-under 60" +testpaths = ["test"] + +[tool.black] +line-length = 120 +[tool.ruff] +line-length = 120 +respect-gitignore = true +exclude = ["doc", "examples"] +select = [ + "E", # pycodestyle + "F", # pyflakes, + "D", # pydocstyle, +] +[tool.ruff.pydocstyle] +convention = "google" diff --git a/setup.cfg b/setup.cfg deleted file mode 100644 index 5aef279b9..000000000 --- a/setup.cfg +++ /dev/null @@ -1,2 +0,0 @@ -[metadata] -description-file = README.rst diff --git a/setup.py b/setup.py deleted file mode 100644 index cc4a64928..000000000 --- a/setup.py +++ /dev/null @@ -1,99 +0,0 @@ -import re -from setuptools import setup - - -def version(): - with open("kernel_tuner/__init__.py") as fp: - match = re.search(r"__version__\s*=\s*['\"]([^'\"]+)", fp.read()) - - if not match: - raise RuntimeError("unable to find __version__ string in __init__.py") - - return match[1] - - -def readme(): - with open("README.rst") as 
f: - return f.read() - - -setup( - name="kernel_tuner", - version=version(), - author="Ben van Werkhoven", - author_email="b.vanwerkhoven@esciencecenter.nl", - description=("An easy to use CUDA/OpenCL kernel tuner in Python"), - license="Apache 2.0", - keywords="auto-tuning gpu computing pycuda cuda pyopencl opencl", - url="https://KernelTuner.github.io/kernel_tuner/", - include_package_data=True, # use MANIFEST.in during install - project_urls={ - "Documentation": "https://KernelTuner.github.io/kernel_tuner/", - "Source": "https://github.com/KernelTuner/kernel_tuner", - "Tracker": "https://github.com/KernelTuner/kernel_tuner/issues", - }, - packages=[ - "kernel_tuner", - "kernel_tuner.backends", - "kernel_tuner.energy", - "kernel_tuner.observers", - "kernel_tuner.runners", - "kernel_tuner.strategies", - ], - long_description=readme(), - long_description_content_type="text/x-rst", - classifiers=[ - "Environment :: Console", - "Intended Audience :: Developers", - "Intended Audience :: Science/Research", - "Intended Audience :: Education", - "License :: OSI Approved :: Apache Software License", - "Natural Language :: English", - "Operating System :: POSIX :: Linux", - "Programming Language :: Python :: 3.7", - "Programming Language :: Python :: 3.8", - "Programming Language :: Python :: 3.9", - "Topic :: Scientific/Engineering", - "Topic :: Software Development", - "Topic :: System :: Distributed Computing", - "Development Status :: 5 - Production/Stable", - ], - install_requires=[ - "numpy>=1.13.3,<1.24.0", - "scipy>=1.8.1", - "jsonschema", - "python-constraint", - "xmltodict", - ], - extras_require={ - "doc": [ - "sphinx", - "sphinx_rtd_theme", - "nbsphinx", - "pytest", - "ipython", - "markupsafe==2.0.1", - ], - "cuda": ["pycuda", "nvidia-ml-py", "pynvml>=11.4.1"], - "opencl": ["pyopencl"], - "cuda_opencl": ["pycuda", "pyopencl"], - "hip": ["pyhip-interface"], - "tutorial": ["jupyter", "matplotlib", "pandas"], - "dev": [ - "numpy>=1.13.3", - 
"scipy>=0.18.1", - "mock>=2.0.0", - "pytest>=3.0.3", - "Sphinx>=1.4.8", - "scikit-learn>=0.24.2", - "scikit-optimize>=0.8.1", - "sphinx-rtd-theme>=0.1.9", - "nbsphinx>=0.2.13", - "jupyter>=1.0.0", - "matplotlib>=1.5.3", - "pandas>=0.19.1", - "pylint>=1.7.1", - "bayesian-optimization>=1.0.1", - ], - }, -) diff --git a/test/strategies/test_bayesian_optimization.py b/test/strategies/test_bayesian_optimization.py index d7d7d5986..dd206a37b 100644 --- a/test/strategies/test_bayesian_optimization.py +++ b/test/strategies/test_bayesian_optimization.py @@ -1,18 +1,19 @@ -import enum import itertools -from re import L +from collections import namedtuple from random import uniform as randfloat + import numpy as np -from collections import OrderedDict, namedtuple +from pytest import raises + from kernel_tuner.interface import Options from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies import bayes_opt from kernel_tuner.strategies.bayes_opt import BayesianOptimization from kernel_tuner.strategies.common import CostFunc -tune_params = OrderedDict() -tune_params["x"] = [1, 2, 3] -tune_params["y"] = [4, 5, 6] +tune_params = dict() +tune_params["x"] = [1, 2] +tune_params["y"] = [4.1, 5, 6.9] tune_params["z"] = [7] strategy_options = dict(popsize=0, max_fevals=10) @@ -75,6 +76,21 @@ def test_bo_initialization(): assert len(BO.observations) == len(pruned_parameter_space) assert BO.current_optimum == np.PINF +def test_bo_initial_sample_lhs(): + sample = BO.draw_latin_hypercube_samples(num_samples=1) + print(sample) + assert isinstance(sample, list) + assert len(sample) == 1 + assert isinstance(sample[0], tuple) + assert len(sample[0]) == 2 + assert isinstance(sample[0][0], tuple) + assert isinstance(sample[0][1], int) + assert len(sample[0][0]) == 2 # tune_params["z"] is dropped because it only has a single value + assert isinstance(sample[0][0][0], float) + samples = BO.draw_latin_hypercube_samples(num_samples=3) + assert len(samples) == 3 + with 
raises(ValueError): + samples = BO.draw_latin_hypercube_samples(num_samples=30) def test_bo_is_better_than(): BO.opt_direction = 'max' diff --git a/test/strategies/test_common.py b/test/strategies/test_common.py index 7bbd8f892..29ead8615 100644 --- a/test/strategies/test_common.py +++ b/test/strategies/test_common.py @@ -1,10 +1,9 @@ import sys -from collections import OrderedDict from time import perf_counter +from kernel_tuner.interface import Options from kernel_tuner.searchspace import Searchspace from kernel_tuner.strategies import common -from kernel_tuner.interface import Options from kernel_tuner.strategies.common import CostFunc try: @@ -23,7 +22,7 @@ def fake_runner(): return runner -tune_params = OrderedDict([("x", [1, 2, 3]), ("y", [4, 5, 6])]) +tune_params = dict([("x", [1, 2, 3]), ("y", [4, 5, 6])]) def test_cost_func(): @@ -32,13 +31,13 @@ def test_cost_func(): restrictions=None, strategy_options={}, cache={}, unique_results={}, objective="time", objective_higher_is_better=False, metrics=None) runner = fake_runner() - results = [] time = CostFunc(Searchspace(tune_params, None, 1024), tuning_options, runner)(x) assert time == 5 # check if restrictions are properly handled - restrictions = lambda _: False + def restrictions(_): + return False tuning_options = Options(scaling=False, snap=False, tune_params=tune_params, restrictions=restrictions, strategy_options={}, verbose=True, cache={}, unique_results={}, diff --git a/test/strategies/test_genetic_algorithm.py b/test/strategies/test_genetic_algorithm.py index b41334242..cb07f8d7f 100644 --- a/test/strategies/test_genetic_algorithm.py +++ b/test/strategies/test_genetic_algorithm.py @@ -1,9 +1,7 @@ -from collections import OrderedDict -from kernel_tuner.strategies import genetic_algorithm as ga -from kernel_tuner.interface import Options from kernel_tuner.searchspace import Searchspace +from kernel_tuner.strategies import genetic_algorithm as ga -tune_params = OrderedDict() +tune_params = dict() 
tune_params["x"] = [1, 2, 3] tune_params["y"] = [4, 5, 6] diff --git a/test/strategies/test_strategies.py b/test/strategies/test_strategies.py index c1b4c0936..395cf2bf9 100644 --- a/test/strategies/test_strategies.py +++ b/test/strategies/test_strategies.py @@ -1,12 +1,11 @@ -from collections import OrderedDict import os -import pytest import numpy as np +import pytest import kernel_tuner -from kernel_tuner.interface import strategy_map from kernel_tuner import util +from kernel_tuner.interface import strategy_map cache_filename = os.path.dirname(os.path.realpath(__file__)) + "/../test_cache_file.json" @@ -28,7 +27,7 @@ def vector_add(): n = np.int32(size) args = [c, a, b, n] - tune_params = OrderedDict() + tune_params = dict() tune_params["block_size_x"] = [128 + 64 * i for i in range(15)] return ["vector_add", kernel_string, size, args, tune_params] diff --git a/test/test_common.py b/test/test_common.py index c9d4bfcc5..132068843 100644 --- a/test/test_common.py +++ b/test/test_common.py @@ -1,15 +1,14 @@ -from collections import OrderedDict - import random + import numpy as np -from kernel_tuner.interface import Options import kernel_tuner.strategies.common as common +from kernel_tuner.interface import Options from kernel_tuner.searchspace import Searchspace def test_get_bounds_x0_eps(): - tune_params = OrderedDict() + tune_params = dict() tune_params['x'] = [0, 1, 2, 3, 4] searchspace = Searchspace(tune_params, [], 1024) @@ -30,7 +29,7 @@ def test_get_bounds_x0_eps(): def test_get_bounds(): - tune_params = OrderedDict() + tune_params = dict() tune_params['x'] = [0, 1, 2, 3, 4] tune_params['y'] = [i for i in range(0, 10000, 100)] tune_params['z'] = [-11.2, 55.67, 123.27] @@ -47,7 +46,7 @@ def test_get_bounds(): def test_snap_to_nearest_config(): - tune_params = OrderedDict() + tune_params = dict() tune_params['x'] = [0, 1, 2, 3, 4, 5] tune_params['y'] = [0, 1, 2, 3, 4, 5] tune_params['z'] = [0, 1, 2, 3, 4, 5] @@ -61,7 +60,7 @@ def test_snap_to_nearest_config(): 
def test_unscale(): - params = OrderedDict() + params = dict() params['x'] = [2**i for i in range(4, 9)] eps = 1.0 / len(params['x']) diff --git a/test/test_cuda_functions.py b/test/test_cuda_functions.py index 0709eecb3..1dc68652d 100644 --- a/test/test_cuda_functions.py +++ b/test/test_cuda_functions.py @@ -1,13 +1,12 @@ import numpy as np - -import kernel_tuner -from .context import skip_if_no_cuda -from .test_runners import env - import pytest + from kernel_tuner import tune_kernel from kernel_tuner.backends import nvcuda -from kernel_tuner.core import KernelSource, KernelInstance +from kernel_tuner.core import KernelInstance, KernelSource + +from .context import skip_if_no_cuda +from .test_runners import env # noqa: F401 try: from cuda import cuda diff --git a/test/test_cupy_functions.py b/test/test_cupy_functions.py index a505b385c..4bb4d16f4 100644 --- a/test/test_cupy_functions.py +++ b/test/test_cupy_functions.py @@ -1,7 +1,9 @@ import kernel_tuner + from .context import skip_if_no_cupy -from .test_runners import env +from .test_runners import env # noqa: F401 + @skip_if_no_cupy def test_tune_kernel(env): diff --git a/test/test_energy.py b/test/test_energy.py index f25233504..187ac1cdc 100644 --- a/test/test_energy.py +++ b/test/test_energy.py @@ -1,7 +1,8 @@ import os -from .context import skip_if_no_pycuda, skip_if_no_pynvml + from kernel_tuner.energy import energy +from .context import skip_if_no_pycuda, skip_if_no_pynvml cache_filename = os.path.dirname(os.path.realpath(__file__)) + "/synthetic_fp32_cache_NVIDIA_RTX_A4000.json" @@ -10,5 +11,6 @@ def test_create_power_frequency_model(): ridge_frequency, freqs, nvml_power, fitted_params, scaling = energy.create_power_frequency_model(cache=cache_filename, simulation_mode=True) - assert ridge_frequency == 1350 - + target_value = 1350 + tolerance = 0.05 + assert target_value * (1-tolerance) <= ridge_frequency <= target_value * (1+tolerance) diff --git a/test/test_file_utils.py b/test/test_file_utils.py 
index bc16939a2..e84e00da4 100644 --- a/test/test_file_utils.py +++ b/test/test_file_utils.py @@ -1,12 +1,13 @@ -from kernel_tuner.file_utils import store_output_file, store_metadata_file, output_file_schema -from kernel_tuner.util import delete_temp_file -from .test_integration import fake_results -from .test_runners import env, cache_filename, tune_kernel +import json import pytest -import json from jsonschema import validate +from kernel_tuner.file_utils import output_file_schema, store_metadata_file, store_output_file +from kernel_tuner.util import delete_temp_file + +from .test_runners import cache_filename, env, tune_kernel # noqa: F401 + def test_store_output_file(env): # setup variables diff --git a/test/test_hip_functions.py b/test/test_hip_functions.py index ce3eb0642..b55230036 100644 --- a/test/test_hip_functions.py +++ b/test/test_hip_functions.py @@ -1,15 +1,15 @@ -import numpy as np import ctypes -from .context import skip_if_no_pyhip -from collections import OrderedDict +import numpy as np import pytest -import kernel_tuner + from kernel_tuner import tune_kernel from kernel_tuner.backends import hip as kt_hip -from kernel_tuner.core import KernelSource, KernelInstance +from kernel_tuner.core import KernelInstance, KernelSource + +from .context import skip_if_no_pyhip -try: +try: from pyhip import hip, hiprtc hip_present = True except ImportError: @@ -33,12 +33,13 @@ def env(): n = np.int32(size) args = [c, a, b, n] - tune_params = OrderedDict() + tune_params = dict() tune_params["block_size_x"] = [128 + 64 * i for i in range(15)] return ["vector_add", kernel_string, size, args, tune_params] -@skip_if_no_pyhip +# @skip_if_no_pyhip +@pytest.mark.skip("Currently broken due to pull request #216, to be fixed in issue #217") def test_ready_argument_list(): size = 1000 @@ -64,11 +65,12 @@ def __getitem__(self, key): ctypes.c_int(a), b.ctypes.data_as(ctypes.POINTER(ctypes.c_float)), ctypes.c_bool(c)) - + assert(gpu_args[1] == argListStructure[1]) 
assert(gpu_args[3] == argListStructure[3]) -@skip_if_no_pyhip +# @skip_if_no_pyhip +@pytest.mark.skip("Currently broken due to pull request #216, to be fixed in issue #217") def test_compile(): kernel_string = """ @@ -117,7 +119,8 @@ def test_memcpy_htod(): assert all(output == x) -@skip_if_no_pyhip +# @skip_if_no_pyhip +@pytest.mark.skip("Currently broken due to pull request #216, to be fixed in issue #217") def test_copy_constant_memory_args(): kernel_string = """ __constant__ float my_constant_data[100]; @@ -141,7 +144,7 @@ def test_copy_constant_memory_args(): output = np.full(100, 0).astype(np.float32) gpu_args = dev.ready_argument_list([output]) - + threads = (100, 1, 1) grid = (1, 1, 1) dev.run_kernel(kernel, gpu_args, threads, grid) diff --git a/test/test_hyper.py b/test/test_hyper.py index b6ac83f61..9d1dc55df 100644 --- a/test/test_hyper.py +++ b/test/test_hyper.py @@ -1,13 +1,11 @@ -from collections import OrderedDict - from kernel_tuner.hyper import tune_hyper_params -from .test_runners import env, cache_filename +from .test_runners import cache_filename, env # noqa: F401 def test_hyper(env): - hyper_params = OrderedDict() + hyper_params = dict() hyper_params["popsize"] = [5] hyper_params["maxiter"] = [5, 10] hyper_params["method"] = ["uniform"] diff --git a/test/test_observers.py b/test/test_observers.py index b4b55041c..d881fed74 100644 --- a/test/test_observers.py +++ b/test/test_observers.py @@ -1,12 +1,11 @@ -import pytest import kernel_tuner from kernel_tuner.observers.nvml import NVMLObserver from kernel_tuner.observers.observer import BenchmarkObserver from .context import skip_if_no_pycuda, skip_if_no_pynvml -from .test_runners import env +from .test_runners import env # noqa: F401 @skip_if_no_pycuda diff --git a/test/test_opencl_functions.py b/test/test_opencl_functions.py index de370ae53..644c5dc08 100644 --- a/test/test_opencl_functions.py +++ b/test/test_opencl_functions.py @@ -1,11 +1,9 @@ -from collections import OrderedDict - -import 
pytest import numpy as np +import pytest import kernel_tuner from kernel_tuner.backends import opencl -from kernel_tuner.core import KernelSource, KernelInstance +from kernel_tuner.core import KernelInstance, KernelSource from .context import skip_if_no_opencl @@ -88,7 +86,7 @@ def env(): n = np.int32(size) args = [c, a, b, n] - tune_params = OrderedDict() + tune_params = dict() tune_params["block_size_x"] = [32, 64, 128] return ["vector_add", kernel_string, size, args, tune_params] diff --git a/test/test_runners.py b/test/test_runners.py index cb0e03c7a..527c1d252 100644 --- a/test/test_runners.py +++ b/test/test_runners.py @@ -1,12 +1,11 @@ import os import time -from collections import OrderedDict import numpy as np import pytest -from kernel_tuner import util, tune_kernel, core -from kernel_tuner.interface import Options, _kernel_options, _device_options, _tuning_options +from kernel_tuner import core, tune_kernel, util +from kernel_tuner.interface import Options, _device_options, _kernel_options, _tuning_options from kernel_tuner.runners.sequential import SequentialRunner from .context import skip_if_no_pycuda @@ -33,7 +32,7 @@ def env(): n = np.int32(size) args = [c, a, b, n] - tune_params = OrderedDict() + tune_params = dict() tune_params["block_size_x"] = [128 + 64 * i for i in range(15)] return ["vector_add", kernel_string, size, args, tune_params] @@ -262,7 +261,7 @@ def test_runner(env): iterations = 7 verbose = False objective = "GFLOP/s" - metrics = OrderedDict({objective: lambda p: 1}) + metrics = dict({objective: lambda p: 1}) opts = locals() kernel_options = Options([(k, opts.get(k, None)) for k in _kernel_options.keys()]) diff --git a/test/test_searchspace.py b/test/test_searchspace.py index 2a94a5059..e6a2e3d85 100644 --- a/test/test_searchspace.py +++ b/test/test_searchspace.py @@ -1,34 +1,35 @@ from __future__ import print_function -from collections import OrderedDict -from random import randrange + from math import ceil +from random import 
randrange try: from mock import patch except ImportError: from unittest.mock import patch +import numpy as np +from constraint import ExactSumConstraint, FunctionConstraint + from kernel_tuner.interface import Options from kernel_tuner.searchspace import Searchspace -from constraint import ExactSumConstraint, FunctionConstraint -import numpy as np - max_threads = 1024 value_error_expectation_message = "Expected a ValueError to be raised" -# 9 combinations without restrictions -simple_tune_params = OrderedDict() +# 16 combinations, of 6 which pass the restrictions +simple_tune_params = dict() simple_tune_params["x"] = [1, 1.5, 2, 3] simple_tune_params["y"] = [4, 5.5] simple_tune_params["z"] = ["string_1", "string_2"] -restrict = [lambda x, y, z: x != 1.5] +restrict = ["y % x == 1"] simple_tuning_options = Options(dict(restrictions=restrict, tune_params=simple_tune_params)) simple_searchspace = Searchspace(simple_tune_params, restrict, max_threads) +simple_searchspace_bruteforce = Searchspace(simple_tune_params, restrict, max_threads, framework="bruteforce") # 3.1 million combinations, of which 10600 pass the restrictions num_layers = 42 -tune_params = OrderedDict() +tune_params = dict() tune_params["gpu1"] = list(range(num_layers)) tune_params["gpu2"] = list(range(num_layers)) tune_params["gpu3"] = list(range(num_layers)) @@ -37,31 +38,40 @@ # each GPU must have at least one layer and the sum of all layers must not exceed the total number of layers -def min_func(gpu1, gpu2, gpu3, gpu4): +def _min_func(gpu1, gpu2, gpu3, gpu4): return min([gpu1, gpu2, gpu3, gpu4]) >= 1 # test three different types of restrictions: python-constraint, a function and a string -restrict = [ExactSumConstraint(num_layers), FunctionConstraint(min_func)] +restrict = [ExactSumConstraint(num_layers), FunctionConstraint(_min_func)] # create the searchspace object searchspace = Searchspace(tune_params, restrict, max_threads) +searchspace_bruteforce = Searchspace(tune_params, restrict, 
max_threads, framework="bruteforce") # 74088 combinations intended to test whether sorting works -sort_tune_params = OrderedDict() +sort_tune_params = dict() sort_tune_params["gpu1"] = list(range(num_layers)) sort_tune_params["gpu2"] = list(range(num_layers)) sort_tune_params["gpu3"] = list(range(num_layers)) searchspace_sort = Searchspace(sort_tune_params, [], max_threads) + +def compare_two_searchspace_objects(searchspace_1: Searchspace, searchspace_2: Searchspace): + """Helper test function to assert that two searchspace objects are identical in outcome.""" + assert searchspace_1.size == searchspace_2.size + for dict_config in searchspace_1.get_list_dict().keys(): + assert searchspace_2.is_param_config_valid(dict_config) + + def test_size(): - """test that the searchspace after applying restrictions is the expected size""" - assert simple_searchspace.size == 12 + """Test that the searchspace after applying restrictions is the expected size.""" + assert simple_searchspace.size == 6 assert searchspace.size == 10660 def test_internal_representation(): - """test that the list and dict representations match in size, type and elements""" + """Test that the list and dict representations match in size, type and elements.""" assert searchspace.size == len(searchspace.list) assert searchspace.size == len(searchspace.get_list_dict().keys()) assert isinstance(searchspace.list[0], tuple) @@ -69,9 +79,13 @@ def test_internal_representation(): for index, dict_config in enumerate(searchspace.get_list_dict().keys()): assert dict_config == searchspace.list[index] +def test_against_bruteforce(): + """Tests the default Searchspace framework against bruteforcing the searchspace.""" + compare_two_searchspace_objects(simple_searchspace, simple_searchspace_bruteforce) + compare_two_searchspace_objects(searchspace, searchspace_bruteforce) def test_sort(): - """test that the sort searchspace option works as expected""" + """Test that the sort searchspace option works as expected.""" 
simple_searchspace_sort = Searchspace( simple_tuning_options.tune_params, simple_tuning_options.restrictions, @@ -79,18 +93,12 @@ def test_sort(): ) expected = [ - (1, 4, "string_1"), - (1, 4, "string_2"), - (1, 5.5, "string_1"), - (1, 5.5, "string_2"), - (2, 4, "string_1"), - (2, 4, "string_2"), - (2, 5.5, "string_1"), - (2, 5.5, "string_2"), + (1.5, 4, "string_1"), + (1.5, 4, "string_2"), + (1.5, 5.5, "string_1"), + (1.5, 5.5, "string_2"), (3, 4, "string_1"), (3, 4, "string_2"), - (3, 5.5, "string_1"), - (3, 5.5, "string_2"), ] # Check if lists match without considering order @@ -109,7 +117,7 @@ def test_sort(): def test_sort_reversed(): - """test that the sort searchspace option with the sort_last_param_first option enabled works as expected""" + """Test that the sort searchspace option with the sort_last_param_first option enabled works as expected.""" simple_searchspace_sort_reversed = Searchspace( simple_tuning_options.tune_params, simple_tuning_options.restrictions, @@ -117,18 +125,12 @@ def test_sort_reversed(): ) expected = [ - (1, 4, "string_1"), - (2, 4, "string_1"), + (1.5, 4, "string_1"), (3, 4, "string_1"), - (1, 5.5, "string_1"), - (2, 5.5, "string_1"), - (3, 5.5, "string_1"), - (1, 4, "string_2"), - (2, 4, "string_2"), + (1.5, 5.5, "string_1"), + (1.5, 4, "string_2"), (3, 4, "string_2"), - (1, 5.5, "string_2"), - (2, 5.5, "string_2"), - (3, 5.5, "string_2"), + (1.5, 5.5, "string_2"), ] # Check if lists match without considering order @@ -147,7 +149,7 @@ def test_sort_reversed(): def test_index_lookup(): - """test that index lookups are consistent for ~1% of the searchspace""" + """Test that index lookups are consistent for ~1% of the searchspace.""" size = searchspace.size for _ in range(ceil(size / 100)): random_index = randrange(0, size) @@ -157,7 +159,7 @@ def test_index_lookup(): def test_param_index_lookup(): - """test the parameter index lookup for a parameter config is as expected""" + """Test the parameter index lookup for a parameter config 
is as expected.""" first = tuple([1, 4, "string_1"]) last = tuple([3, 5.5, "string_2"]) assert simple_searchspace.get_param_indices(first) == (0, 0, 0) @@ -165,7 +167,7 @@ def test_param_index_lookup(): def test_random_sample(): - """test whether the random sample indices exists and are unique, and if it throws an error for too many samples""" + """Test whether the random sample indices exists and are unique, and if it throws an error for too many samples.""" random_sample_indices = searchspace.get_random_sample_indices(100) assert len(random_sample_indices) == 100 for index in random_sample_indices: @@ -222,65 +224,63 @@ def __test_neighbors(param_config: tuple, expected_neighbors: list, neighbor_met def test_neighbors_hamming(): - """test whether the neighbors with Hamming distance are as expected""" + """Test whether the neighbors with Hamming distance are as expected.""" test_config = tuple([1, 4, "string_1"]) expected_neighbors = [ - (2, 4, "string_1"), - (3, 4, "string_1"), - (1, 5.5, "string_1"), - (1, 4, "string_2"), + (1.5, 4, 'string_1'), + (3, 4, 'string_1'), ] + __test_neighbors(test_config, expected_neighbors, "Hamming") def test_neighbors_strictlyadjacent(): - """test whether the strictly adjacent neighbors are as expected""" + """Test whether the strictly adjacent neighbors are as expected.""" test_config = tuple([1, 4, "string_1"]) expected_neighbors = [ - (1, 5.5, "string_2"), - (1, 5.5, "string_1"), - (1, 4, "string_2"), + (1.5, 4, 'string_1'), + (1.5, 4, 'string_2'), + (1.5, 5.5, 'string_1'), + (1.5, 5.5, 'string_2'), ] __test_neighbors(test_config, expected_neighbors, "strictly-adjacent") def test_neighbors_adjacent(): - """test whether the adjacent neighbors are as expected""" + """Test whether the adjacent neighbors are as expected.""" test_config = tuple([1, 4, "string_1"]) expected_neighbors = [ - (2, 5.5, "string_2"), - (1, 5.5, "string_2"), - (2, 5.5, "string_1"), - (1, 5.5, "string_1"), - (2, 4, "string_2"), - (1, 4, "string_2"), - (2, 4, 
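The two expected orderings in the sort tests are consistent with plain lexicographic sorting of the parameter tuples, and with sorting on the reversed tuple when `sort_last_param_first` is enabled. A plausible model of that behavior (the actual `Searchspace` sort may be implemented differently):

```python
# the six configurations that satisfy "y % x == 1" (see the test module)
valid_configs = [
    (1.5, 4, "string_1"), (1.5, 4, "string_2"),
    (1.5, 5.5, "string_1"), (1.5, 5.5, "string_2"),
    (3, 4, "string_1"), (3, 4, "string_2"),
]

# default sort: the first parameter is the primary sort key
default_order = sorted(valid_configs)

# sort_last_param_first: the last parameter becomes the primary sort key
reversed_order = sorted(valid_configs, key=lambda config: config[::-1])

print(reversed_order[0], reversed_order[-1])
```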
"string_1"), + (1.5, 4, 'string_1'), + (1.5, 4, 'string_2'), + (1.5, 5.5, 'string_1'), + (1.5, 5.5, 'string_2'), ] __test_neighbors(test_config, expected_neighbors, "adjacent") def test_neighbors_fictious(): - """test whether the neighbors are as expected for a fictious parameter configuration (i.e. not existing in the search space due to restrictions)""" + """Test whether the neighbors are as expected for a fictitious parameter configuration (i.e. not existing in the search space due to restrictions).""" test_config = tuple([1.5, 4, "string_1"]) expected_neighbors_hamming = [ - (1, 4, "string_1"), - (2, 4, "string_1"), - (3, 4, "string_1"), + (1.5, 4, 'string_2'), + (1.5, 5.5, 'string_1'), + (3, 4, 'string_1'), ] expected_neighbors_strictlyadjacent = [ - (2, 5.5, "string_2"), - (1, 5.5, "string_2"), - (2, 5.5, "string_1"), - (1, 5.5, "string_1"), - (2, 4, "string_2"), - (1, 4, "string_2"), - (2, 4, "string_1"), - (1, 4, "string_1"), + (1.5, 5.5, 'string_2'), + (1.5, 5.5, 'string_1'), + (1.5, 4, 'string_2') ] - expected_neighbors_adjacent = expected_neighbors_strictlyadjacent + expected_neighbors_adjacent = [ + (1.5, 5.5, 'string_2'), + (1.5, 5.5, 'string_1'), + (1.5, 4, 'string_2'), + (3, 4, 'string_1'), + (3, 4, 'string_2'), + ] __test_neighbors_direct(test_config, expected_neighbors_hamming, "Hamming") __test_neighbors_direct(test_config, expected_neighbors_strictlyadjacent, "strictly-adjacent") @@ -288,7 +288,7 @@ def test_neighbors_fictious(): def test_neighbors_cached(): - """test whether retrieving a set of neighbors twice returns the cached version""" + """Test whether retrieving a set of neighbors twice returns the cached version.""" simple_searchspace_duplicate = Searchspace( simple_tuning_options.tune_params, simple_tuning_options.restrictions, @@ -296,7 +296,7 @@ def test_neighbors_cached(): neighbor_method="Hamming" ) - test_configs = simple_searchspace_duplicate.get_random_sample(10) + test_configs = simple_searchspace_duplicate.get_random_sample(5) for 
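The neighbor methods exercised above can be modeled as: Hamming distance (exactly one parameter differs) and strictly-adjacent (every parameter moves at most one position in its value list). This hypothetical re-implementation reproduces the expected neighbor lists for `(1, 4, "string_1")`; it models the observed behavior and is not `Searchspace`'s actual code:

```python
from itertools import product

tune_params = {"x": [1, 1.5, 2, 3], "y": [4, 5.5], "z": ["string_1", "string_2"]}
values = list(tune_params.values())
valid = [c for c in product(*values) if c[1] % c[0] == 1]

def hamming_neighbors(config):
    # exactly one parameter differs (also excludes the config itself)
    return [c for c in valid if sum(a != b for a, b in zip(c, config)) == 1]

def strictly_adjacent_neighbors(config):
    # every parameter moves at most one position in its value list
    pos = [vals.index(v) for vals, v in zip(values, config)]
    return [
        c for c in valid
        if c != config
        and all(abs(vals.index(v) - p) <= 1 for vals, v, p in zip(values, c, pos))
    ]

print(hamming_neighbors((1, 4, "string_1")))
```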
test_config in test_configs: assert not simple_searchspace_duplicate.are_neighbors_indices_cached(test_config) neighbors = simple_searchspace_duplicate.get_neighbors(test_config) @@ -306,13 +306,12 @@ def test_neighbors_cached(): def test_param_neighbors(): - """test whether for a given parameter configuration and index the correct neighboring parameters are returned""" + """Test whether for a given parameter configuration and index the correct neighboring parameters are returned.""" test_config = tuple([1.5, 4, "string_1"]) - expected_neighbors = [[1, 2], [5.5], ["string_2"]] + expected_neighbors = [[3], [5.5], ["string_2"]] for index in range(3): neighbor_params = simple_searchspace.get_param_neighbors(test_config, index, "adjacent", randomize=False) - print(neighbor_params) assert len(neighbor_params) == len(expected_neighbors[index]) for param_index, param in enumerate(neighbor_params): assert param == expected_neighbors[index][param_index] @@ -320,17 +319,14 @@ def test_param_neighbors(): @patch("kernel_tuner.searchspace.choice", lambda x: x[0]) def test_order_param_configs(): - """test whether the ordering of parameter configurations according to parameter index happens as expected""" + """Test whether the ordering of parameter configurations according to parameter index happens as expected.""" test_order = [1, 2, 0] test_config = tuple([1, 4, "string_1"]) expected_order = [ - (2, 5.5, "string_2"), - (2, 4, "string_2"), - (1, 4, "string_2"), - (2, 4, "string_1"), - (2, 5.5, "string_1"), - (1, 5.5, "string_1"), - (1, 5.5, "string_2"), + (1.5, 5.5, 'string_2'), + (1.5, 4, 'string_2'), + (1.5, 4, 'string_1'), + (1.5, 5.5, 'string_1') ] neighbors = simple_searchspace.get_neighbors_no_cache(test_config, "adjacent") @@ -370,6 +366,7 @@ def test_order_param_configs(): # test usecase ordered_neighbors = simple_searchspace.order_param_configs(neighbors, test_order, randomize_in_params=False) for index, expected_param_config in enumerate(expected_order): + assert 
expected_param_config in ordered_neighbors assert expected_param_config == ordered_neighbors[index] # test randomize in params @@ -379,13 +376,57 @@ def test_order_param_configs(): assert len(ordered_neighbors) == len(expected_order) -def test_max_threads(): +def test_small_searchspace(): + """Test a small real-world searchspace and the usage of the `max_threads` parameter.""" max_threads = 1024 tune_params = dict() - tune_params["block_size_x"] = [512, 1024] - tune_params["block_size_y"] = [1] - searchspace = Searchspace(tune_params, None, max_threads) - - print(searchspace.list) - - assert len(searchspace.list) > 1 + tune_params["block_size_x"] = [1, 2, 4, 8, 16] + [32*i for i in range(1,33)] + tune_params["block_size_y"] = [2**i for i in range(6)] + tune_params["tile_size_x"] = [i for i in range(1,11)] + restrictions = [ + "block_size_x*block_size_y >= 32", + f"block_size_x*block_size_y <= {max_threads}", + ] + searchspace = Searchspace(tune_params, restrictions, max_threads) + searchspace_bruteforce = Searchspace(tune_params, restrictions, max_threads, framework="bruteforce") + compare_two_searchspace_objects(searchspace, searchspace_bruteforce) + +def test_full_searchspace(compare_against_bruteforce=False): + """Tests a full real-world searchspace (expdist). 
If `compare_against_bruteforce` is set, the searchspace will be bruteforced to compare against; this can take a long time!""" + # device characteristics + dev = { + 'device_name': 'NVIDIA A40', + 'max_threads': 1024, + 'max_shared_memory_per_block': 49152, + 'max_shared_memory': 102400 + } + + # tunable parameters and restrictions + tune_params = dict() + tune_params["block_size_x"] = [1, 2, 4, 8, 16] + [32*i for i in range(1,33)] + tune_params["block_size_y"] = [2**i for i in range(6)] + tune_params["tile_size_x"] = [i for i in range(1,11)] + tune_params["tile_size_y"] = [i for i in range(1,11)] + tune_params["temporal_tiling_factor"] = [i for i in range(1,11)] + max_tfactor = max(tune_params["temporal_tiling_factor"]) + tune_params["max_tfactor"] = [max_tfactor] + tune_params["loop_unroll_factor_t"] = [i for i in range(1,max_tfactor+1)] + tune_params["sh_power"] = [0,1] + tune_params["blocks_per_sm"] = [0,1,2,3,4] + + restrictions = [ + "block_size_x*block_size_y >= 32", + "temporal_tiling_factor % loop_unroll_factor_t == 0", + f"block_size_x*block_size_y <= {dev['max_threads']}", + f"(block_size_x*tile_size_x + temporal_tiling_factor * 2) * (block_size_y*tile_size_y + temporal_tiling_factor * 2) * (2+sh_power) * 4 <= {dev['max_shared_memory_per_block']}", + f"blocks_per_sm == 0 or (((block_size_x*tile_size_x + temporal_tiling_factor * 2) * (block_size_y*tile_size_y + temporal_tiling_factor * 2) * (2+sh_power) * 4) * blocks_per_sm <= {dev['max_shared_memory']})" + ] + + # build the searchspace + searchspace = Searchspace(tune_params, restrictions, max_threads=dev['max_threads']) + + if compare_against_bruteforce: + searchspace_bruteforce = Searchspace(tune_params, restrictions, max_threads=dev['max_threads'], framework='bruteforce') + compare_two_searchspace_objects(searchspace, searchspace_bruteforce) + else: + assert searchspace.size == len(searchspace.list) == 349853 diff --git a/test/test_toml_file.py b/test/test_toml_file.py new file mode 100644 index 
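`test_small_searchspace` only compares against the bruteforce framework, but its space is small enough to count independently: brute-forcing the two block-size restrictions over the 37 × 6 × 10 = 2220 raw combinations is cheap. The count of 780 below is derived here as a cross-check; it is not asserted anywhere in the test itself:

```python
from itertools import product

max_threads = 1024
tune_params = {
    "block_size_x": [1, 2, 4, 8, 16] + [32 * i for i in range(1, 33)],
    "block_size_y": [2**i for i in range(6)],
    "tile_size_x": list(range(1, 11)),
}

# brute-force the two restrictions used in test_small_searchspace
valid = [
    (bsx, bsy, tsx)
    for bsx, bsy, tsx in product(*tune_params.values())
    if 32 <= bsx * bsy <= max_threads
]

print(len(valid))  # 780 configurations out of 2220
```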
000000000..29b353ae9 --- /dev/null +++ b/test/test_toml_file.py @@ -0,0 +1,72 @@ +"""Tests for release information.""" + +from pathlib import Path + +import tomli + +package_root = Path(".").parent.parent +pyproject_toml_path = package_root / "pyproject.toml" +assert pyproject_toml_path.exists() +with pyproject_toml_path.open(mode="rb") as fp: + pyproject = tomli.load(fp) + project = pyproject["project"] if "project" in pyproject else pyproject["tool"]["poetry"] + + +def test_read(): + """Test whether the contents have been read correctly and the required keys are in place.""" + assert isinstance(pyproject, dict) + assert "build-system" in pyproject + + +def test_name(): + """Ensure the name is consistent.""" + assert "name" in project + assert project["name"] == "kernel_tuner" + + +def test_versioning(): + """Test whether the versioning is PEP440 compliant.""" + from pep440 import is_canonical + + assert "version" in project + assert is_canonical(project["version"]) + + +def test_authors(): + """Ensure the authors are specified.""" + assert "authors" in project + assert len(project["authors"]) > 0 + + +def test_license(): + """Ensure the license is set and the file exists.""" + assert "license" in project + license = project["license"] + if isinstance(license, dict): + assert "file" in license + license = project["license"]["file"] + assert isinstance(license, str) + assert len(license) > 0 + if license == "LICENSE": + assert Path(package_root / license).exists() + + +def test_readme(): + """Ensure the readme is set and the file exists.""" + assert "readme" in project + readme = project["readme"] + if isinstance(readme, dict): + assert "file" in readme + readme = project["readme"]["file"] + assert isinstance(readme, str) + assert len(readme) > 0 + assert readme[:6] == "README" + assert Path(package_root / readme).exists() + + +def test_project_keys(): + """Check whether the expected keys in [project] or [tool.poetry] are present.""" + assert "description" in 
project + assert "keywords" in project + assert "classifiers" in project + assert "requires-python" in project or "python" in pyproject["tool"]["poetry"]["dependencies"] diff --git a/test/test_util_functions.py b/test/test_util_functions.py index 378bca229..24249ff16 100644 --- a/test/test_util_functions.py +++ b/test/test_util_functions.py @@ -1,31 +1,27 @@ from __future__ import print_function -from collections import OrderedDict -import os import json +import os import warnings import numpy as np import pytest -from .context import skip_if_no_pycuda, skip_if_no_cuda, skip_if_no_opencl - -from kernel_tuner.interface import Options -import kernel_tuner.core as core -import kernel_tuner.backends.pycuda as pycuda import kernel_tuner.backends.nvcuda as nvcuda import kernel_tuner.backends.opencl as opencl +import kernel_tuner.backends.pycuda as pycuda +import kernel_tuner.core as core +from kernel_tuner.interface import Options from kernel_tuner.util import * +from .context import skip_if_no_cuda, skip_if_no_opencl, skip_if_no_pycuda + block_size_names = ["block_size_x", "block_size_y", "block_size_z"] def test_get_grid_dimensions1(): problem_size = (1024, 1024, 1) - params = { - "block_x": 41, - "block_y": 37 - } + params = {"block_x": 41, "block_y": 37} grid_div = (["block_x"], ["block_y"], None) @@ -51,7 +47,9 @@ def test_get_grid_dimensions1(): assert grid[1] == 28 assert grid[2] == 1 - grid = get_grid_dimensions(problem_size, params, (None, lambda p: p["block_x"], lambda p: p["block_y"] * p["block_x"]), block_size_names) + grid = get_grid_dimensions( + problem_size, params, (None, lambda p: p["block_x"], lambda p: p["block_y"] * p["block_x"]), block_size_names + ) assert grid[0] == 1024 assert grid[1] == 25 @@ -60,10 +58,7 @@ def test_get_grid_dimensions1(): def test_get_grid_dimensions2(): problem_size = (1024, 1024, 1) - params = { - "block_x": 41, - "block_y": 37 - } + params = {"block_x": 41, "block_y": 37} grid_div_x = ["block_x*8"] grid_div_y = 
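`test_versioning` delegates to `pep440.is_canonical`; the underlying check is the canonical-form regular expression given in PEP 440 itself, sketched here for reference (a minimal stand-in, not the `pep440` package's code):

```python
import re

# canonical version form, as specified in PEP 440
CANONICAL = re.compile(
    r"^([1-9][0-9]*!)?(0|[1-9][0-9]*)(\.(0|[1-9][0-9]*))*"
    r"((a|b|rc)(0|[1-9][0-9]*))?(\.post(0|[1-9][0-9]*))?(\.dev(0|[1-9][0-9]*))?$"
)

def is_canonical(version: str) -> bool:
    return CANONICAL.match(version) is not None

print(is_canonical("1.0.3"), is_canonical("v1.0"))  # True False
```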
["(block_y+2)/8"] @@ -76,10 +71,7 @@ def test_get_grid_dimensions2(): def test_get_grid_dimensions3(): problem_size = (1024, 1024, 1) - params = { - "block_x": 41, - "block_y": 37 - } + params = {"block_x": 41, "block_y": 37} grid_div_x = ["block_x", "block_y"] grid_div_y = ["(block_y+2)/8"] @@ -98,10 +90,7 @@ def assert_grid_dimensions(problem_size): def test_get_problem_size1(): problem_size = ("num_blocks_x", "num_blocks_y*3") - params = { - "num_blocks_x": 71, - "num_blocks_y": 57 - } + params = {"num_blocks_x": 71, "num_blocks_y": 57} answer = get_problem_size(problem_size, params) assert answer[0] == 71 @@ -111,9 +100,7 @@ def test_get_problem_size1(): def test_get_problem_size2(): problem_size = "num_blocks_x" - params = { - "num_blocks_x": 71 - } + params = {"num_blocks_x": 71} answer = get_problem_size(problem_size, params) assert answer[0] == 71 @@ -124,16 +111,12 @@ def test_get_problem_size2(): def test_get_problem_size3(): with pytest.raises(TypeError): problem_size = (3.8, "num_blocks_y*3") - params = { - "num_blocks_y": 57 - } + params = {"num_blocks_y": 57} get_problem_size(problem_size, params) def test_get_problem_size4(): - params = { - "num_blocks_x": 71 - } + params = {"num_blocks_x": 71} answer = get_problem_size(lambda p: (p["num_blocks_x"], 1, 13), params) assert answer[0] == 71 @@ -142,11 +125,7 @@ def test_get_problem_size4(): def test_get_thread_block_dimensions(): - - params = { - "block_size_x": 123, - "block_size_y": 257 - } + params = {"block_size_x": 123, "block_size_y": 257} threads = get_thread_block_dimensions(params) assert len(threads) == 3 @@ -167,29 +146,24 @@ def test_prepare_kernel_string(): params["is"] = 8 _, output = prepare_kernel_string("this", kernel, params, grid, threads, block_size_names, "", None) - expected = "#define grid_size_x 3\n" \ - "#define grid_size_y 7\n" \ - "#define block_size_x 1\n" \ - "#define block_size_y 2\n" \ - "#define block_size_z 3\n" \ - "#define is 8\n" \ - "#define kernel_tuner 1\n" \ - 
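The grid-dimension assertions above follow from ceil division of the problem size by the matching block size; `grid_dim` below is a hypothetical helper illustrating that arithmetic (the real `get_grid_dimensions` additionally parses string expressions such as `"block_x*8"`):

```python
from math import ceil

problem_size = (1024, 1024, 1)
params = {"block_x": 41, "block_y": 37}

def grid_dim(size, divisor):
    # ceil-divide by the divisor; no divisor means the size is used as-is
    return ceil(size / divisor) if divisor else size

grid = (
    grid_dim(problem_size[0], params["block_x"]),
    grid_dim(problem_size[1], params["block_y"]),
    grid_dim(problem_size[2], None),
)
print(grid)  # (25, 28, 1)
```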
"#line 1\n" \ - "this is a weird kernel" + expected = ( + "#define grid_size_x 3\n" + "#define grid_size_y 7\n" + "#define block_size_x 1\n" + "#define block_size_y 2\n" + "#define block_size_z 3\n" + "#define is 8\n" + "#define kernel_tuner 1\n" + "#line 1\n" + "this is a weird kernel" + ) assert output == expected # Check custom defines - defines = OrderedDict( - foo=1, - bar="custom", - baz=lambda config: config["is"] * 5) + defines = dict(foo=1, bar="custom", baz=lambda config: config["is"] * 5) _, output = prepare_kernel_string("this", kernel, params, grid, threads, block_size_names, "", defines) - expected = "#define foo 1\n" \ - "#define bar custom\n" \ - "#define baz 40\n" \ - "#line 1\n" \ - "this is a weird kernel" + expected = "#define foo 1\n" "#define bar custom\n" "#define baz 40\n" "#line 1\n" "this is a weird kernel" assert output == expected # Throw exception on invalid name (for instance, a space in the name) @@ -199,7 +173,6 @@ def test_prepare_kernel_string(): def test_prepare_kernel_string_partial_loop_unrolling(): - kernel = """this is a weird kernel(what * language, is this, anyway* C) { #pragma unroll loop_unroll_factor_monkey for monkey in the forest { @@ -216,9 +189,8 @@ def test_prepare_kernel_string_partial_loop_unrolling(): params["loop_unroll_factor_monkey"] = 0 _, output = prepare_kernel_string("this", kernel, params, grid, threads, block_size_names, "CUDA", None) - assert not "constexpr int loop_unroll_factor_monkey" in output - assert not "#pragma unroll loop_unroll_factor_monkey" in output - + assert "constexpr int loop_unroll_factor_monkey" not in output + assert "#pragma unroll loop_unroll_factor_monkey" not in output def test_replace_param_occurrences(): @@ -240,14 +212,15 @@ def test_replace_param_occurrences(): def test_check_restrictions(): - params = { - "a": 7, - "b": 4, - "c": 3 - } + params = {"a": 7, "b": 4, "c": 3} print(params.values()) print(params.keys()) - restrictions = [["a==b+c"], ["a==b+c", "b==b", "a-b==c"], 
["a==b+c", "b!=b", "a-b==c"], lambda p: p["a"] == p["b"] + p["c"]] + restrictions = [ + ["a==b+c"], + ["a==b+c", "b==b", "a-b==c"], + ["a==b+c", "b!=b", "a-b==c"], + lambda p: p["a"] == p["b"] + p["c"], + ] expected = [True, True, False, True] # test the call returns expected for r, e in zip(restrictions, expected): @@ -301,7 +274,7 @@ def test_get_device_interface3(): def test_get_device_interface4(): with pytest.raises(Exception): lang = "blabla" - dev = core.DeviceInterface(lang) + core.DeviceInterface(lang) def assert_user_warning(f, args, substring=None): @@ -327,7 +300,7 @@ def test_check_argument_list1(): numbers[get_global_id(0)] = numbers[get_global_id(0)] * number; } """ - args = [np.int32(5), 'blah', np.array([1, 2, 3])] + args = [np.int32(5), "blah", np.array([1, 2, 3])] try: check_argument_list(kernel_name, kernel_string, args) print("Expected a TypeError to be raised") @@ -445,7 +418,7 @@ def test_check_tune_params_list2(): def test_check_tune_params_list3(): # test that exception is raised when tunable parameter is passed that needs an NVMLObserver for param in ["nvml_pwr_limit", "nvml_gr_clock", "nvml_mem_clock"]: - tune_params = {param:[]} + tune_params = {param: []} with pytest.raises(ValueError, match=r".*NVMLObserver.*"): check_tune_params_list(tune_params, None) with pytest.raises(ValueError, match=r".*NVMLObserver.*"): @@ -453,7 +426,6 @@ def test_check_tune_params_list3(): def test_check_block_size_params_names_list(): - def test_warnings(function, args, number, warning_type): with warnings.catch_warnings(record=True) as w: # Cause all warnings to always be triggered. @@ -491,9 +463,7 @@ def test_get_kernel_string_func(): def gen_kernel(params): return "__global__ void kernel_name() { %s }" % params["block_size_x"] - params = { - "block_size_x": "//do that kernel thing!" - } + params = {"block_size_x": "//do that kernel thing!"} expected = "__global__ void kernel_name() { //do that kernel thing! 
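`test_check_restrictions` mixes lists of expression strings with callables. A minimal model of that contract, assuming a list of strings passes only when every expression holds, which mirrors the expected `[True, True, False, True]` (this is an illustration, not the real `check_restrictions`):

```python
params = {"a": 7, "b": 4, "c": 3}

def passes(restriction, params):
    # a callable is applied to the params dict; a list of strings must all evaluate truthy
    if callable(restriction):
        return restriction(params)
    return all(eval(expr, {}, dict(params)) for expr in restriction)

restrictions = [
    ["a==b+c"],
    ["a==b+c", "b==b", "a-b==c"],
    ["a==b+c", "b!=b", "a-b==c"],
    lambda p: p["a"] == p["b"] + p["c"],
]
results = [passes(r, params) for r in restrictions]
print(results)  # [True, True, False, True]
```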
}" answer = get_kernel_string(gen_kernel, params) assert answer == expected @@ -523,7 +493,7 @@ def test_read_write_file(): my_string = "this is the test string" try: write_file(filename, my_string) - with open(filename, 'r') as f: + with open(filename, "r") as f: answer = f.read() assert my_string == answer answer2 = read_file(filename) @@ -556,7 +526,6 @@ def verify2(answer, result_host, atol): def test_process_cache(): - def assert_open_cachefile_is_correctly_parsed(cache): with open(cache, "r") as cachefile: filestr = cachefile.read() @@ -587,10 +556,7 @@ def assert_open_cachefile_is_correctly_parsed(cache): assert len(tuning_options.cache) == 0 # store one entry in the cache - params = { - "x": 4, - "time": np.float32(0.1234) - } + params = {"x": 4, "time": np.float32(0.1234)} store_cache("4", params, tuning_options) assert len(tuning_options.cache) == 1 @@ -632,11 +598,8 @@ def assert_open_cachefile_is_correctly_parsed(cache): def test_process_metrics(): - params = { - "x": 15, - "b": 12 - } - metrics = OrderedDict() + params = {"x": 15, "b": 12} + metrics = dict() metrics["y"] = lambda p: p["x"] # test if lambda function is correctly evaluated @@ -644,54 +607,78 @@ def test_process_metrics(): assert params["y"] == params["x"] # test if we can do the same with a string - params = { - "x": 15, - "b": 12 - } + params = {"x": 15, "b": 12} metrics["y"] = "x" params = process_metrics(params, metrics) assert params["y"] == params["x"] # test if composability works correctly - params = { - "x": 15, - "b": 12 - } - metrics = OrderedDict() + params = {"x": 15, "b": 12} + metrics = dict() metrics["y"] = "x" metrics["z"] = "y" params = process_metrics(params, metrics) assert params["z"] == params["x"] - # test ValueError is raised when metrics is not an OrderedDict + # test ValueError is raised when metrics is not a dictionary with pytest.raises(ValueError): - params = process_metrics(params, {}) + params = process_metrics(params, list()) + + # # test ValueError is 
raised when b already exists in params + # params = {"x": 15, "b": 12} + # metrics = dict() + # metrics["b"] = "x" + # params = process_metrics(params, metrics) + # assert params["b"] == 15 # test if a metric overrides any existing metrics params = { "x": 15, "b": 12 } - metrics = OrderedDict() + metrics = dict() metrics["b"] = "x" params = process_metrics(params, metrics) assert params["b"] == 15 def test_parse_restrictions(): - tune_params = {"block_size_x": [50, 100], "use_padding": [0, 1]} - restrict = ["block_size_x != 320"] - parsed = parse_restrictions(restrict, tune_params) - expected = '(params["block_size_x"] != 320)' - - assert expected in parsed - - # test again but with an 'or' in the expression - restrict.append("use_padding == 0 or block_size_x % 32 != 0") - parsed = parse_restrictions(restrict, tune_params) - expected = '(params["block_size_x"] != 320) and (params["use_padding"] == 0 or params["block_size_x"] % 32 != 0)' - - assert expected in parsed - + tune_params = {"block_size_x": [50, 100], "use_padding": [0, 1]} + restrictions = ["block_size_x != 320", "use_padding == 0 or block_size_x % 32 != 0", "50 <= block_size_x * use_padding < 100"] + + # test the monolithic parsed function + parsed = parse_restrictions(restrictions, tune_params, monolithic=True)[0] + expected = "params[params_index['block_size_x']] != 320" + assert expected in parsed[0] + + # test the split parsed function + parsed_multi = parse_restrictions(restrictions, tune_params, try_to_constraint=False) + assert isinstance(parsed_multi, list) and isinstance(parsed_multi[0], tuple) + assert len(parsed_multi) == 3 + parsed, params = parsed_multi[0] + assert restrictions[0] in parsed + assert params == ["block_size_x"] + parsed, params = parsed_multi[1] + assert restrictions[1] in parsed + assert all(param in tune_params for param in params) + parsed, params = parsed_multi[2] + assert restrictions[2] in parsed + assert all(param in tune_params for param in params) + + # test the conversion to constraints + parsed_multi_constraints = 
parse_restrictions(restrictions, tune_params, try_to_constraint=True) + assert isinstance(parsed_multi_constraints, list) and isinstance(parsed_multi_constraints[0], tuple) + assert len(parsed_multi_constraints) == 4 + parsed, params = parsed_multi_constraints[0] + assert isinstance(parsed, str) + assert params == ["block_size_x"] + parsed, params = parsed_multi_constraints[1] + assert isinstance(parsed, str) + assert all(param in tune_params for param in params) + parsed, params = parsed_multi_constraints[2] + assert isinstance(parsed, MinProdConstraint) + assert all(param in tune_params for param in params) + parsed, params = parsed_multi_constraints[3] + assert isinstance(parsed, MaxProdConstraint) + assert all(param in tune_params for param in params)
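The commit notes an automatic transformation of restrictions with multiple comparators (such as the chained `"50 <= block_size_x * use_padding < 100"` above) into several single-comparator restrictions, which raises the chance that each piece maps onto a built-in constraint. A rough sketch of that splitting idea (`split_chained_restriction` is a hypothetical name, not kernel_tuner's implementation):

```python
import re

# longer comparators must come first so "<=" is not split as "<" plus "="
_COMPARATORS = r"(<=|>=|==|!=|<|>)"

def split_chained_restriction(expr):
    # "50 <= x*y < 100" -> ["50 <= x*y", "x*y < 100"]
    parts = [p.strip() for p in re.split(_COMPARATORS, expr)]
    return [
        f"{parts[i - 1]} {parts[i]} {parts[i + 1]}"
        for i in range(1, len(parts) - 1, 2)
    ]

print(split_chained_restriction("50 <= block_size_x * use_padding < 100"))
```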