Update dependencies #337

Merged: 13 commits on Feb 9, 2024
2 changes: 1 addition & 1 deletion .github/workflows/main.yml
@@ -117,7 +117,7 @@ jobs:
- name: Patch Environment File
if: matrix.os == 'windows-latest'
run: |
- sed -i 's/climpred >=2.2.0/xesmf/' environment.yml
+ sed -i 's/climpred >=2.4.0/xesmf/' environment.yml
- name: Setup Conda (Micromamba) with Python${{ matrix.python-version }}
uses: mamba-org/setup-micromamba@v1
with:
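The patched step rewrites `environment.yml` on the Windows runner so the `climpred` pin is replaced by `xesmf`; the only change in this PR is the version string being matched (2.2.0 to 2.4.0). A minimal local sketch of what the step does, using a hypothetical `environment.yml`:

```shell
# Hypothetical environment.yml standing in for the repository's real one.
cat > environment.yml <<'EOF'
dependencies:
  - climpred >=2.4.0
  - ravenpy
EOF

# The same in-place substitution the workflow step performs (GNU sed, as on
# the CI runner; BSD/macOS sed would instead need: sed -i '' 's/.../.../').
sed -i 's/climpred >=2.4.0/xesmf/' environment.yml
cat environment.yml
```

After the substitution, the `climpred` line reads `  - xesmf`, which is why the step only runs when the pinned version in the pattern matches the file.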
14 changes: 7 additions & 7 deletions .pre-commit-config.yaml
@@ -32,12 +32,12 @@ repos:
hooks:
- id: toml-sort-fix
- repo: https://github.com/psf/black-pre-commit-mirror
- rev: 23.12.1
+ rev: 24.1.1
hooks:
- id: black
exclude: ^docs/
- repo: https://github.com/pycqa/flake8
- rev: 6.1.0
+ rev: 7.0.0
hooks:
- id: flake8
args: [ '--config=setup.cfg' ]
@@ -52,11 +52,11 @@ repos:
args: [ '--py38-plus' ]
additional_dependencies: [ 'pyupgrade==3.15.0' ]
- id: nbqa-black
- additional_dependencies: [ 'black==23.12.1' ]
+ additional_dependencies: [ 'black==24.1.1' ]
- id: nbqa-isort
additional_dependencies: [ 'isort==5.13.2' ]
- repo: https://github.com/kynan/nbstripout
- rev: 0.6.1
+ rev: 0.7.1
hooks:
- id: nbstripout
files: ".ipynb"
@@ -70,14 +70,14 @@
rev: v0.3.9
hooks:
- id: blackdoc
- additional_dependencies: [ 'black==23.12.1' ]
+ additional_dependencies: [ 'black==24.1.1' ]
- repo: https://github.com/adrienverge/yamllint.git
- rev: v1.33.0
+ rev: v1.34.0
hooks:
- id: yamllint
args: [ '--config-file=.yamllint.yaml' ]
- repo: https://github.com/python-jsonschema/check-jsonschema
- rev: 0.27.3
+ rev: 0.28.0
hooks:
- id: check-github-workflows
- id: check-readthedocs
23 changes: 14 additions & 9 deletions docs/conf.py
@@ -46,13 +46,13 @@
"sphinx_click",
"sphinx_codeautolink",
"sphinx_copybutton",
- # "sphinxcontrib.autodoc_pydantic",
+ # "sphinxcontrib.autodoc_pydantic",  # FIXME: Does not seem to be compatible with RavenPy codebase.
]

linkcheck_ignore = [
r"https://www.ouranos.ca", # bad ssl certificate
# Added on 2023-03-06: Wiley does not allow linkcheck requests (error 403)
- r"https://doi.org/10.1029/2020WR029229"
+ r"https://doi.org/10.1029/2020WR029229",
]
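The comma added after the DOI pattern is cosmetic here, since a trailing comma before `]` is optional, but it guards against a classic pitfall: without a separating comma, Python silently concatenates adjacent string literals, so a pattern appended later could fuse with the previous one. A small illustration (the `example-pattern` entry is hypothetical, not part of the real config):

```python
# Without a comma between adjacent string literals, Python concatenates them:
# `fused` has 2 elements, not 3, and the second element is a merged pattern.
fused = [
    r"https://www.ouranos.ca",
    r"https://doi.org/10.1029/2020WR029229"  # missing comma
    r"example-pattern",
]
separated = [
    r"https://www.ouranos.ca",
    r"https://doi.org/10.1029/2020WR029229",
    r"example-pattern",
]
print(len(fused))      # 2
print(len(separated))  # 3
print(fused[1])        # the two literals fused into one string
```

This is why formatters like black favor trailing commas on multi-line collections: a future edit cannot accidentally merge entries.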

nbsphinx_custom_formats = {
@@ -68,7 +68,7 @@
nb_execution_excludepatterns = [
"configuration.md",
"notebooks/*.ipynb",
- "notebooks/paper/*.ipynb"
+ "notebooks/paper/*.ipynb",
]

# nbsphinx_execute = "auto"
@@ -94,6 +94,7 @@
"clisops",
"fiona",
"gdal",
+ "h5netcdf",
"netCDF4",
"osgeo",
"geopandas",
@@ -150,8 +151,15 @@
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
- exclude_patterns = ["_build", "Thumbs.db", ".DS_Store", ".jupyter_cache", "jupyter_execute", "notebooks/paper",
-     "notebooks/HydroShare_integration.ipynb", ]
+ exclude_patterns = [
+     "_build",
+     "Thumbs.db",
+     ".DS_Store",
+     ".jupyter_cache",
+     "jupyter_execute",
+     "notebooks/paper",
+     "notebooks/HydroShare_integration.ipynb",
+ ]

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"
@@ -160,10 +168,7 @@
todo_include_todos = False

# Suppress "WARNING: unknown mimetype for ..." when building EPUB.
- suppress_warnings = [
-     "epub.unknown_project_files",
-     "mystnb.unknown_mime_type"
- ]
+ suppress_warnings = ["epub.unknown_project_files", "mystnb.unknown_mime_type"]

# -- Options for HTML output -------------------------------------------

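The new `"h5netcdf"` entry extends `autodoc_mock_imports`, the Sphinx setting that stubs out modules that may not be installed in the docs build environment so autodoc can still import the package being documented. Conceptually, mocking an import works like this simplified stand-in (not Sphinx's actual implementation):

```python
import sys
from unittest import mock

# Register a mock under the module's name *before* anything imports it.
sys.modules["h5netcdf"] = mock.MagicMock()

import h5netcdf  # succeeds even if the real package is absent

# Attribute access and calls on the mock also succeed, so code that merely
# imports the dependency can be introspected without installing it.
f = h5netcdf.File("dummy.nc", "r")
print(type(f).__name__)  # MagicMock
```

The trade-off is that mocked modules return mocks for everything, so doc builds pass while any code path that actually needs real behavior would not.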
30 changes: 15 additions & 15 deletions docs/notebooks/06_Raven_calibration.ipynb
@@ -2,15 +2,15 @@
"cells": [
{
"cell_type": "markdown",
- "id": "4a5f03fb",
+ "id": "0",
"metadata": {},
"source": [
"# 06 - Calibration of a Raven hydrological model"
]
},
{
"cell_type": "markdown",
- "id": "d1ce69fb",
+ "id": "1",
"metadata": {},
"source": [
"## Calibration of a Raven model\n",
@@ -23,7 +23,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "565a7b6c",
+ "id": "2",
"metadata": {},
"outputs": [],
"source": [
@@ -38,7 +38,7 @@
},
{
"cell_type": "markdown",
- "id": "cbfe7818",
+ "id": "3",
"metadata": {},
"source": [
"## Preparing the model to be calibrated on a given watershed\n",
@@ -48,7 +48,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bf6a2500",
+ "id": "4",
"metadata": {},
"outputs": [],
"source": [
@@ -65,7 +65,7 @@
},
{
"cell_type": "markdown",
- "id": "e5611922",
+ "id": "5",
"metadata": {},
"source": [
"The process is very similar to setting up a hydrological model. We first need to create the model with its configuration. We must provide the same information as before, except for the model parameters since those need to be calibrated."
@@ -74,7 +74,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2105f6ea",
+ "id": "6",
"metadata": {},
"outputs": [],
"source": [
@@ -129,7 +129,7 @@
},
{
"cell_type": "markdown",
- "id": "40c8371c",
+ "id": "7",
"metadata": {},
"source": [
"## Spotpy Calibration\n",
@@ -140,7 +140,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0c089720",
+ "id": "8",
"metadata": {},
"outputs": [],
"source": [
@@ -160,7 +160,7 @@
},
{
"cell_type": "markdown",
- "id": "2b6351b2",
+ "id": "9",
"metadata": {},
"source": [
"Now that the model is set up and configured and the `SpotSetup` object exists, we need to create a sampler from the `spotpy` module, which will optimize the hydrological model parameters. You can see that we are using the DDS algorithm to optimize the parameters:\n",
@@ -175,7 +175,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "77d168a1",
+ "id": "10",
"metadata": {},
"outputs": [],
"source": [
@@ -197,7 +197,7 @@
},
{
"cell_type": "markdown",
- "id": "f789c674",
+ "id": "11",
"metadata": {},
"source": [
"## Analysing the calibration results\n",
@@ -207,7 +207,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ae1fc2c4",
+ "id": "12",
"metadata": {},
"outputs": [],
"source": [
@@ -234,7 +234,7 @@
},
{
"cell_type": "markdown",
- "id": "d22ea8c6-173d-44e9-82c2-cf87a0227180",
+ "id": "13",
"metadata": {},
"source": [
"## Next steps\n",
@@ -244,7 +244,7 @@
},
{
"cell_type": "markdown",
- "id": "c4655f07",
+ "id": "14",
"metadata": {},
"source": [
"## List of Model-Boundaries\n",
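The calibration notebook above pairs a `SpotSetup` object with a `spotpy` sampler (DDS) that searches the parameter space for the best objective value. The essential loop, sample candidates within bounds, evaluate an objective, keep the best, can be sketched without spotpy or Raven using plain random search on a toy objective. Everything here is a hypothetical stand-in: random search is not DDS, and the toy objective replaces an actual Raven model run:

```python
import random

# Toy objective standing in for "run Raven, compare simulated and observed
# flows": minimize squared distance to a known optimum.
TRUE_PARAMS = (2.5, -1.0)

def objective(params):
    return sum((p - t) ** 2 for p, t in zip(params, TRUE_PARAMS))

# Parameter bounds, analogous to the low/high bounds given to SpotSetup.
bounds = [(0.0, 5.0), (-3.0, 3.0)]

random.seed(42)
best_params, best_obj = None, float("inf")
for _ in range(2000):  # analogous to the sampler's evaluation budget
    candidate = tuple(random.uniform(lo, hi) for lo, hi in bounds)
    score = objective(candidate)
    if score < best_obj:
        best_params, best_obj = candidate, score

print(best_params, best_obj)
```

DDS improves on this by perturbing the current best solution and progressively narrowing the search, which is why it needs far fewer model evaluations than naive random search for high-dimensional hydrological models.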
26 changes: 13 additions & 13 deletions docs/notebooks/Assess_probabilistic_flood_risk.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "chinese-dealer",
+ "id": "0",
"metadata": {},
"source": [
"# Probabilistic flood risk assessment\n",
@@ -13,7 +13,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "79d923bd-4ce5-41f1-a441-f439b23fc388",
+ "id": "1",
"metadata": {},
"outputs": [],
"source": [
@@ -27,7 +27,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "descending-bedroom",
+ "id": "2",
"metadata": {},
"outputs": [],
"source": [
@@ -43,7 +43,7 @@
},
{
"cell_type": "markdown",
- "id": "genuine-dodge",
+ "id": "3",
"metadata": {},
"source": [
"Perform the time series analysis on observed data for the catchment using the frequency analysis WPS capabilities."
@@ -52,7 +52,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "quiet-queens",
+ "id": "4",
"metadata": {},
"outputs": [],
"source": [
@@ -71,7 +71,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "appointed-toner",
+ "id": "5",
"metadata": {},
"outputs": [],
"source": [
@@ -96,7 +96,7 @@
},
{
"cell_type": "markdown",
- "id": "explicit-accent",
+ "id": "6",
"metadata": {},
"source": [
"## Probabilistic forecast\n",
@@ -107,7 +107,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "excessive-apparatus",
+ "id": "7",
"metadata": {
"pycharm": {
"is_executing": true
@@ -172,7 +172,7 @@
},
{
"cell_type": "markdown",
- "id": "8308cde3",
+ "id": "8",
"metadata": {},
"source": [
"Now that the configuration is ready, launch the ESP forecasting tool to generate an ensemble hydrological forecast:"
@@ -181,7 +181,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0c0b126a",
+ "id": "9",
"metadata": {
"pycharm": {
"is_executing": true
@@ -201,7 +201,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "embedded-patrol",
+ "id": "10",
"metadata": {
"pycharm": {
"is_executing": true
@@ -220,7 +220,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "british-bunch",
+ "id": "11",
"metadata": {
"pycharm": {
"is_executing": true
@@ -247,7 +247,7 @@
},
{
"cell_type": "markdown",
- "id": "surface-constitutional",
+ "id": "12",
"metadata": {},
"source": [
"### Results analysis\n",
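The flood-risk notebook estimates return periods from annual maximum flows via frequency analysis. The core conversion, rank the annual maxima, turn rank into an exceedance probability, then take T = 1 / P_exceedance, can be sketched with the Weibull plotting position on made-up data. The flow values below are hypothetical, and the notebook itself fits a statistical distribution rather than using this empirical formula:

```python
# Empirical return periods via the Weibull plotting position:
# P_exceedance = rank / (n + 1), so T = (n + 1) / rank.
annual_maxima = [310.0, 450.0, 280.0, 520.0, 390.0, 610.0, 340.0, 470.0, 300.0]  # m^3/s, made up

ranked = sorted(annual_maxima, reverse=True)  # rank 1 = largest flood on record
n = len(ranked)
return_periods = [(flow, (n + 1) / rank) for rank, flow in enumerate(ranked, start=1)]

for flow, T in return_periods:
    print(f"{flow:7.1f} m^3/s  ~ {T:5.2f}-year flood")
```

With 9 years of record, the largest observed flood maps to a 10-year return period, which shows why short records cannot constrain rare events and why the notebook uses a fitted distribution and ensemble forecasts to extrapolate beyond the observations.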