Merge pull request #348 from pyiron/example
Update examples
jan-janssen authored Sep 28, 2024
2 parents 4d3d3b7 + 9cb4b1a commit b24d90c
Showing 7 changed files with 298 additions and 28 deletions.
9 changes: 7 additions & 2 deletions .github/workflows/notebooks.yml
@@ -31,8 +31,13 @@ jobs:
pip install . --no-deps --no-build-isolation
mkdir config
cp -r tests/config/flux config
- name: Notebooks
- name: Notebooks with config
shell: bash -l {0}
run: >
flux start
papermill notebooks/example.ipynb example-out.ipynb -k "python3"
papermill notebooks/example_config.ipynb example-config-out.ipynb -k "python3"
- name: Notebooks dynamic
shell: bash -l {0}
run: >
flux start
papermill notebooks/example_queue_type.ipynb example-queue-type-out.ipynb -k "python3"
16 changes: 10 additions & 6 deletions README.md
@@ -3,7 +3,7 @@
[![Unittests](https://github.com/pyiron/pysqa/actions/workflows/unittest.yml/badge.svg)](https://github.com/pyiron/pysqa/actions/workflows/unittest.yml)
[![Documentation Status](https://readthedocs.org/projects/pysqa/badge/?version=latest)](https://pysqa.readthedocs.io/en/latest/?badge=latest)
[![Coverage Status](https://coveralls.io/repos/github/pyiron/pysqa/badge.svg?branch=main)](https://coveralls.io/github/pyiron/pysqa?branch=main)
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/pyiron/pysqa/HEAD?labpath=example.ipynb)
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/pyiron/pysqa/HEAD?labpath=example_config.ipynb)

High-performance computing (HPC) does not have to be hard. In this context, the aim of the Python Simple Queuing System
Adapter (`pysqa`) is to make the submission of tasks from Python to HPC clusters as easy as starting another
@@ -57,11 +57,15 @@ from within `pysqa`, which are represented to the user as a single resource.
* [SGE](https://pysqa.readthedocs.io/en/latest/queue.html#sge)
* [SLURM](https://pysqa.readthedocs.io/en/latest/queue.html#slurm)
* [TORQUE](https://pysqa.readthedocs.io/en/latest/queue.html#torque)
* [Python Interface](https://pysqa.readthedocs.io/en/latest/example.html)
* [List available queues](https://pysqa.readthedocs.io/en/latest/example.html#list-available-queues)
* [Submit job to queue](https://pysqa.readthedocs.io/en/latest/example.html#submit-job-to-queue)
* [Show jobs in queue](https://pysqa.readthedocs.io/en/latest/example.html#show-jobs-in-queue)
* [Delete job from queue](https://pysqa.readthedocs.io/en/latest/example.html#delete-job-from-queue)
* [Python Interface Dynamic](https://pysqa.readthedocs.io/en/latest/example_queue_type.html)
* [Submit job to queue](https://pysqa.readthedocs.io/en/latest/example_queue_type.html#submit-job-to-queue)
* [Show jobs in queue](https://pysqa.readthedocs.io/en/latest/example_queue_type.html#show-jobs-in-queue)
* [Delete job from queue](https://pysqa.readthedocs.io/en/latest/example_queue_type.html#delete-job-from-queue)
* [Python Interface Config](https://pysqa.readthedocs.io/en/latest/example_config.html)
* [List available queues](https://pysqa.readthedocs.io/en/latest/example_config.html#list-available-queues)
* [Submit job to queue](https://pysqa.readthedocs.io/en/latest/example_config.html#submit-job-to-queue)
* [Show jobs in queue](https://pysqa.readthedocs.io/en/latest/example_config.html#show-jobs-in-queue)
* [Delete job from queue](https://pysqa.readthedocs.io/en/latest/example_config.html#delete-job-from-queue)
* [Command Line Interface](https://pysqa.readthedocs.io/en/latest/command.html)
* [Submit job](https://pysqa.readthedocs.io/en/latest/command.html#submit-job)
* [Enable reservation](https://pysqa.readthedocs.io/en/latest/command.html#enable-reservation)
3 changes: 2 additions & 1 deletion docs/_toc.yml
@@ -3,7 +3,8 @@ root: README
chapters:
- file: installation.md
- file: queue.md
- file: example.ipynb
- file: example_queue_type.ipynb
- file: example_config.ipynb
- file: command.md
- file: advanced.md
- file: debug.md
2 changes: 1 addition & 1 deletion docs/installation.md
@@ -14,7 +14,7 @@ On `pypi` the `pysqa` package exists in three different versions:
* `pip install pysqa` - base version - with minimal requirements, it only depends on `jinja2`, `pandas` and `pyyaml`.
* `pip install pysqa[sge]` - Sun Grid Engine (SGE) version - in addition to the base dependencies this installs
`defusedxml`, which is required to parse the `xml` files from `qstat`.
* `pip install pysqa[remote]` - remote version - in addition to the base dependencies this installs `paramiko` and
* `pip install pysqa[remote]` - remote version - in addition to the base dependencies, this installs `paramiko` and
`tqdm` to connect to remote HPC clusters using SSH and to report the progress of the data transfer visually.

## conda-based installation
33 changes: 21 additions & 12 deletions docs/queue.md
@@ -1,11 +1,13 @@
# Queuing Systems
`pysqa` is based on the idea of reusable templates. These templates are defined in the `jinja2` templating language. By
default `pysqa` expects to find these templates in `~/.queues`. Still it is also possible to store them in a different
directory.
The Python Simple Queuing System Adapter `pysqa` is based on the idea of reusable templates. These templates are
defined in the `jinja2` templating language. By default, `pysqa` expects to find these templates in the configuration
directory, which is specified with the `directory` parameter. Alternatively, they can be defined dynamically by
specifying the queuing system type with the `queue_type` parameter.

In this directory `pysqa` expects to find one queue configuration and one jinja template per queue. The `queue.yaml`
file which defines the available queues and their restrictions in terms of minimum and maximum number of CPU cores,
required memory or run time. In addition, this file defines the type of the queuing system and the default queue.
When using the configuration directory, `pysqa` expects to find one queue configuration and one `jinja2` template per
queue. The `queue.yaml` file defines the available queues and their restrictions in terms of the minimum and maximum
number of CPU cores, required memory or run time. In addition, this file defines the type of the queuing system and the
default queue.

A typical `queue.yaml` file looks like this:
```
@@ -55,7 +57,8 @@ The queue named `flux` is defined based on a submission script template named `f
{{command}}
```
In this case only the number of cores `cores`, the name of the job `job_name`, the maximum run time of the job
`run_time_max` and the command `command` are communicated.
`run_time_max` and the command `command` are communicated. The same template is stored in the `pysqa` package and can be
imported using `from pysqa.wrapper.flux import template`. So the flux interface can be enabled by setting `queue_type="flux"`.
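
As a minimal sketch of this dynamic mode (assuming `queue_type` is passed to the `QueueAdapter` constructor as described above, a flux broker is running, and the `submit_job()` keyword arguments follow the `pysqa` examples; job name, resources and command are purely illustrative), the flux backend could be used like this:
```python
from pysqa import QueueAdapter

# Dynamic mode: no configuration directory, the queuing system type is given directly.
# The same pattern applies to the other backends ("lsf", "moab", "sge", "slurm", "torque").
qa = QueueAdapter(queue_type="flux")

# Submit a shell command with illustrative resource settings; the returned
# queue id identifies the job within the queuing system.
queue_id = qa.submit_job(
    job_name="sleep_one",
    working_directory=".",
    cores=1,
    command="sleep 1",
)
print(queue_id)
```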

## LSF
For the Load Sharing Facility (LSF) framework from IBM the `queue.yaml` file defines the `queue_type` as `LSF`:
Expand Down Expand Up @@ -86,7 +89,8 @@ The queue named `lsf` is defined based on a submission script template named `ls
In this case the name of the job `job_name`, the number of cores `cores`, the working directory of the job
`working_directory` and the command that is executed `command` are defined as mandatory inputs. Beyond these, two
optional inputs can be defined, namely the maximum run time for the job `run_time_max` and the maximum memory used by
the job `memory_max`.
the job `memory_max`. The same template is stored in the `pysqa` package and can be imported using
`from pysqa.wrapper.lsf import template`. So the LSF interface can be enabled by setting `queue_type="lsf"`.

## MOAB
For the Maui Cluster Scheduler the `queue.yaml` file defines the `queue_type` as `MOAB`:
@@ -102,7 +106,9 @@ The queue named `moab` is defined based on a submission script template named `m
{{command}}
```
Currently, no template for the Maui Cluster Scheduler is available.
Currently, no template for the Maui Cluster Scheduler is shown in the documentation. A template is stored in the `pysqa` package
and can be imported using `from pysqa.wrapper.moab import template`. So the MOAB interface can be enabled by setting
`queue_type="moab"`.

## SGE
For the Sun Grid Engine (SGE) the `queue.yaml` file defines the `queue_type` as `SGE`:
@@ -134,7 +140,8 @@ The queue named `sge` is defined based on a submission script template named `sg
In this case the name of the job `job_name`, the number of cores `cores`, the working directory of the job
`working_directory` and the command that is executed `command` are defined as mandatory inputs. Beyond these, two
optional inputs can be defined, namely the maximum run time for the job `run_time_max` and the maximum memory used by
the job `memory_max`.
the job `memory_max`. The same template is stored in the `pysqa` package and can be imported using
`from pysqa.wrapper.sge import template`. So the SGE interface can be enabled by setting `queue_type="sge"`.

## SLURM
For the Simple Linux Utility for Resource Management (SLURM) the `queue.yaml` file defines the `queue_type` as `SLURM`:
@@ -165,7 +172,8 @@ The queue named `slurm` is defined based on a submission script template named `
In this case the name of the job `job_name`, the number of cores `cores`, the working directory of the job
`working_directory` and the command that is executed `command` are defined as mandatory inputs. Beyond these, two
optional inputs can be defined, namely the maximum run time for the job `run_time_max` and the maximum memory used by
the job `memory_max`.
the job `memory_max`. The same template is stored in the `pysqa` package and can be imported using
`from pysqa.wrapper.slurm import template`. So the SLURM interface can be enabled by setting `queue_type="slurm"`.

## TORQUE
For the Terascale Open-source Resource and Queue Manager (TORQUE) the `queue.yaml` file defines the `queue_type` as
@@ -199,4 +207,5 @@ The queue named `torque` is defined based on a submission script template named
In this case the name of the job `job_name`, the number of cores `cores`, the working directory of the job
`working_directory` and the command that is executed `command` are defined as mandatory inputs. Beyond these, two
optional inputs can be defined, namely the maximum run time for the job `run_time_max` and the maximum memory used by
the job `memory_max`.
the job `memory_max`. The same template is stored in the `pysqa` package and can be imported using
`from pysqa.wrapper.torque import template`. So the TORQUE interface can be enabled by setting `queue_type="torque"`.
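
Complementing the dynamic `queue_type` examples above, a configuration-directory based setup could be loaded along these lines (a minimal sketch; the directory path is illustrative and the `queue_list` and `queue_view` attributes are assumed from the `pysqa` examples rather than shown in this page):
```python
from pysqa import QueueAdapter

# Configuration mode: point pysqa at a directory containing queue.yaml
# and one jinja2 template per queue (the path is illustrative).
qa = QueueAdapter(directory="config")

# List the queues defined in queue.yaml, either as plain queue names or as
# a tabular overview including the configured limits.
print(qa.queue_list)
print(qa.queue_view)
```
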
27 changes: 21 additions & 6 deletions notebooks/example.ipynb → notebooks/example_config.ipynb
@@ -4,7 +4,10 @@
"cell_type": "markdown",
"id": "097a5f9f-69a2-42ae-a565-e3cdb17da461",
"metadata": {},
"source": "# Python Interface \nThe `pysqa` package primarily defines one class, that is the `QueueAdapter`. It loads the configuration from a configuration directory, initializes the corrsponding adapter for the specific queuing system and provides a high level interface for users to interact with the queuing system. The `QueueAdapter` can be imported using:"
"source": [
"# Python Interface Config\n",
"The `pysqa` package primarily defines one class, that is the `QueueAdapter`. It loads the configuration from a configuration directory, initializes the corrsponding adapter for the specific queuing system and provides a high level interface for users to interact with the queuing system. The `QueueAdapter` can be imported using:"
]
},
{
"cell_type": "code",
@@ -92,7 +95,10 @@
"cell_type": "markdown",
"id": "451180a6-bc70-4053-a67b-57357522da0f",
"metadata": {},
"source": "# List available queues \nList available queues as list of queue names: "
"source": [
"## List available queues\n",
"List available queues as list of queue names: "
]
},
{
"cell_type": "code",
@@ -149,7 +155,10 @@
"cell_type": "markdown",
"id": "42a53d33-2916-461f-86be-3edbe01d3cc7",
"metadata": {},
"source": "# Submit job to queue\nSubmit a job to the queue - if no queue is specified it is submitted to the default queue defined in the queue configuration:"
"source": [
"## Submit job to queue\n",
"Submit a job to the queue - if no queue is specified it is submitted to the default queue defined in the queue configuration:"
]
},
{
"cell_type": "code",
@@ -192,7 +201,10 @@
"cell_type": "markdown",
"id": "672854fd-3aaa-4287-b29c-d5370e4adc14",
"metadata": {},
"source": "# Show jobs in queue \nGet status of all jobs currently handled by the queuing system:"
"source": [
"## Show jobs in queue\n",
"Get status of all jobs currently handled by the queuing system:"
]
},
{
"cell_type": "code",
@@ -275,7 +287,10 @@
"cell_type": "markdown",
"id": "f89528d3-a3f5-4adb-9f74-7f70270aec12",
"metadata": {},
"source": "# Delete job from queue \nDelete a job with the queue id `queue_id` from the queuing system:"
"source": [
"## Delete job from queue\n",
"Delete a job with the queue id `queue_id` from the queuing system:"
]
},
{
"cell_type": "code",
@@ -321,4 +336,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}
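
Taken together, the markdown cells of this notebook describe a submit, monitor and delete cycle. A minimal sketch of that cycle (assuming the `get_queue_status()`, `get_status_of_job()` and `delete_job()` methods from the `pysqa` examples, which are not shown in the collapsed code cells above; the configuration directory, queue name and resources are illustrative) could look like this:
```python
from pysqa import QueueAdapter

# Illustrative configuration directory containing queue.yaml and the templates.
qa = QueueAdapter(directory="config")

# Submit to the default queue defined in queue.yaml by omitting the queue argument.
queue_id = qa.submit_job(
    job_name="example",
    working_directory=".",
    cores=1,
    command="sleep 10",
)

# Show all jobs currently handled by the queuing system and the status of
# the job submitted above.
print(qa.get_queue_status())
print(qa.get_status_of_job(process_id=queue_id))

# Delete the job with the queue id returned at submission time.
qa.delete_job(process_id=queue_id)
```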