
Merge pull request #337 from pyiron/docs
Reformat docs
jan-janssen authored Sep 28, 2024
2 parents 3b454dc + 0eaa898 commit ad35248
Showing 5 changed files with 98 additions and 38 deletions.
20 changes: 14 additions & 6 deletions docs/advanced.md
# Advanced Configuration
Initially, `pysqa` was designed to interact only with the local queuing system of an HPC cluster. This functionality has recently been extended to support remote HPC clusters in addition to local ones. These two developments, the support for remote HPC clusters and the support for multiple clusters in `pysqa`, are discussed in the following. Both features are under active development, so this part of the interface might change more frequently than the rest.

## Remote HPC Configuration
Remote clusters can be defined in the `queue.yaml` file by setting the `queue_type` to `REMOTE`:
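A minimal sketch of such a configuration is shown below; the `ssh_*` keyword names are an assumption based on the remote setup described in this section, and the host name, user name and key paths are placeholder values:
```
queue_type: REMOTE
queue_primary: remote
# illustrative values - adapt to your own cluster
ssh_host: hpc-cluster.university.edu
ssh_username: hpcuser
known_hosts: ~/.ssh/known_hosts
ssh_key: ~/.ssh/id_rsa
queues:
  remote: {cores_max: 100, cores_min: 10, run_time_max: 259200}
```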
In addition to the `queue_type`, `queue_primary` and `queues` parameters, this also requires the mandatory SSH connection keywords, like the host name, user name and key shown in the sketch above.

And optional keywords:

* `ssh_delete_file_on_remote` specifies whether files on the remote HPC should be deleted after they are transferred back to the local system - defaults to `True`
* `ssh_port` the port used for the SSH connection on the remote HPC cluster - defaults to `22`

A definition of the `queues` on the local system is required to enable the parameter checks locally. Still, it is sufficient to store the individual submission script templates only on the remote HPC.

## Access to Multiple HPCs
To support multiple remote HPC clusters, additional functionality was added to `pysqa`.

Namely, a `clusters.yaml` file can be defined in the configuration directory, which defines multiple `queue.yaml` files for different clusters:
```
cluster_primary: local_slurm
cluster: {
local_slurm: local_slurm_queues.yaml,
remote_slurm: remote_queues.yaml
}
```
These `queue.yaml` files can again include all the functionality defined previously, including the configuration for remote connection using SSH.

Furthermore, the `QueueAdapter` class was extended with the following two functions:
```
qa.list_clusters()
```
To list the available clusters in the configuration and:
```
qa.switch_cluster(cluster_name)
```
To switch from one cluster to another, with the `cluster_name` providing the name of the cluster, like `local_slurm` and `remote_slurm` in the configuration above.
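
A condensed usage sketch of these two functions, assuming the `clusters.yaml` configuration above is stored in `~/.queues`:
```
from pysqa import QueueAdapter

qa = QueueAdapter(directory="~/.queues")

# list the clusters defined in clusters.yaml
print(qa.list_clusters())

# switch to the remote cluster and submit a test job there
qa.switch_cluster(cluster_name="remote_slurm")
qa.submit_job(command="hostname")
```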
32 changes: 23 additions & 9 deletions docs/command.md
# Command Line Interface
The command line interface implements a subset of the functionality of the python interface. While it can be used locally to check the status of your calculations, the primary use case is accessing the `pysqa` installation on a remote HPC cluster from your local `pysqa` installation. Still, the local execution of the commands is discussed here.

The available options are: the submission of new jobs to the queuing system using the submit option `--submit`, enabling a reservation for a job already submitted using the `--reservation` option, listing jobs on the queuing system using the status option `--status`, deleting a job from the queuing system using the delete option `--delete`, listing files in the working directory using the list option `--list`, and the help option `--help` to print a summary of the available options.

## Submit job
Submission of jobs to the queuing system with the submit option `--submit` is similar to the submit job function `QueueAdapter().submit_job()`. Example call to submit the `hostname` command to the default queue:
```
python -m pysqa --submit --command hostname
```
The options used and their short forms are:

Additional options for the submission of the job with their short forms are:
* `-f`, `--config_directory` the directory which contains the `pysqa` configuration, by default `~/.queues`.
* `-q`, `--queue` the queue the job is submitted to. If this option is not defined, the `queue_primary` defined in the configuration is used.
* `-j`, `--job_name` the name of the job submitted to the queuing system.
* `-w`, `--working_directory` the working directory the job submitted to the queuing system is executed in.
* `-n`, `--cores` the number of cores used for the calculation. If the cores are not defined, the minimum number of cores defined for the selected queue is used.
* `-m`, `--memory` the memory used for the calculation.
* `-t`, `--run_time` the run time for the calculation. If the run time is not defined, the maximum run time defined for the selected queue is used.
* `-b`, `--dependency` other jobs the calculation depends on.
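
As an illustration, several of these options can be combined in a single call; the queue name `slurm` here is a placeholder for a queue defined in your configuration:
```
python -m pysqa --submit --command hostname --queue slurm --job_name test --working_directory . --cores 2
```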

## Enable reservation
Enabling a reservation for a job already submitted to the queuing system using the reservation option `--reservation` is similar to the enable reservation function `QueueAdapter().enable_reservation()`. Example call to enable the reservation for a job with the id `123`:
```
python -m pysqa --reservation --id 123
```
Additional options for enabling the reservation with their short forms are:
* `-f`, `--config_directory` the directory which contains the `pysqa` configuration, by default `~/.queues`.

## List jobs
The list jobs option `--status` lists the calculations currently running and waiting on the queuing system for all users on the HPC cluster:
```
python -m pysqa --status
```
The options used and their short forms are:
* `-s`, `--status` the status option lists the status of all calculations currently running and waiting on the queuing system.

Additional options for listing jobs on the queuing system with their short forms are:
* `-f`, `--config_directory` the directory which contains the `pysqa` configuration, by default `~/.queues`.
30 changes: 21 additions & 9 deletions docs/debug.md
# Debugging
The configuration of a queuing system adapter can be tricky, in particular in a remote configuration with a local installation of `pysqa` communicating with a remote installation on your HPC cluster.

## Local Queuing System
To simplify the process, `pysqa` provides a series of steps for debugging:

* When `pysqa` submits a calculation to a queuing system it creates a `run_queue.sh` script. You can submit this script using your batch command, e.g. `sbatch` for `SLURM`, and take a look at the error message.
* The error message the queuing system returns when submitting the job is also stored in the `pysqa.err` file.
* Finally, if the `run_queue.sh` script does not match the variables you provided, you can test your template using `jinja2`: `Template(open("~/.queues/queue.sh", "r").read()).render(**kwargs)`, where `"~/.queues/queue.sh"` is the path to the queuing system submit script you want to use and `**kwargs` are the arguments you provide to the `submit_job()` function, as illustrated in the sketch below.
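
A minimal rendering test, assuming the template uses placeholders like `job_name`, `working_directory` and `cores` - adapt the arguments to your own template:
```
import os
from jinja2 import Template

# open() does not expand "~", so resolve the path explicitly
template_path = os.path.expanduser("~/.queues/queue.sh")

with open(template_path, "r") as f:
    template = Template(f.read())

# render with the same arguments you would pass to submit_job()
print(template.render(job_name="test", working_directory=".", cores=1))
```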

## Remote HPC
The failure to submit to a remote HPC cluster can be related to an issue with the local `pysqa` configuration or an issue with the remote `pysqa` configuration. To identify which part is causing the issue, it is recommended to first test the remote `pysqa` installation on the remote HPC cluster:

* Log in to the remote HPC cluster and import `pysqa` in a python shell.
* Validate the queue configuration by importing the queue adapter using `from pysqa import QueueAdapter`, then initialize the object from the configuration directory `qa = QueueAdapter(directory="~/.queues")`. The current configuration can be printed using `qa.config`.
* Try to submit a calculation to print the hostname from the python shell on the remote HPC cluster using `qa.submit_job(command="hostname")`.
* If this works successfully, the next step is to try the same on the command line using `python -m pysqa --submit --command hostname`.

This is the same command the local `pysqa` instance calls on the `pysqa` instance on the remote HPC cluster, so if the steps above were executed successfully, the remote HPC configuration seems to be correct. The final step is validating the local configuration to see that the SSH connection is successfully established and maintained.
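
The first two steps condensed into one sketch, to be run in a python shell on the remote HPC cluster:
```
from pysqa import QueueAdapter

# initialize the adapter from the configuration directory and inspect it
qa = QueueAdapter(directory="~/.queues")
print(qa.config)

# submit a trivial test job that prints the hostname
qa.submit_job(command="hostname")
```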
14 changes: 10 additions & 4 deletions docs/installation.md
# Installation
The `pysqa` package can be installed either via `pip` or `conda`. While most HPC systems use Linux these days, the `pysqa` package can be installed on all major operating systems. In particular, for connections to remote HPC clusters it is required to install `pysqa` on both the local system and the remote HPC cluster. In this case it is highly recommended to use the same version of `pysqa` on both systems.
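
To compare the installed versions, you can, for example, check each system with:
```
pip show pysqa
```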

## pypi-based installation
`pysqa` can be installed from the python package index (pypi) using the following command:
```
pip install pysqa
```
On `pypi` the `pysqa` package exists in three different versions:

* `pip install pysqa` - base version - with minimal requirements, only depends on `jinja2`, `pandas` and `pyyaml`.
* `pip install pysqa[sge]` - sun grid engine (SGE) version - in addition to the base dependencies this installs `defusedxml`, which is required to parse the `xml` files from `qstat`.
* `pip install pysqa[remote]` - remote version - in addition to the base dependencies this installs `paramiko` and `tqdm` to connect to remote HPC clusters using SSH and report the progress of the data transfer visually.

## conda-based installation
The `conda` package combines all dependencies in one package:
```
conda install -c conda-forge pysqa
```
When resolving the dependencies with `conda` gets slow, it is recommended to use `mamba` instead of `conda`. In that case you can install `pysqa` using:
```
mamba install -c conda-forge pysqa
```