[release/public-v2.1.0]: Update the docs, add an errata file #557

Merged
15 changes: 15 additions & 0 deletions ERRATA.md
@@ -0,0 +1,15 @@
# UFS Short-Range Weather Application
# Errata for the v2.1.0 release

GitHub Repository: https://github.com/ufs-community/ufs-srweather-app

Release date: Nov. 17, 2022

Release tag for v2.1.0: **ufs-srw-v2.1.0**

Branch for v2.1.0: **release/public-v2.1.0**

The errata below affect fixes and corrections for the _branch_ only.

* Jan. 24, 2023 - Corrections for the documentation: the section on running the UFS SRW App on MacOS and generic Linux using standalone wrapper scripts for each task.

40 changes: 36 additions & 4 deletions docs/UsersGuide/source/RunSRW.rst
@@ -1158,7 +1158,7 @@ The regional workflow can be run using standalone shell scripts in cases where t

When working on an HPC system, users should allocate a compute node prior to running their experiment. The proper command will depend on the system's resource manager, but some guidance is offered in :numref:`Section %s <WorkOnHPC>`. It may be necessary to reload the regional workflow (see :numref:`Section %s <SetUpPythonEnv>`). It may also be necessary to load the ``build_<platform>_<compiler>`` scripts as described in :numref:`Section %s <CMakeApproach>`.

#. ``cd`` into the experiment directory. For example, from ``ush``, presuming default directory settings:
#. ``cd`` into the experiment directory that has been created after generating the regional workflow (:numref:`Section %s <GenerateWorkflow>`). For example, from ``ush``, presuming default directory settings:

.. code-block:: console

@@ -1180,11 +1180,43 @@

before running the wrapper scripts.

#. Copy the wrapper scripts from the ``ush`` directory into the experiment directory. Each workflow task has a wrapper script that sets environment variables and runs the job script.
#. Copy the wrapper scripts from the ``ush`` directory into the experiment directory. Each workflow task has a wrapper script that sets environment variables and runs the job script. If ``SRW=<path/to>/ufs-srweather-app``, then

.. code-block:: console

cp <path/to>/ufs-srweather-app/ush/wrappers/* .
cp $SRW/ush/wrappers/* .

Substitute the shell-script headers in all ``run_*.sh`` scripts as follows, using ``gsed`` on MacOS or ``sed`` on Linux:

.. code-block:: console

gsed -i -e "s|\/bin\/sh|\/usr\/bin\/env bash|" run_*.sh # MacOS

.. code-block:: console

sed -i -e "s|\/bin\/sh|\/usr\/bin\/env bash|" run_*.sh # Linux

Note that if you attempt to copy-paste the above commands, the double quotes in the ``sed`` or ``gsed`` commands may not be copied properly to your terminal window. It is safer to retype the double quotes in the terminal. Alternatively, you can use a text editor to replace the shell-script header (the first line of each script) with ``#!/usr/bin/env bash``.
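
To check that the substitution behaves as intended, the following self-contained sketch applies the same ``sed`` command to a throwaway file in a temporary directory (the file name ``run_demo.sh`` is a stand-in, not a real wrapper script; on MacOS, use ``gsed`` in place of ``sed``):

```shell
# Create a scratch file with the original /bin/sh header.
tmp=$(mktemp -d)
printf '#!/bin/sh\necho hello\n' > "$tmp/run_demo.sh"

# Same substitution as above: swap /bin/sh for /usr/bin/env bash.
sed -i -e "s|\/bin\/sh|\/usr\/bin\/env bash|" "$tmp"/run_*.sh

# The first line should now read: #!/usr/bin/env bash
header=$(head -n 1 "$tmp/run_demo.sh")
echo "$header"
```

Remove the scratch directory afterwards with ``rm -rf "$tmp"``.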

#. Substitutions of Bash script headers are also needed for scripts in the ``$SRW/jobs`` and ``$SRW/scripts`` directories. As in the previous step, the substitution can be done with the stream editor ``gsed`` on MacOS, ``sed`` on Linux, or any text editor.

.. code-block:: console

gsed -i -e "s|\/bin\/bash|\/usr\/bin\/env bash|" $SRW/jobs/JREGIONAL_* # MacOS
gsed -i -e "s|\/bin\/bash|\/usr\/bin\/env bash|" $SRW/scripts/exregional* # MacOS

.. code-block:: console

sed -i -e "s|\/bin\/bash|\/usr\/bin\/env bash|" $SRW/jobs/JREGIONAL_* # Linux
sed -i -e "s|\/bin\/bash|\/usr\/bin\/env bash|" $SRW/scripts/exregional* # Linux

#. Set the stack size ``ulimit`` to a soft resource limit, using either ``sed`` on Linux or ``gsed`` on MacOS as in the previous steps:

.. code-block:: console

ulimit -S -s unlimited
gsed -i -e "s|ulimit \-s|ulimit \-S \-s|" $SRW/scripts/exregional* # MacOS
sed -i -e "s|ulimit \-s|ulimit \-S \-s|" $SRW/scripts/exregional* # Linux

#. Set the ``OMP_NUM_THREADS`` variable.
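
   For example, a minimal setting that runs each task single-threaded (the value ``1`` is an illustration; choose a value appropriate for your node):

```shell
# Assumption: single-threaded tasks; increase for nodes with more cores.
export OMP_NUM_THREADS=1
echo "$OMP_NUM_THREADS"
```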

@@ -1257,7 +1289,7 @@ Users can access log files for specific tasks in the ``$EXPTDIR/log`` directory.
If any of the scripts return an error that "Primary job terminated normally, but one process returned a non-zero exit code," there may not be enough space on one node to run the process. On an HPC system, the user will need to allocate a(nother) compute node. The process for doing so is system-dependent, and users should check the documentation available for their HPC system. Instructions for allocating a compute node on NOAA HPC systems can be viewed in :numref:`Section %s <WorkOnHPC>` as an example.

.. note::
On most HPC systems, users will need to submit a batch job to run multi-processor jobs. On some HPC systems, users may be able to run the first two jobs (serial) on a login node/command-line. Example scripts for Slurm (Hera) and PBS (Cheyenne) resource managers are provided (``sq_job.sh`` and ``qsub_job.sh``, respectively). These examples will need to be adapted to each user's system. Alternatively, some batch systems allow users to specify most of the settings on the command line (with the ``sbatch`` or ``qsub`` command, for example).
On most HPC systems, users will need to submit a batch job to run multi-processor jobs. On some HPC systems, users may be able to run the first jobs (serial) on a login node/command-line. Example scripts for Slurm (Hera) and PBS (Cheyenne) resource managers are provided (``sq_job.sh`` and ``qsub_job.sh``, respectively). These examples will need to be adapted to each user's system. Alternatively, some batch systems allow users to specify most of the settings on the command line (with the ``sbatch`` or ``qsub`` command, for example).
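
   As a rough sketch of what such an adaptation might look like, a Slurm wrapper in the spirit of ``sq_job.sh`` could resemble the following (the account, QOS, and wall-clock values are placeholders, not release defaults):

```shell
#!/usr/bin/env bash
#SBATCH --account=<your_account>   # placeholder: your HPC allocation
#SBATCH --qos=batch
#SBATCH --nodes=1
#SBATCH --time=00:30:00            # adjust to the task being run
./run_make_grid.sh                 # one wrapper task, as an example
```

   Submit the script with ``sbatch``, or pass equivalent settings directly on the ``sbatch`` command line.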


