Commit: bug fixes

astro-friedel committed Dec 10, 2024
1 parent 045182a, commit c03a33c

Showing 4 changed files with 26 additions and 24 deletions.
4 changes: 4 additions & 0 deletions .gitignore

@@ -121,3 +121,7 @@ ENV/
 
 # emacs buffers
 \#*
+
+docs/stubs/
+
+docs/1-parsl-introduction.ipynb
4 changes: 2 additions & 2 deletions docs/Makefile

@@ -28,8 +28,8 @@ help:
 @echo " epub to make an epub"
 @echo " epub3 to make an epub3"
 @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
-@echo " latexpdf to make LaTeX files and run them through pdflatex"
-@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
+@echo " latexpdf to make LaTeX files and run them through pdflatex (currently does not work)"
+@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx (currently does not work)"
 @echo " text to make text files"
 @echo " man to make manual pages"
 @echo " texinfo to make Texinfo files"
26 changes: 12 additions & 14 deletions docs/userguide/configuring.rst

@@ -85,7 +85,7 @@ How to Configure
 
 .. note::
    All configuration examples below must be customized for the user's allocation, Python environment,
-   file system, etc.
+   file system, etc.
 
 
 The configuration specifies what, and how, resources are to be used for executing the Parsl program
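For reference, a minimal sketch of the kind of ``Config`` object this part of configuring.rst describes, assuming a single local machine and the ``HighThroughputExecutor``; the executor label and block counts are illustrative and not taken from this diff:

.. code-block:: python

    from parsl.config import Config
    from parsl.executors import HighThroughputExecutor
    from parsl.providers import LocalProvider

    # Illustrative only: one executor running workers on the local machine.
    config = Config(
        executors=[
            HighThroughputExecutor(
                label="local_htex",          # hypothetical label
                provider=LocalProvider(
                    init_blocks=1,           # start one block of workers
                    max_blocks=1,            # and never scale beyond it
                ),
            )
        ]
    )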
@@ -185,7 +185,7 @@ Stepping through the following question should help formulate a suitable configu
 
 .. note::
    If using a Cray system, you most likely need to use the `parsl.launchers.AprunLauncher` to launch
-   workers unless you are on a **native Slurm** system like :ref:`configuring_nersc_cori`
+   workers unless you are on a **native Slurm** system like :ref:`configuring_nersc_cori`
 
 
 Heterogeneous Resources
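The Cray note in the hunk above implies a Slurm-scheduled machine whose workers are launched with ``aprun``. A hedged sketch of such a configuration, pairing `parsl.providers.SlurmProvider` with `parsl.launchers.AprunLauncher`; the partition, account, node count, and walltime are placeholders, not values from this diff:

.. code-block:: python

    from parsl.config import Config
    from parsl.executors import HighThroughputExecutor
    from parsl.launchers import AprunLauncher
    from parsl.providers import SlurmProvider

    # Sketch only: Slurm allocates the nodes, aprun places the workers on them.
    config = Config(
        executors=[
            HighThroughputExecutor(
                label="cray_htex",                 # hypothetical label
                provider=SlurmProvider(
                    partition="debug",             # placeholder partition
                    account="my_allocation",       # placeholder account
                    nodes_per_block=2,
                    init_blocks=1,
                    walltime="00:30:00",
                    launcher=AprunLauncher(),      # launch workers with aprun
                ),
            )
        ]
    )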
@@ -285,17 +285,17 @@ Then add the following to your config:
 .. note::
    There will be a noticeable delay the first time Work Queue sees an app; it is creating and
-   packaging a complete Python environment. This packaged environment is cached, so subsequent app
-   invocations should be much faster.
+   packaging a complete Python environment. This packaged environment is cached, so subsequent app
+   invocations should be much faster.
 
 Using this approach, it is possible to run Parsl applications on nodes that don't have Python
 available at all. The packaged environment includes a Python interpreter, and Work Queue does not
 require Python to run.
 
 .. note::
    The automatic packaging feature only supports packages installed via ``pip`` or ``conda``.
-   Importing from other locations (e.g. via ``$PYTHONPATH``) or importing other modules in the same
-   directory is not supported.
+   Importing from other locations (e.g. via ``$PYTHONPATH``) or importing other modules in the same
+   directory is not supported.
 
 
 Accelerators
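The Work Queue notes above describe automatic packaging of the Python environment. A hedged sketch of a configuration using `parsl.executors.WorkQueueExecutor`, assuming ``pack=True`` is the option that enables that packaging; the label, port, and provider choice are placeholders:

.. code-block:: python

    from parsl.config import Config
    from parsl.executors import WorkQueueExecutor
    from parsl.providers import LocalProvider

    # Sketch only: workers fetch a packaged Python environment from the manager,
    # so worker nodes do not need a matching Python installation.
    config = Config(
        executors=[
            WorkQueueExecutor(
                label="wq",                  # hypothetical label
                port=9123,                   # placeholder port for the Work Queue manager
                provider=LocalProvider(),
                pack=True,                   # assumed flag for automatic environment packaging
            )
        ]
    )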
@@ -454,10 +454,8 @@ connect to AWS.
 
 .. literalinclude:: ../../parsl/configs/ec2.py
 
 
-ASPIRE 1 (NSCC)
----------------
-
-.. image:: https://www.nscc.sg/wp-content/uploads/2017/04/ASPIRE1Img.png
+ASPIRE 1 (NSCC) (Decommissioned)
+--------------------------------
 
 The following snippet shows an example configuration for accessing NSCC's **ASPIRE 1** supercomputer.
 This example uses the `parsl.executors.HighThroughputExecutor` executor and connects to ASPIRE1's
@@ -635,13 +633,13 @@ Polaris uses `parsl.providers.PBSProProvider` and `parsl.launchers.MpiExecLaunch
 onto the HPC system.
 
 
-Stampede2 (TACC)
-----------------
+Stampede2 (TACC) (Decommissioned)
+---------------------------------
 
-.. image:: https://www.tacc.utexas.edu/documents/1084364/1413880/stampede2-0717.jpg/
+.. image:: https://tacc.utexas.edu/media/filer_public_thumbnails/filer_public/5d/7c/5d7cd2e7-b2a0-461c-9b91-ecb608e85884/stampede2.jpg__992x992_q85_subsampling-2.jpg
 
 The following snippet shows an example configuration for accessing TACC's **Stampede2**
-supercomputer. This example uses theHighThroughput executor and connects to Stampede2's Slurm
+supercomputer. This example uses the HighThroughput executor and connects to Stampede2's Slurm
 scheduler.
 
 .. literalinclude:: ../../parsl/configs/stampede2.py
16 changes: 8 additions & 8 deletions docs/userguide/monitoring.rst

@@ -66,8 +66,8 @@ example, if the full path to the database is ``/tmp/my_monitoring.db``, run::
 
    $ parsl-visualize sqlite:////tmp/my_monitoring.db
 
 By default, the visualization web server listens on ``127.0.0.1:8080``. If the web server is
-deployed on a machine with a web browser, the dashboard can be accessed in the browser at `
-`127.0.0.1:8080``. If the web server is deployed on a remote machine, such as the login node of a
+deployed on a machine with a web browser, the dashboard can be accessed in the browser at
+``127.0.0.1:8080``. If the web server is deployed on a remote machine, such as the login node of a
 cluster, you will need to use an ssh tunnel from your local machine to the cluster::
 
    $ ssh -L 50000:127.0.0.1:8080 username@cluster_address
@@ -76,8 +76,8 @@ This command will bind your local machine's port 50000 to the remote cluster's p
 The dashboard can then be accessed via the local machine's browser at ``127.0.0.1:50000``.
 
 .. warning:: Alternatively you can deploy the visualization server on a public interface. However,
-   first check that this is allowed by the cluster's security policy. The following example shows how
-   to deploy the web server on a public port (i.e., open to Internet via ``public_IP:55555``)::
+   first check that this is allowed by the cluster's security policy. The following example shows how
+   to deploy the web server on a public port (i.e., open to Internet via ``public_IP:55555``)::
 
    $ parsl-visualize --listen 0.0.0.0 --port 55555
 
@@ -107,17 +107,17 @@ times as well as task summary statistics. The workflow summary section is follow
 The workflow summary also presents three different views of the workflow:
 
 * Workflow DAG - with apps differentiated by colors: This visualization is useful to visually
-  inspect the dependency structure of the workflow. Hovering over the nodes in the DAG shows a tooltip
-  for the app represented by the node and it's task ID.
+  inspect the dependency structure of the workflow. Hovering over the nodes in the DAG shows a tooltip
+  for the app represented by the node and it's task ID.
 
 .. image:: ../images/mon_task_app_grouping.png
 
 * Workflow DAG - with task states differentiated by colors: This visualization is useful to identify
-  what tasks have been completed, failed, or are currently pending.
+  what tasks have been completed, failed, or are currently pending.
 
 .. image:: ../images/mon_task_state_grouping.png
 
 * Workflow resource usage: This visualization provides resource usage information at the workflow
-  level. For example, cumulative CPU/Memory utilization across workers over time.
+  level. For example, cumulative CPU/Memory utilization across workers over time.
 
 .. image:: ../images/mon_resource_summary.png
