Merge pull request #249 from SouthernMethodistUniversity/update_work
Update work
jrlagrone authored Nov 8, 2024
2 parents c840fc2 + f037fcc commit de23f88
Showing 5 changed files with 32 additions and 26 deletions.
6 changes: 3 additions & 3 deletions docs/about.md
@@ -23,12 +23,12 @@ their work.
| Total GPU Cores | 0 | 132,608 | 275,968 | 1,392,640 | 122,880 |
| Total Memory | 29.2 TB | 116.5 TB | 120 TB | 52.5 TB | 103 TB |
| Network Bandwidth | 20 Gb/s | 100 Gb/s | 100 Gb/s | 200 Gb/s | 200 Gb/s |
| Work Storage | None | None | 768 TB | 3.5 PB* | 3.5 PB* |
| Project Storage | None | None | 768 TB | 3.5 PB* | 3.5 PB* |
| Scratch Space | 1.4 PB | 1.4 PB | 2.8 PB | 750 TB | 3.5 PB |
| Archive Capabilities | No | Yes | Yes | No | No |
| Operating System | Scientific Linux 6 | CentOS 7 | CentOS 7 | Ubuntu 22.04 | Ubuntu 22.04 |

\* The 3.5 PB `Work Storage` is shared on M3 and the SuperPOD.
\* The 3.5 PB `Project Storage` is shared on M3 and the SuperPOD. It was formerly referred to as `Work Storage`, which has been deprecated and will be phased out beginning on January 15, 2025.

## ManeFrame III (M3)

@@ -62,7 +62,7 @@ their work.
| GPU Accelerator Cores | 1,392,640 |
| Total Memory | 52.5 TB |
| Interconnect Bandwidth | 10x200 Gb/s Infiniband Connections Per Node |
| Work Storage | 3.5 PB (Shared with M3) |
| Project Storage | 3.5 PB (Shared with M3) |
| Scratch Storage | 750 TB (Raw) |
| Operating System | Ubuntu 22.04 |

20 changes: 10 additions & 10 deletions docs/examples/conda/README.md
@@ -19,7 +19,7 @@ module load conda

### User Installation

You can also install your own versions of Conda in your `$WORK` or `$HOME` directory.
You can also install your own versions of Conda in your `$HOME` directory.
We recommend

- Micromamba: <https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html>
@@ -37,24 +37,24 @@ In most cases, you can source your shell profile to avoid having to log out.
For most users, this is `source ~/.bashrc`.

We additionally recommend that you disable Conda's auto-activate base
functionallity. By default, Conda will load a base environment, which can cause
functionality. By default, Conda will load a base environment, which can cause
issues with system dependencies. In particular, applications on
<https://hpc.m3.smu.edu> often behave in unexpected ways becuase it tries to
<https://hpc.m3.smu.edu> often behave in unexpected ways because they try to
use a Conda package instead of the correct system package.
The next two commands tell Conda to prefer to save packages and environments
in your `$WORK` directory so they don't take up space in your `$HOME`.
in your `$HOME` directory (you can specify other locations you have access to,
but performance is generally better in `$HOME`).

```bash
conda config --set auto_activate_base false
conda config --prepend envs_dirs $WORK/.conda/envs
conda config --prepend pkgs_dirs $WORK/.conda/pkgs
conda config --prepend envs_dirs $HOME/.conda/envs
conda config --prepend pkgs_dirs $HOME/.conda/pkgs
```
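
To confirm the settings took effect, you can ask Conda to print the relevant keys back. This is a quick sanity check, not part of the required setup:

```bash
# show the auto-activate flag and the directories Conda will search
conda config --show auto_activate_base envs_dirs pkgs_dirs
```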

## Creating Virtual Environments from the Command Line

For simple environments with a small number of packages, you can create an
environment named `conda_env` (or any name of your choosing) in your `$WORK`
directory with
environment named `conda_env` (or any name of your choosing) with

```bash
conda create -n conda_env python=3.9 package1 package2 package3
@@ -63,7 +63,7 @@ conda create -n conda_env python=3.9 package1 package2 package3
The `-n` flag tells Conda what to name the environment. Here, we request Python
version 3.9 along with `package1 package2 package3`, placeholders for the
packages you'd like to install (e.g. `numpy`, `tensorflow`, `pandas`, etc.). In
general, it is a good idea install all the packages at the same time becasue
general, it is a good idea to install all the packages at the same time because
Conda will do a better job of resolving dependencies.
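
Once the environment exists, a typical session looks like the sketch below; `conda_env` is just the illustrative name from the command above:

```bash
# activate the environment, confirm the interpreter, and deactivate when done
conda activate conda_env
python --version   # should report the requested 3.9.x
conda deactivate
```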

## Creating Virtual Environments From environment.yml File
@@ -103,7 +103,7 @@ The next section is `dependencies` and this is where you should list all of the
packages you would like to install. If you have packages that need to be
installed with `pip`, you should include `pip` in the dependencies as above and
you can list the specific packages like the above as `pip_package1`, etc.
and/or you can have all the `pip` packages in a `requirments.txt` file.
and/or you can have all the `pip` packages in a `requirements.txt` file.
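
If you keep the `pip` packages in a `requirements.txt` file but prefer to install them by hand, one common pattern is to activate the environment first so that `pip` targets it; the names below are the illustrative ones from earlier:

```bash
# install pip-managed packages into the active Conda environment
conda activate conda_env
pip install -r requirements.txt
```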

Once you have made the `environment.yml` file, you can create the environment
with:
14 changes: 10 additions & 4 deletions docs/policies/policies.md
@@ -18,8 +18,14 @@ Data older than 60 days will be purged (deleted) without warning.
`$SCRATCH` is intended as high-performance, temporary storage for jobs.

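To see which of your files are approaching the purge window, a standard `find` invocation works; this is a convenience sketch, not an official tool:

```bash
# list files under $SCRATCH that have not been modified in 60 or more days
find "$SCRATCH" -type f -mtime +60
```
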
If data is needed for a longer period of time, it should be stored in
`$WORK` (8 TB limit) or `$HOME` directories (200 GB). If those are also
insuffiencient, please contact us to discuss options.
ColdFront storage allocations (limits vary and require justification) or `$HOME` directories (200 GB). If those are also
insufficient, please contact us to discuss options.

:::{important} SMU does not currently have facilities for archival storage of large datasets.
Most of our HPC storage is redundant, but it is not (and cannot be) backed up.
Storage space is also limited, and current, active usage is prioritized.
Please [contact us](about/contact.md) to discuss your needs and potential options.
:::

## Account and Account Password Sharing Policy

@@ -30,6 +36,6 @@ one.

## Data Transfer Nodes

Access to data tranfer nodes is available by request and legitimate need.
These nodes are meant only for transferring large amounts data to/from HPC resources.
Access to data transfer nodes is available by request to users with a legitimate need.
These nodes are meant only for transferring large amounts of data to/from HPC resources.
They should not be used for computational jobs.
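
A transfer through one of these nodes typically uses `rsync` or `scp`; the hostname and paths below are placeholders, not actual DTN addresses:

```bash
# copy a local directory to HPC storage via a data transfer node (illustrative)
rsync -avP results/ username@dtn.example.smu.edu:/scratch/users/username/results/
```
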
12 changes: 6 additions & 6 deletions motd/m3/cli_motd.txt
@@ -14,12 +14,12 @@ Help: [email protected] with "[HPC]" in subject line

Storage Locations:

Variable | Quota | Usage
------------ | ------- | -----------------------------------------------------
$HOME | 200 GB | Home directory, backed up
$WORK | 8 TB | Long term storage
$SCRATCH | 60 days | Temporary scratch space
$M2HOME | 200 GB | Read-only access to M2 $HOME (ending August 30, 2024)
Variable or Path | Quota | Usage
---------------- | ------- | ---------------------------------------------------
$HOME | 200 GB | Home directory, backed up
$WORK | 0 | Deprecated; phase-out begins January 15, 2025
$SCRATCH | 60 days | Temporary scratch space
/projects | varies | ColdFront storage allocations

*Do not* use login nodes or $HOME for calculations

6 changes: 3 additions & 3 deletions motd/m3/ood_motd.md
@@ -8,10 +8,10 @@
# Storage Locations and Quotas

* Home directories, `$HOME`, are backed up and have a default quota of 200 GB.
* Work directories, `$WORK`, are for longer term storage are *not* backed up
and have a default quota of 8 TB.
* Scratch directories, `$SCRATCH`, are for tempoary storage and files older
* Work directories, `$WORK`, are deprecated and are being phased out beginning January 15, 2025.
* Scratch directories, `$SCRATCH`, are for temporary storage and files older
than 60 days will be deleted.
* Project directories, `/projects`, are storage allocations associated with a project. Sizes vary based on need. Allocations are valid for one year and may be renewed based on need.

# Workshops and Events

