diff --git a/docs/clusters/Ige/ige-calcul1.md b/docs/clusters/Ige/ige-calcul1.md index db7528c..624fea0 100644 --- a/docs/clusters/Ige/ige-calcul1.md +++ b/docs/clusters/Ige/ige-calcul1.md @@ -3,9 +3,9 @@ # IGE clusters -IGE computing servers are ige-calcul1, ige-calcul2, ige-calcul3, ige-calcul4 +IGE computing servers are ige-calcul1, ige-calcul2, ige-calcul3, ige-calcul4. -You can replace calcul1 by calcul2, calcul3 or calcul4 in the following documentation according to your use +You can replace calcul1 with calcul2, calcul3 or calcul4 in the following documentation according to your use. ## Slurm @@ -13,39 +13,44 @@ You can replace calcul1 by calcul2, calcul3 or calcul4 in the following document Slurm is an open-source workload manager/scheduler for the Discovery cluster. Slurm is basically the intermediary between the Login nodes and compute nodes. Hence, the Slurm scheduler is the gateway for the users on the login nodes to submit work/jobs to the compute nodes for processing. -The [official documentation for slurm](https://slurm.schedmd.com/quickstart.html) +See the [official documentation for slurm](https://slurm.schedmd.com/quickstart.html).
## Connection to the server -Before using slurm, make sure that your are able to connect to the server +Before using slurm, make sure that you are able to connect to the server: ``` ssh your_agalan_login@ige-calcul1.u-ga.fr ``` -If you want to connect without using a password and from outside the lab, add these 4 lines to the file $HOME/.ssh/config (create it if you don't have it) +If you want to connect without using a password and from outside the lab, add these 4 lines to the file $HOME/.ssh/config (create it if you don't have it): ``` Host calcul1 -ProxyCommand ssh -qX your_agalan_login@ige-ssh.u-ga.fr nc -w 60 ige-calcul1.u-ga.fr 22 -User your_agalan_login -GatewayPorts yes -``` -then you should create and copy your ssh keys to the server + ProxyCommand ssh -qX your_agalan_login@ige-ssh.u-ga.fr nc -w 60 ige-calcul1.u-ga.fr 22 + User your_agalan_login + GatewayPorts yes ``` + +Then you should create and copy your ssh keys to the server: + +```bash ssh-keygen -t rsa (type Enter twice without providing a password) -ssh-copy-id your_agalan_login@ige-ssh.u-ga.fr +ssh-copy-id your_agalan_login@ige-ssh.u-ga.fr ssh-copy-id calcul1 ``` -Now, you should be able to connect without any password -``` + +Now, you should be able to connect without any password: + +```bash ssh calcul1 ``` -Then you should ask for a storage space and a slurm account +Then you should ask for a storage space and a slurm account. Available slurm accounts are: + ``` cryodyn meom @@ -57,9 +62,11 @@ ecrins ice3 chianti ``` -Please send and email to `mondher.chekki@uXXXX-gYYYY-aZZZZ.fr OR ige-support@uXXXX-gYYYY-aZZZZ.fr, asking for storage under /workdir and a slurm account by providing the name of your team and the space you need (1G,10G,100G,1TB) -## Available softwares +Please send an email to `mondher.chekki@uXXXX-gYYYY-aZZZZ.fr` or `ige-support@uXXXX-gYYYY-aZZZZ.fr`, asking for storage under /workdir and a slurm account by providing the name of your team and the space you need (1G, 10G, 100G, 1TB).
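Once an account has been created for you, a quick way to see which slurm accounts your user belongs to is `sacctmgr` (shipped with slurm). This is a sketch, not part of the official instructions; it degrades gracefully on machines where slurm is not installed:

```shell
# List the slurm accounts associated with your user (run on ige-calcul1);
# prints a hint instead when slurm's sacctmgr is not available locally.
if command -v sacctmgr >/dev/null 2>&1; then
  sacctmgr show associations user="$USER" format=Account%20 --noheader
else
  echo "sacctmgr not found -- run this on ige-calcul1 after your account is created"
fi
```

On ige-calcul1 this should print one line per account you belong to (cryodyn, meom, etc.); anywhere else it prints the hint.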
+ + +## Available software ``` - NCO @@ -68,24 +75,24 @@ Please send and email to `mondher.chekki@uXX - NCVIEW - QGIS - MATLAB (through modules,i.e: module load matlab) - ``` -## Commands -| Command | Syntax | Description | -| ------------- |:-------------:|:-------------:| -| sbatch |```sbatch JOBSCRIPT``` |Submit a batch script to Slurm for processing. | -| squeue | ```squeue -u``` |Show information about your job(s) in the queue. The command when run without the -u flag, shows a list of your job(s) and all other jobs in the queue. | -| srun | ```srun -n $NBTASKS $EXE``` | Run jobs interactively on the cluster | -| srun | ```srun --mpi=pmix -n $NBTASKS $EXE``` | Run MPI jobs on the cluster | -| scancel | ```scancel JOBID``` | End or cancel a queued job. | -| sacct | ```sacct -j JOBID``` | Show information about current and previous jobs (cf 5. Job Accounting for example) | -| scontrol | ```scontrol show job JOBID``` | Show more details about a running job | -| sinfo | ```sinfo``` | Get information about the resources on available nodes that make up the HPC cluster | +## Commands + +| Command | Syntax | Description | +| ---------|:-------------:|:-------------:| +| sbatch | `sbatch JOBSCRIPT` | Submit a batch script to Slurm for processing. | +| squeue | `squeue -u $USER` | Show information about your job(s) in the queue. When run without the `-u` flag, the command shows your job(s) and all other jobs in the queue. | +| srun | `srun -n $NBTASKS $EXE` | Run jobs interactively on the cluster | +| srun | `srun --mpi=pmix -n $NBTASKS $EXE` | Run MPI jobs on the cluster | +| scancel | `scancel JOBID` | End or cancel a queued job. | +| sacct | `sacct -j JOBID` | Show information about current and previous jobs (cf 5. 
Job Accounting for example) | +| scontrol | `scontrol show job JOBID` | Show more details about a running job | +| sinfo | `sinfo` | Get information about the resources on available nodes that make up the HPC cluster | -## Job submission example +## Job submission example Consider you have a script in one of the programming languages such as Python, MatLab, C, Fortran , or Java. How would you execute it using Slurm? @@ -93,17 +100,17 @@ The following section explains a step by step process to creating and submitting 1. Prepare your data/code/script -Copy your files to the server with rsync +Copy your files to the server with rsync: -``` +```bash rsync -rav YOUR_DIRECTORY calcul1:/workdir/your_slurm_account/your_agalan_login/ ``` -Then Write your python script or compile your fortran code +Then write your python script or compile your fortran code. **Example of Hello World in MPI `hello_mpi.f90`** -``` +```fortran PROGRAM hello_world_mpi include 'mpif.h' @@ -123,26 +130,29 @@ call MPI_FINALIZE(ierror) END PROGRAM ``` -Compile the code using mpif90 -``` +Compile the code using mpif90: + +```bash mpif90 -o hello_mpi hello_mpi.f90 ``` - +Now you have an executable hello_mpi that you can run using Slurm. 2. Create your submission job A job consists in two parts: **resource requests** and **job steps**. -**Resource requests** consist in a number of CPUs, computing expected duration, amounts of RAM or disk space, etc. -**Job steps** describe tasks that must be done, software which must be run. +* **Resource requests** consist of a number of CPUs, the expected computing duration, amounts of RAM or disk space, etc. +* **Job steps** describe the tasks that must be done and the software that must be run. The typical way of creating a job is to write a submission script. A submission script is a shell script. 
If they are prefixed with SBATCH, are understood by Slurm as parameters describing resource requests and other submissions options. You can get the complete list of parameters from the sbatch manpage man sbatch or sbatch -h. In this example, `job.sh` contains ressources request (lines starting with #SBATCH) and the run of the previous generated executable. -``` +```bash #!/bin/bash -#SBATCH -J helloMPI + +#SBATCH -J helloMPI #SBATCH --nodes=1 #SBATCH --ntasks=4 @@ -154,80 +164,82 @@ In this example, `job.sh` contains ressources request (lines starting with #SBAT #SBATCH --output helloMPI.%j.output #SBATCH --error helloMPI.%j.error - cd /workdir/$USER/ ## Run an MPI program - srun --mpi=pmix -N 1 -n 4 ./hello_mpi - -## Run a python script +## Run a python script # python script.py - ``` -```job.sh``` request 4 cores for 1 hour, along with 4000 MB of RAM, in the default queue. -The account is important in order to get statisticis about the number of CPU hours consumed within the account: -_make sure to be part of an acccount before submitting any jobs_ +`job.sh` requests 4 cores for 1 hour, along with 4000 MB of RAM, in the default queue. -When started, the job would run the hello_mpi program using 4 cores in parallel. -To run the `job.sh` script use ```sbatch``` command and ```squeue``` to see the state of the job -``` +The account is important in order to get statistics about the number of CPU hours consumed within the account: _make sure to be part of an account before submitting any jobs_. -When started, the job would run the hello_mpi program using 4 cores in parallel. To run the `job.sh` script use the `sbatch` command and `squeue` to see the state of the job: + +```bash chekkim@ige-calcul1:~$ sbatch job.sh Submitted batch job 51 chekkim@ige-calcul1:~$ squeue JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON) 51 calcul helloMPI chekkim R 0:02 1 ige-calcul1 ``` -3. Interactive mode - -For interactive mode you should use the srun/salloc commands +3. 
Interactive mode -For interactive mode you should use the srun/salloc commands +For interactive mode you should use the srun/salloc commands. -Either you get the ressources using **srun** followed by **--pty bash -i** -Then you can run any program you need +Either you get the resources using **srun** followed by **--pty bash -i**. Then you can run any program you need. +Or you use **srun** followed by **your program** and then it will allocate the resources, run the program and exit. An equivalent to the `job.sh` will be : - Run mpi hello example with 4 cores -```srun --mpi=pmix -n 4 -N 1 --account=cryodyn --mem=4000 --time=01:00:00 hello_mpi``` +```bash +srun --mpi=pmix -n 4 -N 1 --account=cryodyn --mem=4000 --time=01:00:00 hello_mpi +``` ==> This will run and exit once it is done or -```srun --mpi=pmix -n 4 -N 1 --account=cryodyn --mem=4000 --time=01:00:00 --pty bash -i -srun --mpi=pmix -n 4 -N 1 --account=cryodyn --mem=4000 --time=01:00:00 hello_mpi``` +```bash +srun --mpi=pmix -n 4 -N 1 --account=cryodyn --mem=4000 --time=01:00:00 --pty bash -i +srun --mpi=pmix -n 4 -N 1 --account=cryodyn --mem=4000 --time=01:00:00 hello_mpi +``` ==> keep the ressources even when the program is done - Run Qgis with 8 threads (graphic interface) -```srun --mpi=pmix -n 1 -c 8 -N 1 --account=cryodyn --mem=4000 --time=01:00:00 qgis``` +```bash +srun --mpi=pmix -n 1 -c 8 -N 1 --account=cryodyn --mem=4000 --time=01:00:00 qgis +``` - Run Jupyter notebook with 4 threads -```srun --mpi=pmix -n 1 -c 4 -N 1 --account=cryodyn --mem=4000 --time=01:00:00 jupyter notebook``` +```bash +srun --mpi=pmix -n 1 -c 4 -N 1 --account=cryodyn --mem=4000 --time=01:00:00 jupyter notebook +``` - Run matlab with 4 threads -```module load matlab/R2022b +```bash +module load matlab/R2022b srun --mpi=pmix -n 1 -c 4 -N 1 --account=cryodyn --mem=4000 --time=01:00:00 matlab -nodisplay -nosplash -nodesktop -r "MATLAB_command" -or +# or srun --mpi=pmix -n 1 
-c 4 -N 1 --account=cryodyn --mem=4000 --time=01:00:00 matlab -nodisplay -nosplash -nodesktop -batch "MATLAB_command" -or +# or srun --mpi=pmix -n 1 -c 4 -N 1 --account=cryodyn --mem=4000 --time=01:00:00 matlab -nodisplay -nosplash -nodesktop < test.m ``` - - Example of job_matlab.sh : + - Example of job_matlab.sh: -``` +```bash #!/bin/bash #SBATCH -J matlab @@ -245,36 +257,33 @@ srun --mpi=pmix -n 1 -c 4 -N 1 --account=cryodyn --mem=4000 --time=01:00:00 matl cd /workdir/$USER/ ## Run on Matlab - module load matlab/R2022b srun --mpi=pmix -n 1 -c 4 -N 1 matlab -nodisplay -nosplash -nodesktop -r "MATLAB_command" -or +# or srun --mpi=pmix -n 1 -c 4 -N 1 matlab -nodisplay -nosplash -nodesktop -batch "MATLAB_command" -or +# or srun --mpi=pmix -n 1 -c 4 -N 1 matlab -nodisplay -nosplash -nodesktop < test.m - ``` -4. For Python users +4. For Python users + +We recommend that you use [micromamba](https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html) instead of conda/miniconda. -We recommend that youuse [micromamba](https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html) instead of conda/miniconda - Micromamba is just faster than conda! - -Check [here](../../clusters/Tools/micromamba.md) how to set up your python environement with micromamba +Check [here](../../clusters/Tools/micromamba.md) how to set up your python environment with micromamba. -5. Job Accounting +5. Job Accounting -Interestingly, you can get near-realtime information about your running program (memory consumption, etc.) 
with the sstat command: -``` +```bash sstat -j JOBID ``` -It is possible to get informations and statistics about you job after they are finished using the **sacct/sreport** command (**sacct -e** for more help) +It is possible to get information and statistics about your jobs after they have finished using the **sacct/sreport** commands (**sacct -e** for more help): -``` +```bash chekkim@ige-calcul1:~$ sacct -j 51 --format="Account,JobID,JobName,NodeList,CPUTime,elapsed,MaxRSS,State%20" Account JobID JobName NodeList CPUTime MaxRSS State ---------- ------------ ---------- --------------- ---------- ---------- -------------------- diff --git a/docs/clusters/Tools/micromamba.md b/docs/clusters/Tools/micromamba.md index 784f51c..aac052a 100644 --- a/docs/clusters/Tools/micromamba.md +++ b/docs/clusters/Tools/micromamba.md @@ -3,8 +3,7 @@ # Micromamba 1. Download and Install micromamba - -``` +```bash cd $WORKDIR curl -Ls https://micro.mamba.pm/api/micromamba/linux-64/latest | tar -xvj bin/micromamba ./bin/micromamba shell init -s bash -p $WORKDIR/micromamba @@ -15,44 +14,36 @@ source ~/.bashrc WORKDIR is a large filesystem, do not use the HOME directory for installation ``` -2. Create an environment with python=3.10 - -``` +2. Create an environment with python=3.10 + +```bash micromamba create -n myenv python=3.10 -c conda-forge ``` +3. Activate the environment and install a package -3. Activate the environment and Install a package - -``` +```bash micromamba activate myenv - micromamba install YOUR_MODULE -c conda-forge ``` +> :warning: In a submission job, you will probably need to add the following to activate your environment: -```{warning} In a submission job, you will probably add the following to activate your account ``` - -``` +```bash . 
$WORKDIR/micromamba/etc/profile.d/micromamba.sh micromamba activate myenv - ``` **Example:** Create R environment and install R packages +```bash +# Create an environment with python=3.10 +micromamba create -n Renv python=3.10 -c conda-forge -- Create an environment with python=3.10 - ```micromamba create -n Renv python=3.10 -c conda-forge``` -- Activate the environment - ```micromamba activate Renv``` -- Install R+ netcdf package +# Activate the environment +micromamba activate Renv -``` -micromamba install r r-base r-essentials –c conda-forge +# Install R + netcdf packages +micromamba install r r-base r-essentials -c conda-forge micromamba install r-ncdf4 -c conda-forge ``` - - diff --git a/docs/clusters/Tools/vscode.md b/docs/clusters/Tools/vscode.md index 4e870d1..c4d2af5 100644 --- a/docs/clusters/Tools/vscode.md +++ b/docs/clusters/Tools/vscode.md @@ -1,47 +1,44 @@ (vscode)= -# Vscode +# Run vscode directly on dahu frontend or a dahu node -## Use vscode to run directly on a dahu or dahu node +Install the [Remote SSH extension](https://code.visualstudio.com/docs/remote/ssh). Ensure you have set up your `$HOME/.ssh/config` file and ssh keys so you can access dahu without any password. See [SSH-keys](../Gricad/dahu.md) for details. -Prior to this, you need first to set up the config file and the ssh keys so you can have acces to dahu without any password cf [SSH-keys](../Gricad/dahu.md) ## Run on dahu -**on dahu** +Open vscode on your local machine, and open a remote window to dahu (vscode should see your local ssh config file and so recognize the 'dahu' host). You will be connected to the dahu frontend, and you should be able to open your files, save them, and submit a job from the vscode terminal. -If you run vscode from you home directory, it has access to the config file so it sees the dahu config, just select dahu. 
You will be connected to dahu front node, and you should be to open your files/save them and submit a job from vscode terminal ## Run on a dahu node -**on dahu node** +If you want to edit or run code interactively in a job on dahu, you need some additional configuration. You can do this from a linux terminal (e.g. mobaxterm/putty) or directly from vscode. -Once you get the ressource on dahu, you will probably need to run your python code or R code interactively, either you are using a linux terminal(mobaxterm/putty) or you can do this using vscode specially if you have a windows machine - -Of course all **these steps are not necessary**, you can just open a new terminal once you are connect to dahu with vscode and use oarsub to request the ressource ans continue on the terminal - -The following steps are more adapted to people who are not familiar with linux editing text as vi/vim and would like to work in a windows style, directly from vscode +Of course all **these steps are not necessary**, you can just open a new terminal once you are connected to dahu with vscode and use oarsub to request the resources and continue on the terminal. +The following steps are more adapted to people who are not familiar with terminal-based text editing tools like vi/vim and would like to work in a GUI style, directly from vscode. **Config ssh-keys on dahu** -``` +```bash +# On dahu frontend ssh-keygen -t rsa ``` - -On your workstation: - -Put the ssh key from dahu in a VSCODE folder +On your workstation, put the ssh key from dahu in a VSCODE folder: + +```bash +# On your local machine +mkdir ~/VSCODE +cd ~/VSCODE +scp dahu:~/.ssh/id_rsa . 
-``` - - -**Run job** - - -You are now connected to Dahu and you need to run a job: - - ``` -login_agalan@f-dahu:~$ oarsub -k -i .ssh/id_rsa -I -l nodes=1/core=1,walltime=01:00:00 --project sno-elmerice + +**Run a job on dahu** + +Once connected to dahu, run an interactive job: + + ```bash +# On dahu frontend +login_agalan@f-dahu:~$ oarsub -k -i .ssh/id_rsa -I -l nodes=1/core=1,walltime=01:00:00 --project sno-elmerice [FAST] Adding fast resource constraints [PARALLEL] Small jobs (< 32 cores) restricted to tagged nodes [ADMISSION RULE] Modify resource description with type constraints @@ -52,25 +49,22 @@ Starting... Connect to OAR job 21106958 via the node dahu34 ``` - -On your workstation: -Add these lines to $HOME/.ssh/config file. Here make sure that the dahu node (**dahu34**) is the one assigned by OAR +Now, on your workstation, add these lines to your `$HOME/.ssh/config` file. **Make sure that the dahu node (here dahu34) matches the one assigned to your OAR job**. ``` -Host dahunode -ProxyCommand ssh dahu -W "dahu34:%p" -User oar -Port 6667 -IdentityFile ~/VSCODE/id_rsa -ForwardAgent no - ```` - -Make sure to change the name of node, each time you start a new connection , depending on the node you get, here it is dahu34 - -On Vscode, for remote access, it will ask you to choose a server name from the config file, choose **dahunode** -If you just need to acces to dahu, just select **dahu** instead of **dahunode** - -**Debuging issue:** - -In order to check if vscode is able to connect to a node , once you get the node, you can open a terminal from vscode and type "ssh dahunode", you should get the assigned node +Host dahunode + ProxyCommand ssh dahu -W "dahu34:%p" + User oar + Port 6667 + IdentityFile ~/VSCODE/id_rsa + ForwardAgent no +``` + +Make sure to change the name of the dahu node each time you start a new connection, depending on the node you get. Here it is dahu34. 
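Since the node name changes with every OAR job, editing the config by hand each time is error-prone. As a sketch (the `update_dahunode` helper is a local convention, not part of the official docs; it assumes the `dahunode` entry shown above), a small shell function can rewrite the node name in the ProxyCommand line for you:

```shell
# Usage: update_dahunode dahu42 [path-to-ssh-config]
# Rewrites the node name in the ProxyCommand line of the dahunode entry.
update_dahunode () {
  node="$1"
  config="${2:-$HOME/.ssh/config}"
  sed -i -e "s/ssh dahu -W \"dahu[0-9]*:%p\"/ssh dahu -W \"$node:%p\"/" "$config"
}
```

After `oarsub` tells you which node you got, run e.g. `update_dahunode dahu34` before opening the remote window in vscode.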
+ +On Vscode, for remote access, it will ask you to choose a server name from the config file, choose **dahunode**. If you just need access to dahu, select **dahu** instead of **dahunode**. + +### Debugging + +In order to check if vscode is able to connect to a node, once you get the node, you can open a terminal from vscode and type "ssh dahunode"; you should land on the assigned node. diff --git a/docs/computing-clusters.md b/docs/computing-clusters.md index 20e59ee..84b33a4 100644 --- a/docs/computing-clusters.md +++ b/docs/computing-clusters.md @@ -1,25 +1,26 @@ # Computing clusters - ## Computing for IGE users exclusively -If you need fast access to a small computing ressources or if you want to run matlab or other programs without any time limits, you can have access to IGE cluster, which was set up for that -The {ref}`following documentation` will guide you through the different steps you need to know to start computing immediately +If you need fast access to small computing resources, or if you want to run matlab or other programs without any time limits, you can have access to the IGE cluster, which was set up for that purpose. + +The {ref}`following documentation` will guide you through the different steps you need to know to start computing immediately. ## Computing for IGE/External users -Gricad infrastructure provides a lot of ressources for all the labs of Grenoble University. It includes CPU/GPU ressources +Gricad infrastructure provides a lot of resources for all the labs of Grenoble University. It includes CPU/GPU resources. + +For now the most used cluster is dahu, which provides CPU resources. -For now the most used cluster is dahu which is used for CPU ressources +Gricad is open for any student/researcher/engineer. If you have a contract with Grenoble University, you already have an internal account defined by your Agalan credentials. 
If you are outside the campus and even outside France, you will get an external account as long as you have an institutional email. -Gricad is open for any student/researcher/engineer. If you have a contract with Grenoble University, you have already an internal account defined by Agalan Credentials. If you are outside the campus and even outside France, you will get an external account as long as you have an institutional email -For both cases you will need to log in to a special interface and create an account anyway to join a project (more details follow) +In both cases, you will need to log in to a special interface and create an account to join a project (more details follow). -@IGE, we are a providing a {ref}`straighforward documentation`, to start computing rapidly +@IGE, we provide a {ref}`straighforward documentation` to start computing rapidly. -You can also have a look to [a more detailled documentation from Gricad](https://gricad-doc.univ-grenoble-alpes.fr/hpc/) +You can also have a look at [a more detailed documentation from Gricad](https://gricad-doc.univ-grenoble-alpes.fr/hpc/). -As for GPU ressources and IA computing, there is [another cluster named bigfoot](https://gricad-doc.univ-grenoble-alpes.fr/hpc/joblaunch/job_gpu/) +As for GPU resources and AI computing, there is [another cluster named bigfoot](https://gricad-doc.univ-grenoble-alpes.fr/hpc/joblaunch/job_gpu/). Some tools are described in the following pages : - {ref}`vscode` diff --git a/docs/getting-started.md b/docs/getting-started.md index 701071d..64c6dfa 100644 --- a/docs/getting-started.md +++ b/docs/getting-started.md @@ -4,19 +4,19 @@ Welcome to IGE! Here you can find useful information about how we conduct scientific analysis, and ressources to get you up to speed with your fellow colleagues. - - We generally use GNU/Linux and MacOS distributions, so a prerequisiste is to know basic Unix Shell commands. 
If it is not your case, have a look at this [tutorial](https://swcarpentry.github.io/shell-novice/) + - We generally use GNU/Linux and MacOS distributions, so a prerequisite is to know basic Unix Shell commands. If that is not your case, have a look at this [tutorial](https://swcarpentry.github.io/shell-novice/). - If you use a mainstream GNU/Linux distribution such as Ubuntu or Fedora, you can install most of the basic scientific software you may need (such as the NetCDF libraries) via the system's package manager (`apt-get`, `dnf`, etc.). If you are on MacOS, [brew](https://brew.sh/) is a good way to manage this task. - - A quick way to back-up and access your work (scripts, notebooks, text files, etc.) from anywhere is to create a [github account](https://github.com/) and to synchronize your work there. Learn how to do it [here](https://github.com/meom-group/tutos/blob/master/git-github.md) + - A quick way to back-up and access your work (scripts, notebooks, text files, etc.) from anywhere is to create a [github account](https://github.com/) and to synchronize your work there. Learn how to do it [here](https://github.com/meom-group/tutos/blob/master/git-github.md). - - If you work with Python, we recommend that you use [mamba](https://mamba.readthedocs.io/en/latest/user_guide/mamba.html) (a seamless/drop-in replacement for [conda](https://docs.conda.io/en/latest/)) to manage your libraries via environments. Learn how to do it [here](clusters/Tools/micromamba.md) + - If you work with Python, we recommend that you use [mamba](https://mamba.readthedocs.io/en/latest/user_guide/mamba.html) (a seamless/drop-in replacement for [conda](https://docs.conda.io/en/latest/)) to manage your libraries via environments. Learn how to do it [here](clusters/Tools/micromamba.md). - - Some like to use [jupyter notebooks](https://jupyter.org/) to have code and corresponding plots in the same document, along with some text, equations and/or pictures. 
Learn how to install and use them [here](jupyter.md) + - Some like to use [jupyter notebooks](https://jupyter.org/) to have code and corresponding plots in the same document, along with some text, equations and/or pictures. Learn how to install and use them [here](jupyter.md). - You may want to access some remote computers in order to have more ressources than on your personnal laptop, or to access specific datasets. For this you have several options: - - the lab servers ige-calcX allow you to access some CPUs and GPUs. Learn how to access and use them [here](clusters/Ige/ige-calcul1.md) - - the Grenoble meso computing center [GRICAD](https://gricad.univ-grenoble-alpes.fr/) for bigger computations. Learn how to use this resource [here](clusters/Gricad/dahu.md) - - the national supercomputers located at [IDRIS](http://www.idris.fr/), [CINES](https://www.cines.fr/) or [CEA](https://www-hpc.cea.fr/fr/complexe/tgcc-JoliotCurie.htm) for even bigger computations [tutorials to come soon] + - The lab servers ige-calcX allow you to access some CPUs and GPUs. Learn how to access and use them [here](clusters/Ige/ige-calcul1.md). + - The Grenoble meso computing center [GRICAD](https://gricad.univ-grenoble-alpes.fr/) for bigger computations. Learn how to use this resource [here](clusters/Gricad/dahu.md). + - The national supercomputers located at [IDRIS](http://www.idris.fr/), [CINES](https://www.cines.fr/) or [CEA](https://www-hpc.cea.fr/fr/complexe/tgcc-JoliotCurie.htm) for even bigger computations [tutorials to come soon]. -A list of commonly-used software and links to useful resources is available [here](https://github.com/meom-group/tutos/blob/master/software.md) +A list of commonly-used software and links to useful resources is available [here](https://github.com/meom-group/tutos/blob/master/software.md). 
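As a quick, hedged complement to the package-manager advice above (the tool names below are only examples), you can check which of the common scientific tools are already installed before reaching for `apt-get`, `dnf` or `brew`:

```shell
# Report which common scientific command-line tools are already installed;
# any tool reported "not found" can be added via your package manager.
for tool in ncdump cdo ncview git python3; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found at $(command -v "$tool")"
  else
    echo "$tool: not found"
  fi
done
```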
diff --git a/docs/index.md b/docs/index.md index 076c780..ffa5292 100644 --- a/docs/index.md +++ b/docs/index.md @@ -2,9 +2,9 @@ Welcome to IGE Computing ressources. -This is the entry point to our user documentation. The following pages list the resources available for the platform, both internal and external, and explain access, usage and tips. These may relate to the many topics covered by the group, such as data warehouses, development tools, computation servers, modeling workflow and so on. +This is the entry point to our user documentation. The following pages list the resources available for the platform, both internal and external, and explain access, usage and tips. These may relate to the many topics covered by the group, such as data warehouses, development tools, computation servers, modeling workflow and so on. ```{tableofcontents} ``` -Everything concerning the platform's organization and communication can be found on [this other repository](https://github.com/ige-calcul/private-docs). +Everything concerning the platform's organization and communication can be found on [this (private) repository](https://github.com/ige-calcul/private-docs).