Sirepo Jupyter GPU Environment

GPU-Jupyter

This is a Jupyter server environment with four GPUs available for development and experimentation. There are a few things to know about access and use.

Register with GitHub

Access GPU-Jupyter at https://gpu-jupyter.radiasoft.org/. You will be prompted to register with GitHub; please do so with your RadiaSoft-linked account.

Once your request has been approved, you can access GPU-Jupyter at the same link.

Because GPU-Jupyter is a shared server, you will see a folder with your name under /home/vagrant/jupyter/StaffScratch. It is strongly preferred that you work out of your assigned directory.

Join the Slack

Before using GPU-Jupyter, search for the #gpu_jupyter channel in Slack and join it. The channel topic lists the four available GPUs and their availability: (0) empty (1) empty (2) empty (3) empty

Since this server is used across the company, we reserve GPUs to avoid conflicting processes. Before running any applications or notebooks on the server, please check for an available GPU and reserve it by replacing 'none' with your name in the topic.

If none are available, feel free to message the channel to ask about availability. People usually know when they will be done with a GPU, and they occasionally forget to update the topic after finishing.

Setting GPU visibility

With a GPU reserved, the next step is to configure it as your visible GPU. To do so, set the CUDA_VISIBLE_DEVICES environment variable to the desired device ID.

Device IDs are labeled 0-3; the full list can be seen with the nvidia-smi command.
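
For example, the following nvidia-smi query (the fields chosen here are just one reasonable selection) lists each device's index, name, memory use, and utilization, which is also a quick way to confirm that the GPU you reserved is actually idle:

# Show every GPU's index, name, current memory use, and utilization as CSV.
# An idle, unreserved GPU should report near-zero memory use and 0% utilization.
nvidia-smi --query-gpu=index,name,memory.used,utilization.gpu --format=csv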

Set variable in script

At the top of your script, add the following:

export CUDA_VISIBLE_DEVICES=0

Running this command also sets the variable for the life of the current shell session.
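
As a minimal sketch, a batch-style script pinned to the reserved GPU might look like the following; train.py is a hypothetical placeholder, not a program provided on the server:

#!/bin/bash
# Pin all CUDA work launched from this script to GPU 0 (the device reserved in Slack).
export CUDA_VISIBLE_DEVICES=0
# Any CUDA-using program started below will see only that GPU.
python train.py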

Set variable on execution

To set the visible GPU on the command line, prefix the command with the assignment:

CUDA_VISIBLE_DEVICES=1 ./cuda_executable

This sets the variable for the lifespan of that particular executable invocation.
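
Since this is a Jupyter server, much of the work here is Python-based; the same pattern applies to a Python invocation (the script name below is only a placeholder):

# Only GPU 1 is visible to this process; the shell's environment is unchanged.
CUDA_VISIBLE_DEVICES=1 python train.py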

Running on multiple GPUs

With either method, CUDA_VISIBLE_DEVICES can be set to a comma-separated list of device IDs:

export CUDA_VISIBLE_DEVICES=0,1

CUDA_VISIBLE_DEVICES=0,1 ./cuda_executable
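
Note that inside the launched process, CUDA renumbers the visible devices starting from 0, so the first GPU in your list becomes device 0 as far as your code is concerned:

# The process sees two devices; physical GPUs 2 and 3 appear to it as devices 0 and 1.
CUDA_VISIBLE_DEVICES=2,3 ./cuda_executable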

Concluding use

When you are finished using GPU-Jupyter, please remember to shut down all kernels and terminals to prevent any notebooks or applications from unintentionally running processes on the GPU.

This should be done at the end of each day of use as well as when you are finished with the GPU entirely.

To shut down both kernels and terminals, click "Shutdown all" on the "Running terminals and kernels" tab in the left-hand menu. You may also shut down kernels by clicking "Shutdown all kernels" in the "Kernel" drop-down menu.
