Sirepo Jupyter GPU Environment
This is a Jupyter server environment with four GPUs available for development and experimentation. There are a few things to know regarding access and use.
Access GPU-Jupyter at https://gpu-jupyter.radiasoft.org/. You will be prompted to register with GitHub; please do so with your RadiaSoft-linked account.
Once your request has been approved, you can access GPU-Jupyter at the same link.
As GPU-Jupyter is a shared server, you will see a folder with your name under /home/vagrant/jupyter/StaffScratch. Please work out of your assigned directory.
Before using GPU-Jupyter, search for #gpu_jupyter and join the channel. Note that the channel topic lists the four GPUs and their availability: (0) empty (1) empty (2) empty (3) empty
Because this server is shared across the company, we reserve GPUs to avoid conflicting processes. Before running any applications or notebooks on the server, please check for an available GPU and reserve it by replacing 'none' with your name in the topic.
If none are available, feel free to message the channel to ask about availability. People usually know when they will be done with a GPU, and they occasionally forget to update the topic after finishing.
With a GPU reserved, the next step is to configure it as your visible GPU. To do so, set the CUDA_VISIBLE_DEVICES variable to the desired device ID.
Device IDs are labeled 0-3; the full list can be seen with the nvidia-smi command.
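As a quick check before reserving, you can list the device IDs and see whether anything is already running on them from a terminal. A minimal sketch (output details vary with the driver version):
nvidia-smi -L   # list the installed GPUs with their device IDs (0-3)
nvidia-smi      # show current utilization and any running processes on each GPU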
At the top of your script, add the following:
export CUDA_VISIBLE_DEVICES=0
Once run, this also sets the variable for the life of the current shell session.
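Since GPU-Jupyter is mostly used through notebooks, you can also set the variable directly from a notebook cell before importing any GPU libraries. A minimal sketch, assuming a standard IPython kernel:
%env CUDA_VISIBLE_DEVICES=0
Most CUDA-based frameworks only read the variable when they initialize the GPU, so run this in the first cell, before any GPU imports.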
To set the visible GPU on the command line, add it to the executable call:
CUDA_VISIBLE_DEVICES=1 ./cuda_executable
This sets the variable for the lifespan of that particular executable invocation.
With both methods, CUDA_VISIBLE_DEVICES can be set to a comma-separated list of device IDs:
export CUDA_VISIBLE_DEVICES=0,1
or
CUDA_VISIBLE_DEVICES=0,1 ./cuda_executable
Note that within the process the visible devices are renumbered starting from 0, so with CUDA_VISIBLE_DEVICES=2,3 your code addresses them as devices 0 and 1.
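As a quick sanity check that the restriction took effect, you can query the device count from the restricted process. A sketch assuming PyTorch is installed in the environment (an assumption; any CUDA-aware library exposes a similar query):
CUDA_VISIBLE_DEVICES=2,3 python3 -c "import torch; print(torch.cuda.device_count())"   # prints 2
The process only sees the two listed GPUs, renumbered as devices 0 and 1.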
When finished using GPU-Jupyter, please remember to shut down all kernels and terminals to prevent any notebooks or applications from unintentionally running processes on the GPU.
Do this both after daily use and after your final use of the GPU.
To shut down both kernels and terminals, click "Shutdown all" on the "Running terminals and kernels" tab in the left-hand menu. You may also shut down kernels by clicking "Shutdown all kernels" in the "Kernel" drop-down menu.
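To confirm that nothing is still running on the GPUs after shutting everything down, you can check from a terminal; the "Processes" table at the bottom of the output should no longer list any of your jobs:
nvidia-smi   # the Processes section should show no leftover compute processes from your work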
License: http://www.apache.org/licenses/LICENSE-2.0.html
Copyright © 2015–2020 RadiaSoft LLC. All Rights Reserved.