diff --git a/dockerhub-readme/generated-readmes/ngc-rapidsai-core-dev.md b/dockerhub-readme/generated-readmes/ngc-rapidsai-core-dev.md
index f2dbddca..095e8b07 100644
--- a/dockerhub-readme/generated-readmes/ngc-rapidsai-core-dev.md
+++ b/dockerhub-readme/generated-readmes/ngc-rapidsai-core-dev.md
@@ -39,7 +39,7 @@ This repo (rapidsai/rapidsai-core-dev), contains the following:
 The tag naming scheme for RAPIDS images incorporates key platform details into the tag as shown below:
 
 ```
-23.02-cuda11.8-devel-ubuntu18.04-py3.10
+23.02-cuda11.8-devel-ubuntu22.04-py3.10
  ^       ^      ^         ^        ^
  |       |     type       |   python version
  |       |                |
@@ -52,8 +52,8 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
 
 ## Prerequisites
 
 - NVIDIA Pascal™ GPU architecture or better
-- CUDA [11.2/11.4/11.5](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
-- Ubuntu 18.04/20.04 or CentOS 7 or Rocky Linux 8
+- CUDA [11.2/11.4/11.5/11.8](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
+- Ubuntu 20.04/22.04 or CentOS 7 or Rocky Linux 8
 - Docker CE v18+
 - [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)
@@ -63,16 +63,16 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
 
 #### Preferred - Docker CE v19+ and `nvidia-container-toolkit`
 ```bash
-$ docker pull nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+$ docker pull nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
-$ docker pull nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+$ docker pull nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 ### Container Ports
@@ -111,7 +111,7 @@ $ docker run \
     -p 8888:8888 \
     -p 8787:8787 \
     -p 8786:8786 \
-    nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 ### Bind Mounts
@@ -134,7 +134,7 @@ $ docker run \
     -it \
     --gpus all \
     -v $(pwd)/environment.yml:/opt/rapids/environment.yml \
-    nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 ### Use JupyterLab to Explore the Notebooks
@@ -162,14 +162,14 @@ You are free to modify the above steps. For example, you can launch an interacti
 ```bash
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 This will map data from your host operating system to the container OS in the `/rapids/my_data` directory. You may need to modify the provided notebooks for the new data paths.
diff --git a/dockerhub-readme/generated-readmes/ngc-rapidsai-core.md b/dockerhub-readme/generated-readmes/ngc-rapidsai-core.md
index c6ee281a..8c863790 100644
--- a/dockerhub-readme/generated-readmes/ngc-rapidsai-core.md
+++ b/dockerhub-readme/generated-readmes/ngc-rapidsai-core.md
@@ -39,7 +39,7 @@ The [rapidsai/rapidsai-core-dev](https://catalog.ngc.nvidia.com/orgs/nvidia/team
 The tag naming scheme for RAPIDS images incorporates key platform details into the tag as shown below:
 
 ```
-23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+23.02-cuda11.8-runtime-ubuntu22.04-py3.10
  ^       ^       ^         ^        ^
  |       |      type       |   python version
  |       |                 |
@@ -50,16 +50,16 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
-To get the latest RAPIDS version of a specific platform combination, simply exclude the RAPIDS version. For example, to pull the latest version of RAPIDS for the `runtime` image with support for CUDA 11.8, Python 3.10, and Ubuntu 18.04, use the following tag:
+To get the latest RAPIDS version of a specific platform combination, simply exclude the RAPIDS version. For example, to pull the latest version of RAPIDS for the `runtime` image with support for CUDA 11.8, Python 3.10, and Ubuntu 22.04, use the following tag:
 
 ```
-cuda11.8-runtime-ubuntu18.04
+cuda11.8-runtime-ubuntu22.04
 ```
 
-Many users do not need a specific platform combination but would like to ensure they're getting the latest version of RAPIDS, so as an additional convenience, a tag named simply `latest` is also provided which is equivalent to `cuda11.8-runtime-ubuntu18.04-py3.10`.
+Many users do not need a specific platform combination but would like to ensure they're getting the latest version of RAPIDS, so as an additional convenience, a tag named simply `latest` is also provided which is equivalent to `cuda11.8-runtime-ubuntu22.04-py3.10`.
 
 ## Prerequisites
 
 - NVIDIA Pascal™ GPU architecture or better
-- CUDA [11.2/11.4/11.5](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
-- Ubuntu 18.04/20.04 or CentOS 7 or Rocky Linux 8
+- CUDA [11.2/11.4/11.5/11.8](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
+- Ubuntu 20.04/22.04 or CentOS 7 or Rocky Linux 8
 - Docker CE v18+
 - [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)
@@ -69,16 +69,16 @@ Many users do not need a specific platform combination but would like to ensure
 
 #### Preferred - Docker CE v19+ and `nvidia-container-toolkit`
 ```bash
-$ docker pull nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+$ docker pull nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
-$ docker pull nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+$ docker pull nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 ### Container Ports
@@ -117,7 +117,7 @@ $ docker run \
     -p 8888:8888 \
     -p 8787:8787 \
     -p 8786:8786 \
-    nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 ### Bind Mounts
@@ -140,7 +140,7 @@ $ docker run \
     -it \
     --gpus all \
     -v $(pwd)/environment.yml:/opt/rapids/environment.yml \
-    nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 ### Use JupyterLab to Explore the Notebooks
@@ -168,14 +168,14 @@ You are free to modify the above steps. For example, you can launch an interacti
 ```bash
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 This will map data from your host operating system to the container OS in the `/rapids/my_data` directory. You may need to modify the provided notebooks for the new data paths.
diff --git a/dockerhub-readme/generated-readmes/ngc-rapidsai-dev.md b/dockerhub-readme/generated-readmes/ngc-rapidsai-dev.md
index bdad0006..4a296eb6 100644
--- a/dockerhub-readme/generated-readmes/ngc-rapidsai-dev.md
+++ b/dockerhub-readme/generated-readmes/ngc-rapidsai-dev.md
@@ -39,7 +39,7 @@ This repo (rapidsai/rapidsai-dev), contains the following:
 The tag naming scheme for RAPIDS images incorporates key platform details into the tag as shown below:
 
 ```
-23.02-cuda11.8-devel-ubuntu18.04-py3.10
+23.02-cuda11.8-devel-ubuntu22.04-py3.10
  ^       ^      ^         ^        ^
  |       |     type       |   python version
  |       |                |
@@ -52,8 +52,8 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
 
 ## Prerequisites
 
 - NVIDIA Pascal™ GPU architecture or better
-- CUDA [11.2/11.4/11.5](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
-- Ubuntu 18.04/20.04 or CentOS 7 or Rocky Linux 8
+- CUDA [11.2/11.4/11.5/11.8](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
+- Ubuntu 20.04/22.04 or CentOS 7 or Rocky Linux 8
 - Docker CE v18+
 - [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)
@@ -63,16 +63,16 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
 
 #### Preferred - Docker CE v19+ and `nvidia-container-toolkit`
 ```bash
-$ docker pull nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+$ docker pull nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
-$ docker pull nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+$ docker pull nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 ### Container Ports
@@ -111,7 +111,7 @@ $ docker run \
     -p 8888:8888 \
     -p 8787:8787 \
     -p 8786:8786 \
-    nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 ### Bind Mounts
@@ -134,7 +134,7 @@ $ docker run \
     -it \
     --gpus all \
     -v $(pwd)/environment.yml:/opt/rapids/environment.yml \
-    nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 ### Use JupyterLab to Explore the Notebooks
@@ -162,14 +162,14 @@ You are free to modify the above steps. For example, you can launch an interacti
 ```bash
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 This will map data from your host operating system to the container OS in the `/rapids/my_data` directory. You may need to modify the provided notebooks for the new data paths.
diff --git a/dockerhub-readme/generated-readmes/ngc-rapidsai.md b/dockerhub-readme/generated-readmes/ngc-rapidsai.md
index 461c97a2..035d6908 100644
--- a/dockerhub-readme/generated-readmes/ngc-rapidsai.md
+++ b/dockerhub-readme/generated-readmes/ngc-rapidsai.md
@@ -39,7 +39,7 @@ The [rapidsai/rapidsai-dev](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/rap
 The tag naming scheme for RAPIDS images incorporates key platform details into the tag as shown below:
 
 ```
-23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+23.02-cuda11.8-runtime-ubuntu22.04-py3.10
  ^       ^       ^         ^        ^
  |       |      type       |   python version
  |       |                 |
@@ -50,16 +50,16 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
-To get the latest RAPIDS version of a specific platform combination, simply exclude the RAPIDS version. For example, to pull the latest version of RAPIDS for the `runtime` image with support for CUDA 11.8, Python 3.10, and Ubuntu 18.04, use the following tag:
+To get the latest RAPIDS version of a specific platform combination, simply exclude the RAPIDS version. For example, to pull the latest version of RAPIDS for the `runtime` image with support for CUDA 11.8, Python 3.10, and Ubuntu 22.04, use the following tag:
 
 ```
-cuda11.8-runtime-ubuntu18.04
+cuda11.8-runtime-ubuntu22.04
 ```
 
-Many users do not need a specific platform combination but would like to ensure they're getting the latest version of RAPIDS, so as an additional convenience, a tag named simply `latest` is also provided which is equivalent to `cuda11.8-runtime-ubuntu18.04-py3.10`.
+Many users do not need a specific platform combination but would like to ensure they're getting the latest version of RAPIDS, so as an additional convenience, a tag named simply `latest` is also provided which is equivalent to `cuda11.8-runtime-ubuntu22.04-py3.10`.
 
 ## Prerequisites
 
 - NVIDIA Pascal™ GPU architecture or better
-- CUDA [11.2/11.4/11.5](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
-- Ubuntu 18.04/20.04 or CentOS 7 or Rocky Linux 8
+- CUDA [11.2/11.4/11.5/11.8](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
+- Ubuntu 20.04/22.04 or CentOS 7 or Rocky Linux 8
 - Docker CE v18+
 - [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)
@@ -69,16 +69,16 @@ Many users do not need a specific platform combination but would like to ensure
 
 #### Preferred - Docker CE v19+ and `nvidia-container-toolkit`
 ```bash
-$ docker pull nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+$ docker pull nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
-$ docker pull nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+$ docker pull nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 ### Container Ports
@@ -117,7 +117,7 @@ $ docker run \
     -p 8888:8888 \
     -p 8787:8787 \
     -p 8786:8786 \
-    nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 ### Bind Mounts
@@ -140,7 +140,7 @@ $ docker run \
     -it \
     --gpus all \
     -v $(pwd)/environment.yml:/opt/rapids/environment.yml \
-    nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 ### Use JupyterLab to Explore the Notebooks
@@ -168,14 +168,14 @@ You are free to modify the above steps. For example, you can launch an interacti
 ```bash
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 This will map data from your host operating system to the container OS in the `/rapids/my_data` directory. You may need to modify the provided notebooks for the new data paths.
diff --git a/dockerhub-readme/generated-readmes/rapidsai-core-dev-nightly.md b/dockerhub-readme/generated-readmes/rapidsai-core-dev-nightly.md
index 38938dee..a9f30142 100644
--- a/dockerhub-readme/generated-readmes/rapidsai-core-dev-nightly.md
+++ b/dockerhub-readme/generated-readmes/rapidsai-core-dev-nightly.md
@@ -46,7 +46,7 @@ This repo (rapidsai/rapidsai-core-dev-nightly), contains the following:
 The tag naming scheme for RAPIDS images incorporates key platform details into the tag as shown below:
 
 ```
-23.04-cuda11.8-devel-ubuntu18.04-py3.10
+23.04-cuda11.8-devel-ubuntu22.04-py3.10
  ^       ^      ^         ^        ^
  |       |     type       |   python version
  |       |                |
@@ -59,8 +59,8 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
 
 ## Prerequisites
 
 - NVIDIA Pascal™ GPU architecture or better
-- CUDA [11.2/11.4/11.5](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
-- Ubuntu 18.04/20.04 or CentOS 7 or Rocky Linux 8
+- CUDA [11.2/11.4/11.5/11.8](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
+- Ubuntu 20.04/22.04 or CentOS 7 or Rocky Linux 8
 - Docker CE v18+
 - [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)
@@ -70,16 +70,16 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
 
 #### Preferred - Docker CE v19+ and `nvidia-container-toolkit`
 ```bash
-$ docker pull rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
-$ docker pull rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 ### Container Ports
@@ -118,7 +118,7 @@ $ docker run \
     -p 8888:8888 \
     -p 8787:8787 \
     -p 8786:8786 \
-    rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 ### Bind Mounts
@@ -141,7 +141,7 @@ $ docker run \
     -it \
     --gpus all \
     -v $(pwd)/environment.yml:/opt/rapids/environment.yml \
-    rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 ### Use JupyterLab to Explore the Notebooks
@@ -169,14 +169,14 @@ You are free to modify the above steps. For example, you can launch an interacti
 ```bash
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 This will map data from your host operating system to the container OS in the `/rapids/my_data` directory. You may need to modify the provided notebooks for the new data paths.
diff --git a/dockerhub-readme/generated-readmes/rapidsai-core-dev.md b/dockerhub-readme/generated-readmes/rapidsai-core-dev.md
index 20a70122..4dd1b8fb 100644
--- a/dockerhub-readme/generated-readmes/rapidsai-core-dev.md
+++ b/dockerhub-readme/generated-readmes/rapidsai-core-dev.md
@@ -39,7 +39,7 @@ This repo (rapidsai/rapidsai-core-dev), contains the following:
 The tag naming scheme for RAPIDS images incorporates key platform details into the tag as shown below:
 
 ```
-23.02-cuda11.8-devel-ubuntu18.04-py3.10
+23.02-cuda11.8-devel-ubuntu22.04-py3.10
  ^       ^      ^         ^        ^
  |       |     type       |   python version
  |       |                |
@@ -52,8 +52,8 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
 
 ## Prerequisites
 
 - NVIDIA Pascal™ GPU architecture or better
-- CUDA [11.2/11.4/11.5](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
-- Ubuntu 18.04/20.04 or CentOS 7 or Rocky Linux 8
+- CUDA [11.2/11.4/11.5/11.8](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
+- Ubuntu 20.04/22.04 or CentOS 7 or Rocky Linux 8
 - Docker CE v18+
 - [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)
@@ -63,16 +63,16 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
 
 #### Preferred - Docker CE v19+ and `nvidia-container-toolkit`
 ```bash
-$ docker pull rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
-$ docker pull rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 ### Container Ports
@@ -111,7 +111,7 @@ $ docker run \
     -p 8888:8888 \
     -p 8787:8787 \
     -p 8786:8786 \
-    rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 ### Bind Mounts
@@ -134,7 +134,7 @@ $ docker run \
     -it \
     --gpus all \
     -v $(pwd)/environment.yml:/opt/rapids/environment.yml \
-    rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 ### Use JupyterLab to Explore the Notebooks
@@ -162,14 +162,14 @@ You are free to modify the above steps. For example, you can launch an interacti
 ```bash
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```
 
 This will map data from your host operating system to the container OS in the `/rapids/my_data` directory. You may need to modify the provided notebooks for the new data paths.
diff --git a/dockerhub-readme/generated-readmes/rapidsai-core-nightly.md b/dockerhub-readme/generated-readmes/rapidsai-core-nightly.md
index e3bb943d..073b1643 100644
--- a/dockerhub-readme/generated-readmes/rapidsai-core-nightly.md
+++ b/dockerhub-readme/generated-readmes/rapidsai-core-nightly.md
@@ -46,7 +46,7 @@ The [rapidsai/rapidsai-core-dev-nightly](https://hub.docker.com/r/rapidsai/rapid
 The tag naming scheme for RAPIDS images incorporates key platform details into the tag as shown below:
 
 ```
-23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+23.04-cuda11.8-runtime-ubuntu22.04-py3.10
  ^       ^       ^         ^        ^
  |       |      type       |   python version
  |       |                 |
@@ -57,16 +57,16 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
-To get the latest RAPIDS version of a specific platform combination, simply exclude the RAPIDS version. For example, to pull the latest version of RAPIDS for the `runtime` image with support for CUDA 11.8, Python 3.10, and Ubuntu 18.04, use the following tag:
+To get the latest RAPIDS version of a specific platform combination, simply exclude the RAPIDS version. For example, to pull the latest version of RAPIDS for the `runtime` image with support for CUDA 11.8, Python 3.10, and Ubuntu 22.04, use the following tag:
 
 ```
-cuda11.8-runtime-ubuntu18.04-py3.10
+cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
-Many users do not need a specific platform combination but would like to ensure they're getting the latest version of RAPIDS, so as an additional convenience, a tag named simply `latest` is also provided which is equivalent to `cuda11.8-runtime-ubuntu18.04-py3.10`.
+Many users do not need a specific platform combination but would like to ensure they're getting the latest version of RAPIDS, so as an additional convenience, a tag named simply `latest` is also provided which is equivalent to `cuda11.8-runtime-ubuntu22.04-py3.10`.
 
 ## Prerequisites
 
 - NVIDIA Pascal™ GPU architecture or better
-- CUDA [11.2/11.4/11.5](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
-- Ubuntu 18.04/20.04 or CentOS 7 or Rocky Linux 8
+- CUDA [11.2/11.4/11.5/11.8](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
+- Ubuntu 20.04/22.04 or CentOS 7 or Rocky Linux 8
 - Docker CE v18+
 - [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)
@@ -76,16 +76,16 @@ Many users do not need a specific platform combination but would like to ensure
 
 #### Preferred - Docker CE v19+ and `nvidia-container-toolkit`
 ```bash
-$ docker pull rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
-$ docker pull rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 ### Container Ports
@@ -124,7 +124,7 @@ $ docker run \
     -p 8888:8888 \
     -p 8787:8787 \
     -p 8786:8786 \
-    rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 ### Bind Mounts
@@ -147,7 +147,7 @@ $ docker run \
     -it \
     --gpus all \
     -v $(pwd)/environment.yml:/opt/rapids/environment.yml \
-    rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 ### Use JupyterLab to Explore the Notebooks
@@ -175,14 +175,14 @@ You are free to modify the above steps. For example, you can launch an interacti
 ```bash
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 This will map data from your host operating system to the container OS in the `/rapids/my_data` directory. You may need to modify the provided notebooks for the new data paths.
diff --git a/dockerhub-readme/generated-readmes/rapidsai-core.md b/dockerhub-readme/generated-readmes/rapidsai-core.md
index d29be923..6342462d 100644
--- a/dockerhub-readme/generated-readmes/rapidsai-core.md
+++ b/dockerhub-readme/generated-readmes/rapidsai-core.md
@@ -39,7 +39,7 @@ The [rapidsai/rapidsai-core-dev](https://hub.docker.com/r/rapidsai/rapidsai-core
 The tag naming scheme for RAPIDS images incorporates key platform details into the tag as shown below:
 
 ```
-23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+23.02-cuda11.8-runtime-ubuntu22.04-py3.10
  ^       ^       ^         ^        ^
  |       |      type       |   python version
  |       |                 |
@@ -50,16 +50,16 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
-To get the latest RAPIDS version of a specific platform combination, simply exclude the RAPIDS version. For example, to pull the latest version of RAPIDS for the `runtime` image with support for CUDA 11.8, Python 3.10, and Ubuntu 18.04, use the following tag:
+To get the latest RAPIDS version of a specific platform combination, simply exclude the RAPIDS version. For example, to pull the latest version of RAPIDS for the `runtime` image with support for CUDA 11.8, Python 3.10, and Ubuntu 22.04, use the following tag:
 
 ```
-cuda11.8-runtime-ubuntu18.04-py3.10
+cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
-Many users do not need a specific platform combination but would like to ensure they're getting the latest version of RAPIDS, so as an additional convenience, a tag named simply `latest` is also provided which is equivalent to `cuda11.8-runtime-ubuntu18.04-py3.10`.
+Many users do not need a specific platform combination but would like to ensure they're getting the latest version of RAPIDS, so as an additional convenience, a tag named simply `latest` is also provided which is equivalent to `cuda11.8-runtime-ubuntu22.04-py3.10`.
 
 ## Prerequisites
 
 - NVIDIA Pascal™ GPU architecture or better
-- CUDA [11.2/11.4/11.5](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
-- Ubuntu 18.04/20.04 or CentOS 7 or Rocky Linux 8
+- CUDA [11.2/11.4/11.5/11.8](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
+- Ubuntu 20.04/22.04 or CentOS 7 or Rocky Linux 8
 - Docker CE v18+
 - [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)
@@ -69,16 +69,16 @@ Many users do not need a specific platform combination but would like to ensure
 
 #### Preferred - Docker CE v19+ and `nvidia-container-toolkit`
 ```bash
-$ docker pull rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
-$ docker pull rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 ### Container Ports
@@ -117,7 +117,7 @@ $ docker run \
     -p 8888:8888 \
     -p 8787:8787 \
     -p 8786:8786 \
-    rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 ### Bind Mounts
@@ -140,7 +140,7 @@ $ docker run \
     -it \
     --gpus all \
     -v $(pwd)/environment.yml:/opt/rapids/environment.yml \
-    rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 ### Use JupyterLab to Explore the Notebooks
@@ -168,14 +168,14 @@ You are free to modify the above steps. For example, you can launch an interacti
 ```bash
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```
 
 This will map data from your host operating system to the container OS in the `/rapids/my_data` directory. You may need to modify the provided notebooks for the new data paths.
diff --git a/dockerhub-readme/generated-readmes/rapidsai-dev-nightly.md b/dockerhub-readme/generated-readmes/rapidsai-dev-nightly.md
index 54831089..8928ef30 100644
--- a/dockerhub-readme/generated-readmes/rapidsai-dev-nightly.md
+++ b/dockerhub-readme/generated-readmes/rapidsai-dev-nightly.md
@@ -46,7 +46,7 @@ This repo (rapidsai/rapidsai-dev-nightly), contains the following:
 The tag naming scheme for RAPIDS images incorporates key platform details into the tag as shown below:
 ```
-23.04-cuda11.8-devel-ubuntu18.04-py3.10
+23.04-cuda11.8-devel-ubuntu22.04-py3.10
   ^      ^      ^        ^          ^
   |      |     type      |   python version
   |      |               |
@@ -59,8 +59,8 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
 ## Prerequisites

 - NVIDIA Pascal™ GPU architecture or better
-- CUDA [11.2/11.4/11.5](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
-- Ubuntu 18.04/20.04 or CentOS 7 or Rocky Linux 8
+- CUDA [11.2/11.4/11.5/11.8](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
+- Ubuntu 20.04/22.04 or CentOS 7 or Rocky Linux 8
 - Docker CE v18+
 - [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)

@@ -70,16 +70,16 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t

 #### Preferred - Docker CE v19+ and `nvidia-container-toolkit`
 ```bash
-$ docker pull rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 ```

 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
-$ docker pull rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 ```

 ### Container Ports

@@ -118,7 +118,7 @@ $ docker run \
     -p 8888:8888 \
     -p 8787:8787 \
     -p 8786:8786 \
-    rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 ```

 ### Bind Mounts

@@ -141,7 +141,7 @@ $ docker run \
     -it \
     --gpus all \
     -v $(pwd)/environment.yml:/opt/rapids/environment.yml \
-    rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 ```

 ### Use JupyterLab to Explore the Notebooks

@@ -169,14 +169,14 @@ You are free to modify the above steps. For example, you can launch an interacti
 ```bash
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 ```

 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-dev-nightly:23.04-cuda11.8-devel-ubuntu22.04-py3.10
 ```

 This will map data from your host operating system to the container OS in the `/rapids/my_data` directory. You may need to modify the provided notebooks for the new data paths.
diff --git a/dockerhub-readme/generated-readmes/rapidsai-dev.md b/dockerhub-readme/generated-readmes/rapidsai-dev.md
index 9ca34051..582ce1b0 100644
--- a/dockerhub-readme/generated-readmes/rapidsai-dev.md
+++ b/dockerhub-readme/generated-readmes/rapidsai-dev.md
@@ -39,7 +39,7 @@ This repo (rapidsai/rapidsai-dev), contains the following:
 The tag naming scheme for RAPIDS images incorporates key platform details into the tag as shown below:
 ```
-23.02-cuda11.8-devel-ubuntu18.04-py3.10
+23.02-cuda11.8-devel-ubuntu22.04-py3.10
   ^      ^      ^        ^          ^
   |      |     type      |   python version
   |      |               |
@@ -52,8 +52,8 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
 ## Prerequisites

 - NVIDIA Pascal™ GPU architecture or better
-- CUDA [11.2/11.4/11.5](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
-- Ubuntu 18.04/20.04 or CentOS 7 or Rocky Linux 8
+- CUDA [11.2/11.4/11.5/11.8](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
+- Ubuntu 20.04/22.04 or CentOS 7 or Rocky Linux 8
 - Docker CE v18+
 - [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)

@@ -63,16 +63,16 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t

 #### Preferred - Docker CE v19+ and `nvidia-container-toolkit`
 ```bash
-$ docker pull rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```

 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
-$ docker pull rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```

 ### Container Ports

@@ -111,7 +111,7 @@ $ docker run \
     -p 8888:8888 \
     -p 8787:8787 \
     -p 8786:8786 \
-    rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```

 ### Bind Mounts

@@ -134,7 +134,7 @@ $ docker run \
     -it \
     --gpus all \
     -v $(pwd)/environment.yml:/opt/rapids/environment.yml \
-    rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```

 ### Use JupyterLab to Explore the Notebooks

@@ -162,14 +162,14 @@ You are free to modify the above steps. For example, you can launch an interacti
 ```bash
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```

 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu18.04-py3.10
+    rapidsai/rapidsai-dev:23.02-cuda11.8-devel-ubuntu22.04-py3.10
 ```

 This will map data from your host operating system to the container OS in the `/rapids/my_data` directory. You may need to modify the provided notebooks for the new data paths.
diff --git a/dockerhub-readme/generated-readmes/rapidsai-nightly.md b/dockerhub-readme/generated-readmes/rapidsai-nightly.md
index d2b8aefd..12a2ad6a 100644
--- a/dockerhub-readme/generated-readmes/rapidsai-nightly.md
+++ b/dockerhub-readme/generated-readmes/rapidsai-nightly.md
@@ -46,7 +46,7 @@ The [rapidsai/rapidsai-dev-nightly](https://hub.docker.com/r/rapidsai/rapidsai-d
 The tag naming scheme for RAPIDS images incorporates key platform details into the tag as shown below:
 ```
-23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+23.04-cuda11.8-runtime-ubuntu22.04-py3.10
   ^      ^       ^         ^          ^
   |      |      type       |    python version
   |      |                 |
@@ -57,16 +57,16 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
 To get the latest RAPIDS version of a specific platform combination, simply exclude the RAPIDS version. For example, to pull the latest version of RAPIDS for the `runtime` image with support for CUDA 11.8, Python 3.10, and Ubuntu 18.04, use the following tag:
 ```
-cuda11.8-runtime-ubuntu18.04-py3.10
+cuda11.8-runtime-ubuntu22.04-py3.10
 ```
-Many users do not need a specific platform combination but would like to ensure they're getting the latest version of RAPIDS, so as an additional convenience, a tag named simply `latest` is also provided which is equivalent to `cuda11.8-runtime-ubuntu18.04-py3.10`.
+Many users do not need a specific platform combination but would like to ensure they're getting the latest version of RAPIDS, so as an additional convenience, a tag named simply `latest` is also provided which is equivalent to `cuda11.8-runtime-ubuntu22.04-py3.10`.

 ## Prerequisites

 - NVIDIA Pascal™ GPU architecture or better
-- CUDA [11.2/11.4/11.5](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
-- Ubuntu 18.04/20.04 or CentOS 7 or Rocky Linux 8
+- CUDA [11.2/11.4/11.5/11.8](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
+- Ubuntu 20.04/22.04 or CentOS 7 or Rocky Linux 8
 - Docker CE v18+
 - [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)

@@ -76,16 +76,16 @@ Many users do not need a specific platform combination but would like to ensure

 #### Preferred - Docker CE v19+ and `nvidia-container-toolkit`
 ```bash
-$ docker pull rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 ```

 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
-$ docker pull rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 ```

 ### Container Ports

@@ -124,7 +124,7 @@ $ docker run \
     -p 8888:8888 \
     -p 8787:8787 \
     -p 8786:8786 \
-    rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 ```

 ### Bind Mounts

@@ -147,7 +147,7 @@ $ docker run \
     -it \
     --gpus all \
     -v $(pwd)/environment.yml:/opt/rapids/environment.yml \
-    rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 ```

 ### Use JupyterLab to Explore the Notebooks

@@ -175,14 +175,14 @@ You are free to modify the above steps. For example, you can launch an interacti
 ```bash
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 ```

 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai-nightly:23.04-cuda11.8-runtime-ubuntu22.04-py3.10
 ```

 This will map data from your host operating system to the container OS in the `/rapids/my_data` directory. You may need to modify the provided notebooks for the new data paths.
diff --git a/dockerhub-readme/generated-readmes/rapidsai.md b/dockerhub-readme/generated-readmes/rapidsai.md
index b2ea5836..97c5ad90 100644
--- a/dockerhub-readme/generated-readmes/rapidsai.md
+++ b/dockerhub-readme/generated-readmes/rapidsai.md
@@ -39,7 +39,7 @@ The [rapidsai/rapidsai-dev](https://hub.docker.com/r/rapidsai/rapidsai-dev/tags)
 The tag naming scheme for RAPIDS images incorporates key platform details into the tag as shown below:
 ```
-23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+23.02-cuda11.8-runtime-ubuntu22.04-py3.10
   ^      ^       ^         ^          ^
   |      |      type       |    python version
   |      |                 |
@@ -50,16 +50,16 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
 To get the latest RAPIDS version of a specific platform combination, simply exclude the RAPIDS version. For example, to pull the latest version of RAPIDS for the `runtime` image with support for CUDA 11.8, Python 3.10, and Ubuntu 18.04, use the following tag:
 ```
-cuda11.8-runtime-ubuntu18.04-py3.10
+cuda11.8-runtime-ubuntu22.04-py3.10
 ```
-Many users do not need a specific platform combination but would like to ensure they're getting the latest version of RAPIDS, so as an additional convenience, a tag named simply `latest` is also provided which is equivalent to `cuda11.8-runtime-ubuntu18.04-py3.10`.
+Many users do not need a specific platform combination but would like to ensure they're getting the latest version of RAPIDS, so as an additional convenience, a tag named simply `latest` is also provided which is equivalent to `cuda11.8-runtime-ubuntu22.04-py3.10`.

 ## Prerequisites

 - NVIDIA Pascal™ GPU architecture or better
-- CUDA [11.2/11.4/11.5](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
-- Ubuntu 18.04/20.04 or CentOS 7 or Rocky Linux 8
+- CUDA [11.2/11.4/11.5/11.8](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
+- Ubuntu 20.04/22.04 or CentOS 7 or Rocky Linux 8
 - Docker CE v18+
 - [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)

@@ -69,16 +69,16 @@ Many users do not need a specific platform combination but would like to ensure

 #### Preferred - Docker CE v19+ and `nvidia-container-toolkit`
 ```bash
-$ docker pull rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```

 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
-$ docker pull rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+$ docker pull rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```

 ### Container Ports

@@ -117,7 +117,7 @@ $ docker run \
     -p 8888:8888 \
     -p 8787:8787 \
     -p 8786:8786 \
-    rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```

 ### Bind Mounts

@@ -140,7 +140,7 @@ $ docker run \
     -it \
     --gpus all \
     -v $(pwd)/environment.yml:/opt/rapids/environment.yml \
-    rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```

 ### Use JupyterLab to Explore the Notebooks

@@ -168,14 +168,14 @@ You are free to modify the above steps. For example, you can launch an interacti
 ```bash
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```

 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu18.04-py3.10
+    rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
 ```

 This will map data from your host operating system to the container OS in the `/rapids/my_data` directory. You may need to modify the provided notebooks for the new data paths.
diff --git a/dockerhub-readme/templates/base.md.j2 b/dockerhub-readme/templates/base.md.j2
index 2e7f749c..7676ede6 100644
--- a/dockerhub-readme/templates/base.md.j2
+++ b/dockerhub-readme/templates/base.md.j2
@@ -65,7 +65,7 @@ The [rapidsai/{{ repo_name | br2devel }}]({% if not is_ngc %}https://hub.docker.
 The tag naming scheme for RAPIDS images incorporates key platform details into the tag as shown below:
 ```
-{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-ubuntu18.04-py{{DEFAULT_PYTHON_VERSION }}
+{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-{{ DEFAULT_LINUX_VERSION }}-py{{DEFAULT_PYTHON_VERSION }}
   ^      ^       ^         ^          ^
   |      |      type       |    python version
   |      |                 |
@@ -77,17 +77,17 @@ The tag naming scheme for RAPIDS images incorporates key platform details into t
 {% if is_br %}
 To get the latest RAPIDS version of a specific platform combination, simply exclude the RAPIDS version. For example, to pull the latest version of RAPIDS for the `runtime` image with support for CUDA {{ DEFAULT_CUDA_VERSION }}, Python {{ DEFAULT_PYTHON_VERSION }}, and Ubuntu 18.04, use the following tag:
 ```
-cuda{{ DEFAULT_CUDA_VERSION }}-runtime-ubuntu18.04{{ "-py"+DEFAULT_PYTHON_VERSION if not is_ngc }}
+cuda{{ DEFAULT_CUDA_VERSION }}-runtime-{{ DEFAULT_LINUX_VERSION }}{{ "-py"+DEFAULT_PYTHON_VERSION if not is_ngc }}
 ```
-Many users do not need a specific platform combination but would like to ensure they're getting the latest version of RAPIDS, so as an additional convenience, a tag named simply `latest` is also provided which is equivalent to `cuda{{ DEFAULT_CUDA_VERSION }}-runtime-ubuntu18.04-py{{ DEFAULT_PYTHON_VERSION }}`.
+Many users do not need a specific platform combination but would like to ensure they're getting the latest version of RAPIDS, so as an additional convenience, a tag named simply `latest` is also provided which is equivalent to `cuda{{ DEFAULT_CUDA_VERSION }}-runtime-{{ DEFAULT_LINUX_VERSION }}-py{{ DEFAULT_PYTHON_VERSION }}`.
 {% endif %}

 ## Prerequisites

 - NVIDIA Pascal™ GPU architecture or better
-- CUDA [11.2/11.4/11.5](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
-- Ubuntu 18.04/20.04 or CentOS 7 or Rocky Linux 8
+- CUDA [11.2/11.4/11.5/11.8](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
+- Ubuntu 20.04/22.04 or CentOS 7 or Rocky Linux 8
 - Docker CE v18+
 - [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)

@@ -97,16 +97,16 @@ Many users do not need a specific platform combination but would like to ensure

 #### Preferred - Docker CE v19+ and `nvidia-container-toolkit`
 ```bash
-$ docker pull {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-ubuntu18.04-py{{ DEFAULT_PYTHON_VERSION }}
+$ docker pull {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-{{ DEFAULT_LINUX_VERSION }}-py{{ DEFAULT_PYTHON_VERSION }}
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-ubuntu18.04-py{{ DEFAULT_PYTHON_VERSION }}
+    {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-{{ DEFAULT_LINUX_VERSION }}-py{{ DEFAULT_PYTHON_VERSION }}
 ```

 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
-$ docker pull {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-ubuntu18.04-py{{ DEFAULT_PYTHON_VERSION }}
+$ docker pull {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-{{ DEFAULT_LINUX_VERSION }}-py{{ DEFAULT_PYTHON_VERSION }}
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
-    {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-ubuntu18.04-py{{ DEFAULT_PYTHON_VERSION }}
+    {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-{{ DEFAULT_LINUX_VERSION }}-py{{ DEFAULT_PYTHON_VERSION }}
 ```

 ### Container Ports

@@ -150,7 +150,7 @@ $ docker run \
     -p 8888:8888 \
     -p 8787:8787 \
     -p 8786:8786 \
-    {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-ubuntu18.04-py{{ DEFAULT_PYTHON_VERSION }}
+    {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-{{ DEFAULT_LINUX_VERSION }}-py{{ DEFAULT_PYTHON_VERSION }}
 ```

 ### Bind Mounts

@@ -173,7 +173,7 @@ $ docker run \
     -it \
     --gpus all \
     -v $(pwd)/environment.yml:/opt/rapids/environment.yml \
-    {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-ubuntu18.04-py{{ DEFAULT_PYTHON_VERSION }}
+    {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-{{ DEFAULT_LINUX_VERSION }}-py{{ DEFAULT_PYTHON_VERSION }}
 ```

 ### Use JupyterLab to Explore the Notebooks

@@ -201,14 +201,14 @@ You are free to modify the above steps. For example, you can launch an interacti
 ```bash
 $ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-ubuntu18.04-py{{ DEFAULT_PYTHON_VERSION }}
+    {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-{{ DEFAULT_LINUX_VERSION }}-py{{ DEFAULT_PYTHON_VERSION }}
 ```

 #### Legacy - Docker CE v18 and `nvidia-docker2`
 ```bash
 $ docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
     -v /path/to/host/data:/rapids/my_data \
-    {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-ubuntu18.04-py{{ DEFAULT_PYTHON_VERSION }}
+    {{ ngc_prefix if is_ngc }}rapidsai/{{ repo_name }}:{{ repo_rapids_version }}-cuda{{ DEFAULT_CUDA_VERSION }}-{{ repo_default_img_type }}-{{ DEFAULT_LINUX_VERSION }}-py{{ DEFAULT_PYTHON_VERSION }}
 ```

 This will map data from your host operating system to the container OS in the `/rapids/my_data` directory. You may need to modify the provided notebooks for the new data paths.
diff --git a/settings.yaml b/settings.yaml
index 44d53b1b..1ebc8fb5 100644
--- a/settings.yaml
+++ b/settings.yaml
@@ -2,6 +2,7 @@
 # Default Docker build arguments
 DEFAULT_PYTHON_VERSION: "3.10"
 DEFAULT_CUDA_VERSION: "11.8"
+DEFAULT_LINUX_VERSION: "ubuntu22.04"
 DEFAULT_STABLE_RAPIDS_VERSION: "23.02"
 DEFAULT_NIGHTLY_RAPIDS_VERSION: "23.04"
 DEFAULT_NEXT_RAPIDS_VERSION: "23.06"
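For reviewers of this patch: every tag touched above follows the fixed scheme `<rapids>-cuda<cuda>-<type>-<os>-py<python>` that the READMEs document. As an illustrative aid only (this helper is not part of the repo, and the regex is an assumption based on the tags shown in the diff), the scheme can be sketched as:

```python
import re

# Hypothetical helper: split a RAPIDS image tag such as
# "23.02-cuda11.8-runtime-ubuntu22.04-py3.10" into its documented fields.
TAG_RE = re.compile(
    r"^(?P<rapids>\d{2}\.\d{2})-"    # RAPIDS version (e.g. 23.02)
    r"cuda(?P<cuda>[\d.]+)-"         # CUDA version (e.g. 11.8)
    r"(?P<type>runtime|devel)-"      # image type
    r"(?P<os>ubuntu[\d.]+)-"         # linux version (this PR's change)
    r"py(?P<python>[\d.]+)$"         # python version
)

def parse_tag(tag: str) -> dict:
    """Return the tag's components, or raise ValueError if it doesn't match."""
    m = TAG_RE.match(tag)
    if m is None:
        raise ValueError(f"unrecognized tag: {tag!r}")
    return m.groupdict()

if __name__ == "__main__":
    print(parse_tag("23.02-cuda11.8-runtime-ubuntu22.04-py3.10"))
```

This makes the substance of the diff easy to state: only the `os` field changes, from `ubuntu18.04` to the new `DEFAULT_LINUX_VERSION` of `ubuntu22.04`.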